Author manuscript; available in PMC 2018 May 1. Published in final edited form as: J Public Health Manag Pract. 2017 May-Jun;23(3):269–275. doi: 10.1097/PHH.0000000000000376

Quality of HIV Testing Data Before and After the Implementation of a National Data Quality Assessment and Feedback System

John Beltrami 1, Guoshen Wang 1, Hussain R Usman 1, Lillian S Lin 1
PMCID: PMC5373976  NIHMSID: NIHMS773495  PMID: 26672404

Abstract

Context

In 2010, the Centers for Disease Control and Prevention (CDC) implemented a national data quality assessment and feedback system for CDC-funded HIV testing program data.

Objective

Our objective was to analyze data quality before and after feedback.

Design

Coinciding with required quarterly data submissions to CDC, each health department received data quality feedback reports and a call with CDC to discuss the reports. Data from 2008 to 2011 were analyzed.

Setting

Fifty-nine state and local health departments that were funded for comprehensive HIV prevention services.

Participants

Data collected by a service provider in conjunction with a client receiving HIV testing.

Intervention

National data quality assessment and feedback system.

Main Outcome Measures

Before and after intervention implementation, quality was assessed through the number of new test records reported and the percentage of data values that were neither missing nor invalid. Generalized estimating equations were used to assess the effect of feedback in improving the completeness of variables.

Results

Data were included from 44 health departments. The average number of new records per submission period increased from 197 907 before feedback implementation to 497 753 afterward. Completeness was high before and after feedback for race/ethnicity (99.3% vs 99.3%), current test results (99.1% vs 99.7%), prior testing and results (97.4% vs 97.7%), and receipt of results (91.4% vs 91.2%). Completeness improved for HIV risk (83.6% vs 89.5%), linkage to HIV care (56.0% vs 64.0%), referral to HIV partner services (58.9% vs 62.8%), and referral to HIV prevention services (55.3% vs 63.9%). Calls as part of feedback were associated with improved completeness for HIV risk (adjusted odds ratio [AOR] = 2.28; 95% confidence interval [CI], 1.75–2.96), linkage to HIV care (AOR = 1.60; 95% CI, 1.31–1.96), referral to HIV partner services (AOR = 1.73; 95% CI, 1.43–2.09), and referral to HIV prevention services (AOR = 1.74; 95% CI, 1.43–2.10).

Conclusions

Feedback contributed to increased data quality. CDC and health departments should continue monitoring the data and implement measures to improve variables of low completeness.

Keywords: data quality, feedback system, HIV testing


In 2010, the White House Office of National AIDS Policy released the National HIV/AIDS Strategy (NHAS), which is based on implementing combinations of effective, evidence-based approaches and HIV prevention interventions such as HIV testing.1 In 2011, the Centers for Disease Control and Prevention (CDC) released the High-Impact HIV Prevention (HIP) approach to support NHAS and maximize the impact of HIV prevention.2 HIV testing is considered the first step in the HIV continuum of care that includes linkage to care, retention in care, and adherence to medications, which leads to a decrease in viral load, decrease in HIV transmission to others, and ultimately a decrease in HIV incidence.3,4

HIV testing data from HIV prevention programs funded by CDC have been used to address NHAS and HIP.5,6 Program data are usually collected by a service provider in conjunction with delivery of a health service to a client and are not routinely collected or validated through standard surveillance or epidemiologic methods. Program data, however, provide useful information that is not otherwise available at the national or local levels for planning, policy, and decision making, which at times must be done urgently, before surveillance or epidemiologic data become available for use. CDC collects 2 types of HIV testing data from health department programs funded by CDC for comprehensive HIV prevention services provided by health departments and community-based organizations: tables of summary counts submitted to CDC by health departments via required progress reports, and national HIV prevention program monitoring and evaluation (NHM&E) data on individual HIV tests. Health departments are responsible for submitting files containing line-listed HIV testing data to CDC.

Regardless of the data source, data quality is important: higher-quality data increase the credibility of, and confidence in the use of, the data,7 allowing for better evidence-based program planning, policies, and decision making. CDC sought to implement a new HIV testing data quality assessment and feedback system that would monitor the number of records and the completeness of variables routinely analyzed and used by CDC. Consequently, in March 2010, CDC and 5 health departments piloted an NHM&E testing data quality assessment and feedback system that was more rigorous than prior data quality activities. The pilot showed that the new system was feasible and acceptable to CDC and the health departments. CDC then implemented a national system in June 2010 for the 59 health departments that were funded at that time for comprehensive HIV prevention services.

Standard information technology, data management, and epidemiologic tools and methods were used on an ongoing basis to monitor and improve the program data as part of a continuous quality improvement intervention8 to strengthen accountability and improve program performance.9 Underlying assumptions of this work are that providing health departments periodic quality assurance feedback leads to improved data quality10–14 and that rigorous monitoring and feedback are critical to program success and sustainability.15 The primary objective of this article is to analyze data quality before and after feedback.

Methods

Data quality assessment and feedback system

All NHM&E testing data files submitted to CDC underwent an initial data management check in a newly created SAS data set for the correct XML format, the 7 system-required variables, duplicate form identification numbers, updated records, and repeat records. The results of this process were provided to health departments as data quality feedback reports on a quarterly basis to coincide with required quarterly line-listed data submissions. The feedback focused on data integrity, timeliness, completeness of the variables of interest, and the number of records. Information provided included the reasons any file failed validation and, for files that passed validation, the number of records that were new, updated, or repeated; a history of each file submitted that included the number of records, the number of invalid records, and the number of cumulative records submitted; a listing of duplicate form identification numbers and related record-identifying variables; a listing of any missing system-required variables; the percentage of missing and invalid values for each variable of interest; record-identifying detail for each record with missing or invalid values; and the number of records and number of confirmed HIV-positive testing events for each quarter of the prior 3 years, with a space for health department staff to confirm each number. A file-by-file history of data submissions from each health department was also provided in each report.
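As a rough illustration of these intake checks, the Python sketch below validates one line-listed XML file for well-formedness, missing system-required variables, and duplicate form identification numbers. The element names and the required-variable list are hypothetical stand-ins, not the actual NHM&E schema, and the real SAS-based process checked more conditions (eg, updated and repeat records) than shown here.

```python
# Illustrative sketch only: element names and REQUIRED_VARS are hypothetical,
# not the actual NHM&E XML schema or its 7 system-required variables.
import xml.etree.ElementTree as ET
from collections import Counter

REQUIRED_VARS = ["form_id", "site_id", "agency_id", "test_date",
                 "test_technology", "test_result", "client_id"]

def validate_submission(path):
    """Return (passed, issues) for one line-listed XML submission file."""
    issues = []
    try:
        records = ET.parse(path).getroot().findall("record")
    except ET.ParseError as err:          # file fails XML validation
        return False, [f"invalid XML: {err}"]
    for i, rec in enumerate(records, start=1):
        missing = [v for v in REQUIRED_VARS if not rec.findtext(v)]
        if missing:                       # missing system-required variables
            issues.append(f"record {i}: missing {', '.join(missing)}")
    counts = Counter(rec.findtext("form_id") for rec in records)
    dupes = [fid for fid, n in counts.items() if fid and n > 1]
    if dupes:                             # duplicate form identification numbers
        issues.append(f"duplicate form IDs: {', '.join(dupes)}")
    return not issues, issues
```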

CDC scheduled conference calls with each health department to discuss the reports; we refer to these meetings as feedback calls. Feedback call attendees always included CDC and health department staff responsible for quality assurance and often included other evaluation and program staff members. In contrast to ad hoc technical assistance calls that health departments or CDC may initiate, these feedback calls were initiated by CDC; were routine, systematic, and standardized across all health departments; were more comprehensive; and were attended by a more multidisciplinary set of staff members. During the calls, information in the reports was reviewed for accuracy; reasons for poor quality were discussed; and related data management and quality assurance questions were addressed. Written summaries of calls, including the need for any follow-up, were drafted by CDC, circulated for comment by call attendees, finalized, and then shared with all attendees. CDC emphasized the importance of making the data as accurate and complete as possible in future data submissions. However, if health departments had the time, resources, and interest, they were encouraged to conduct quality assurance on previously submitted data and, if needed, receive ad hoc technical assistance from CDC.

Definitions

Data completeness was assessed for each variable of interest by determining the percentage of data values that were neither missing nor invalid. For this analysis, we analyzed only newly submitted records and considered low completeness to be less than 85%.16

Health departments were required to submit data quarterly each year on February 15, May 15, August 15, and November 15, although data were accepted by CDC at any time. For this analysis, each data submission period is a 3-month period that begins on the 15th of the month (ie, February 15 to May 14, May 15 to August 14, August 15 to November 14, and November 15 to February 14).
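As a concrete rendering of these two definitions, the sketch below computes completeness (the percentage of values neither missing nor invalid) and assigns a record date to its submission period. The column name and validity codes are hypothetical, since the actual NHM&E code sets are not reproduced in this article.

```python
import pandas as pd

# Hypothetical valid-code set for one variable; the real NHM&E code sets differ.
VALID_CODES = {"race_ethnicity": {"1", "2", "3", "4", "5", "6", "7"}}

def pct_complete(df: pd.DataFrame, var: str) -> float:
    """Percentage of values for `var` that are neither missing nor invalid."""
    ok = df[var].notna() & df[var].astype(str).isin(VALID_CODES[var])
    return 100 * ok.mean()

def submission_period_start(d: pd.Timestamp) -> pd.Timestamp:
    """Start date (Feb/May/Aug/Nov 15) of the 3-month period containing d."""
    ref = d if d.day >= 15 else d - pd.DateOffset(months=1)  # before the 15th -> prior period
    if ref.month < 2:  # January belongs to the period that began Nov 15 of the prior year
        return pd.Timestamp(ref.year - 1, 11, 15)
    return pd.Timestamp(ref.year, max(m for m in (2, 5, 8, 11) if m <= ref.month), 15)

# Example: December 1, 2010, falls in the period that began November 15, 2010.
print(submission_period_start(pd.Timestamp(2010, 12, 1)))  # 2010-11-15
```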

Inclusion criteria

NHM&E testing data for 11 data submission periods during August 2008 to May 2011 were included: 6 periods before feedback implementation, 4 afterward, and 1 period that had data before and after the month that feedback implementation started. Because a new HIV test form was introduced in January 2008 and only 6 health departments reported 8981 records during the first submission period (May 15, 2008–August 14, 2008), the first submission period for this analysis was August 15, 2008, to November 14, 2008. The last submission period was chosen because the subsequent submission period included data from a new reporting system, thereby precluding an assessment during a period of consistent reporting. Data were included from health departments that submitted data to CDC before and after feedback implementation and used a data system that documented whether records were new or updated.

Variables of interest

Variables of interest were those most frequently used at CDC for reports and ad hoc data requests: number of new test records, race/ethnicity, HIV risk (for records of males and females with a current positive test result), prior HIV testing and results, current HIV test result, receipt of current HIV test results, linkage to HIV medical care (for all records with a current positive test result), referral to Partner Services (for all records with a current positive test result), and referral to HIV prevention services (for all records with a current positive test result).

Data analysis

Bivariate and multivariable analyses were conducted to assess associations between feedback call receipt and completeness for the variables of interest. Pearson χ2 tests of association were conducted to assess whether CDC funding for expanded HIV testing (started in October 2007 for 25 health departments5) affected the association between having at least 1 call and the number of records reported. Generalized estimating equations17 were used to control for effects of covariates, clustering by testing site, and repeated measures on testing sites (which accounts for all data reported by a health department both before and after feedback implementation) and to assess the effect of feedback in improving completeness of variables of interest. Separate generalized estimating equation analyses were conducted for variables of interest that did not have high completeness before and after feedback.
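One plausible reconstruction of the χ2 comparison reported with Table 2, sketched with SciPy rather than the SAS the authors used: a test of whether the distribution of new records across the 4 post-feedback submission periods differed between the 1-call and at-least-2-calls groups. The counts come from Table 2; the exact construction of the authors' test is an assumption.

```python
from scipy.stats import chi2_contingency

# New records in submission periods 8-11 (counts from Table 2):
# row 1 = health departments with 1 feedback call; row 2 = at least 2 calls.
table = [[111_934, 61_536, 276_851, 3_879],
         [261_556, 162_878, 441_936, 670_444]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:,.0f}, df = {dof}, P = {p:.3g}")  # P < .001
```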

Each model included 6 covariates thought to be possibly associated with completeness of the variables of interest. Three health department–level covariates were expanded HIV testing funding (yes or no), US Census region of health department (ie, Northeast, Midwest, West, South, or other), and type of system used to submit data (automated system that scans data directly into a database or a nonscanning system). Three testing site–level covariates were HIV testing site type (health care, non–health care, correctional, and other), use of rapid HIV testing (yes or no), and type of testing (confidential or anonymous). In addition, to measure the effect of the calls as part of feedback, a covariate (yes or no) was added to indicate whether a submission immediately followed a feedback call, because health departments received feedback calls at different times. Adjusted odds ratios (AORs) were estimated by exponentiating regression coefficients. SAS version 9.3 (SAS Institute, Inc, Cary, North Carolina) was used for all statistical analyses. For variables with low completeness, summaries of conference calls were reviewed for contextual information.
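A minimal sketch of one such model, written with Python's statsmodels rather than the SAS the authors used: a logistic GEE with an exchangeable working correlation, clustered on testing site, with AORs obtained by exponentiating coefficients. All column names and the synthetic data are hypothetical; the sketch shows the model form, not the study's actual data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per test record, with hypothetical names
# for the outcome (variable complete: 1/0) and the covariates described above.
rng = np.random.default_rng(0)
n = 2000
records = pd.DataFrame({
    "site_id": rng.integers(0, 100, n),   # clustering unit: testing site
    "post_call": rng.integers(0, 2, n),   # submission immediately after a feedback call
    "expanded_funding": rng.integers(0, 2, n),
    "region": rng.choice(["Northeast", "Midwest", "South", "West", "Other"], n),
    "scanning_system": rng.integers(0, 2, n),
    "site_type": rng.choice(["health care", "non-health care", "correctional", "other"], n),
    "rapid_testing": rng.integers(0, 2, n),
    "anonymous_testing": rng.integers(0, 2, n),
})
records["complete"] = rng.binomial(1, 0.6 + 0.1 * records["post_call"])

model = smf.gee(
    "complete ~ post_call + expanded_funding + C(region) + scanning_system"
    " + C(site_type) + rapid_testing + anonymous_testing",
    groups="site_id",                     # repeated measures on testing sites
    data=records,
    family=sm.families.Binomial(),        # logistic model for completeness
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
aor = np.exp(res.params)                  # adjusted odds ratios
ci = np.exp(res.conf_int())               # 95% CIs on the odds ratio scale
print(aor["post_call"], ci.loc["post_call"].tolist())
```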

Human participant compliance statement

A review by an independent ethics committee was not needed for this activity, which is a public health program activity and not human subjects research.

Results

Data were included from 44 health departments* of the 59 health departments funded by CDC. Data from 8 health departments, which conducted a range of 17 700 to 409 464 tests in 2010,6 were excluded because they did not submit records both before and after feedback implementation. Data from an additional 7 health departments, which conducted a range of 2107 to 10 009 tests in 2010,6 were excluded because they used a data system that could not distinguish between new and updated records. The total number of new HIV testing records submitted during the 11 data submission periods by the remaining health departments was 3 930 212. The average number of new records submitted per submission period increased from 197 907 during the 6 submission periods before feedback implementation to 497 753 during the 4 submission periods after feedback implementation (Table 1).

TABLE 1.

Number of New HIV Testing Records Submitted by Submission Period, 44 Health Departments, 2008–2011

Submission Period                 Health Departments, n    New Records, n
1 (Aug 15, 2008–Nov 14, 2008)     14                        78 777
2 (Nov 15, 2008–Feb 14, 2009)     17                       159 977
3 (Feb 15, 2009–May 14, 2009)     26                       238 210
4 (May 15, 2009–Aug 14, 2009)     29                       291 127
5 (Aug 15, 2009–Nov 14, 2009)     23                        78 305
6 (Nov 15, 2009–Feb 14, 2010)     22                       341 048
7a (Feb 15, 2010–May 14, 2010)    37                       751 754
8 (May 15, 2010–Aug 14, 2010)     31                       373 490
9 (Aug 15, 2010–Nov 14, 2010)     32                       224 414
10 (Nov 15, 2010–Feb 14, 2011)    40                       718 787
11 (Feb 15, 2011–May 14, 2011)    32                       674 323
a Feedback began during this submission period.

All 44 health departments had at least 1 feedback call. During the 4 submission periods that included feedback calls from May 2010 to May 2011, the number and percentage of new records submitted by the 8 health departments with 1 feedback call decreased, and the number and percentage of new records submitted by the 36 health departments with at least 2 feedback calls increased (P < .001) (Table 2). This pattern persisted when comparing health departments having at least 3 feedback calls with health departments having 1 or 2 calls (data not shown). In addition, the effect of at least 1 feedback call on the number of records was statistically significant when stratified by expanded HIV testing funding status (P < .001; data not shown).

TABLE 2.

Number of New HIV Testing Records Submitted by Submission Period and Number of Feedback Calls, 44 Health Departments, 2010–2011

Submission Period                 One Feedback Call (N = 8 Health Departments), n (%)    At Least 2 Feedback Calls (N = 36 Health Departments), n (%)
8 (May 15, 2010–Aug 14, 2010)     111 934 (30)                                           261 556 (70)
9 (Aug 15, 2010–Nov 14, 2010)     61 536 (27)                                            162 878 (73)
10 (Nov 15, 2010–Feb 14, 2011)    276 851 (39)                                           441 936 (61)
11 (Feb 15, 2011–May 14, 2011)    3 879 (1)                                              670 444 (99)
Total                             454 200 (23)                                           1 536 814 (77)

Comparing completeness before and after feedback implementation, percentages were high before feedback and remained high afterward for race/ethnicity (99.3% vs 99.3%), current test results (99.1% vs 99.7%), prior testing and results (97.4% vs 97.7%), and receipt of results (91.4% vs 91.2%). Completeness for HIV risk (83.6% vs 89.5%) was low before feedback implementation and higher afterward. Completeness also improved with feedback implementation but remained low for linkage to HIV care (56.0% vs 64.0%), referral to partner services (58.9% vs 62.8%), and referral to HIV prevention services (55.3% vs 63.9%).

Based on statistical significance (P < .05), the following covariates contributed to increased completeness for the 4 variables that did not have high completeness before and after feedback: reporting from the Midwest or West and using a scanning system. For these 4 variables, AORs show that feedback calls were associated with improved completeness: HIV risk (2.28; 95% confidence interval [CI], 1.75–2.96), linkage to care (1.60; 95% CI, 1.31–1.96), referral to partner services (1.73; 95% CI, 1.43–2.09), and referral to prevention services (1.74; 95% CI, 1.43–2.10) (Table 3).

TABLE 3.

Association of Feedback That Includes Calls With Completeness of Variables of Interest, Adjusted for Covariates, 44 Health Departments, 2008–2011

Outcome                               Adjusted Odds Ratio (95% Confidence Interval)    P, χ2
HIV risk                              2.28 (1.75–2.96)                                 <.001
Linkage to HIV medical care           1.60 (1.31–1.96)                                 <.001
Referral to partner services          1.73 (1.43–2.09)                                 <.001
Referral to HIV prevention services   1.74 (1.43–2.10)                                 <.001

Reasons for poor data quality provided in the feedback call summaries were coded for the 3 linkage and referral variables. On the basis of the 111 feedback call summaries available for the 158 potential feedback calls (70%), the most frequently cited reasons for low completeness of the 3 linkage and referral variables were that staff responsible for completing the form needed training or related follow-up (10 health departments), data were in different databases or systems (6 health departments), data were not collected because of local policies or standard practices (5 health departments), staffing shortages occurred (3 health departments), and there was a reporting lag, that is, data were submitted before knowing whether someone was linked to care (3 health departments).

Discussion

This analysis focused on the effect of data quality feedback on CDC NHM&E testing data. High-quality data for the variables of interest are particularly important for use at the national and local levels. All of the variables that we analyzed are important for planning and evaluating HIV prevention interventions and policies. Race/ethnicity is frequently used to describe disparities; HIV risk is used to describe high-risk populations; receipt of results and linkage to care are critical parts of the HIV continuum of care; and referrals to HIV services are critical to help find and treat other infected persons and control the spread of HIV in a community.

Feedback contributed to an increase in the number of new records submitted to CDC and improved completeness of all variables of interest that were not already at a high level of completeness. However, completeness remained low for 3 variables, one of which is critical to monitoring the HIV continuum of care and meeting NHAS and HIP goals. The results of the multivariable analysis and the information from the feedback call summaries suggest that both health department–level and testing site–level factors contribute to variable completeness. More attention to these factors by CDC and health departments could further improve data quality, recognizing that some challenges can be readily addressed by the health department (eg, a need for more staff training or a policy change), whereas an HIV program may have less control over others (eg, laboratory reporting lags or data residing in different databases).

Before feedback implementation, 4 variables of interest were already at high completeness, which suggests that data feedback was not needed for these variables. However, all variables of interest should be monitored so as to know their levels of completeness and whether there are changes over time. Furthermore, knowing that certain variables are persistently at high completeness highlights an opportunity to focus on variables at low completeness. Data quality feedback helped improve the 4 other variables of interest, suggesting that improvement was relatively easy and thus worthwhile.

Of the 3 variables with persistent low completeness, we focus on linkage to care, given its importance as a component of the HIV continuum of care and a desired outcome for persons identified as newly diagnosed with HIV through partner services. NHM&E data show that the percentage of diagnosed persons linked to care (69%–75%)6,18,19 is similar to findings from other studies and reports (69%–80%),20–23 which suggests that the NHM&E data provide an accurate estimate of linkage to care at a national level. However, 64% completeness for this variable of interest is disconcerting, especially considering that 23 health departments had 50% or less completeness in 2010 (unpublished data from the database used for CDC6). If 23 health departments with test-level data truly have 50% or less completeness for linkage to care, then their data are unlikely to be reliable for assessing adequacy of linkage to care and monitoring NHAS and HIP goals at the local level. Is it possible that the health departments do have the data locally and are ensuring that clients’ needs and NHAS and HIP goals are being met but are experiencing challenges in submitting complete data to CDC? Knowing that 25 health departments were not able to successfully submit data representing all of their tests to CDC in test-level format6 suggests that this is possible. Fully understanding this challenge is beyond the purpose of this analysis and interpretation of results; however, CDC does address this matter with health departments on an ongoing basis.

In addition to the higher-quality data described, the data quality assessment and feedback system that includes calls had other benefits. For example, feedback often led to CDC-provided technical assistance, health department staff providing additional technical assistance and training to other local staff, and programmatic changes (eg, a health department hiring or reassigning staff to focus more on persons being linked to care and on data quality). This technical assistance is particularly important because the need for training was the most frequent reason for low completeness and because health care and other HIV testing providers are an important component of the HIV continuum of care. Providers involved with collecting and reporting data receive local training initially and may also benefit from technical assistance through the calls or the CDC NHM&E Service Center, which handles requests for technical assistance. In addition, the calls provided a better opportunity to understand the challenges that programs face, and previously submitted records (not part of this analysis) also showed improved data quality.

Two major developments related to feedback have since occurred. First, CDC and health departments decided that the number of data submissions required of the health departments would decrease from 4 to 2 per year, starting in the latter half of 2011. The main reasons for this change were the time and resource burdens on both health departments and CDC; as a result, the burden on staff at the national and local levels decreased. Second, this general approach to feedback is now used for all CDC HIV testing and non–HIV testing program data submitted by health departments and community-based organizations. In addition, the effect of feedback on improved completeness of the HIV testing data appears to have persisted. For example, in data from 2013, completeness had increased for linkage to care (from 64% to 76%), referral to partner services (from 63% to 81%), and referral to HIV prevention services (from 64% to 72%).24

We believe that this feedback system is applicable to settings other than HIV public health. Public health and other professionals who are considering a similar feedback system should weigh additional practical matters. Piloting the system before formal implementation is important, both to obtain understanding and support from all persons who will be involved or affected and to make final adjustments. For all persons involved, training before feedback implementation, and as needed afterward, is important. Documentation and easy access to stored reports are also important, because prior reports are often needed to help resolve or better understand a problem.

This analysis has several limitations. First, 8 health departments were excluded from the analysis because they did not submit new records before and after feedback implementation, so the results may not represent all health departments: 15% (8/52) of the eligible health departments were not included. Second, the results and data quality could be influenced by factors not specifically addressed by our methods, such as the relative effect of feedback calls versus reports, differences in the level of technical assistance at the state or local levels for either information technology or programmatic reasons, or health departments improving their reporting with the experience of additional submissions. Third, the assessment of record reporting was simple. With the data available at CDC, it was not possible to conduct a more accurate assessment of record reporting completeness. For example, a comparison of the line-listed and aggregate data at CDC was not pursued because these data represent different reporting periods. A more thorough assessment could, however, include an analysis of records available locally. Fourth, misclassification bias could occur with the coding of reasons for poor data quality. Fifth, the feedback call summaries may underreport challenges the health departments have, because summaries were available for only 70% (111/158) of the potential feedback calls and calls focused on the quality of data and resolving data-specific problems.

Practice Implications

Data feedback improves data quality, provides several additional benefits, and is an important public health activity, particularly with the release of new federal indicators in support of data quality and NHAS,25 the establishment by President Obama of the HIV Care Continuum Initiative to accelerate improvements in HIV prevention and care,26 and the recent release of an updated NHAS, which now has an indicator for linking 85% of newly diagnosed persons to HIV care within 1 month of diagnosis.27 Our findings show that health departments have the capacity to keep data quality high for variables already at a high level of completeness and to improve the quality of variables at lower levels of completeness. With new metrics and indicators, health departments will need to make changes to their systems for data collection, management, reporting, and quality assurance. Furthermore, work needs to be done at the local level to ensure that 85% of newly diagnosed persons will be linked to HIV care by 2020. Health departments and CDC should continue monitoring the data; devise and implement measures, including the use of calls, to improve data quality; and use the data for program planning, decision making, policies, and monitoring of NHAS and HIP goals at the national and local levels. Routine feedback provides a mechanism for improving data quality and enhancing communications with grantees about challenges. In this era of NHAS and HIP, decreased available funding, and increased expectations for accountability, quality improvement activities should be considered in an evidence-based manner and account for national and local resources, staff time, cost, outcomes, and public health impact.

Footnotes

* Alabama, Arizona, California, Colorado, Connecticut, Delaware, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Minnesota, Missouri, Montana, Nevada, New Hampshire, New Jersey, New Mexico, New York, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming, Puerto Rico, Chicago, District of Columbia, Houston, Los Angeles County, New York City, and Philadelphia.

The authors appreciate the work and dedication of all staff members who work on HIV program data quality assurance and improvement.

The findings and conclusions in this article are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.

None of the authors have any conflicts or possible conflicts of interest, and no funding was given for the work of the authors.

References

1. The White House Office of National AIDS Policy. National HIV/AIDS Strategy for the United States. http://www.whitehouse.gov/sites/default/files/uploads/NHAS.pdf. Published July 2010. Accessed July 2, 2014.
2. Centers for Disease Control and Prevention. High-Impact HIV Prevention: CDC’s Approach to Reducing HIV Infections in the United States. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention; 2011. http://www.cdc.gov/hiv/pdf/policies_NHPC_Booklet.pdf. Accessed July 2, 2014.
3. Centers for Disease Control and Prevention. Vital Signs: HIV prevention through care and treatment—United States. MMWR Morb Mortal Wkly Rep. 2011;60:1618–1623.
4. Gardner EM, McLees MP, Steiner JF, del Rio C, Burman WJ. The spectrum of engagement in HIV care and its relevance to test-and-treat strategies for prevention of HIV infection. Clin Infect Dis. 2011;52:793–800. doi:10.1093/cid/ciq243.
5. Centers for Disease Control and Prevention. Results of the expanded HIV testing initiative—25 jurisdictions, United States, 2007–2010. MMWR Morb Mortal Wkly Rep. 2011;60:805–810.
6. Centers for Disease Control and Prevention. HIV Testing at CDC-Funded Sites, United States, Puerto Rico, and the US Virgin Islands, 2010. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention; 2012. http://www.cdc.gov/hiv/resources/reports/pdf/PEB_2010_HIV_Testing_Report.pdf. Accessed July 2, 2014.
7. USAID. Performance Monitoring & Evaluation TIPS: Data Quality Standards. No. 12. Washington, DC: United States Agency for International Development; 2009. http://www.seachangecop.org/sites/default/files/documents/2009%20USAID%20-%20TIPS%2012_Data%20Quality%20Standards.pdf. Accessed July 7, 2014.
8. Dilley JA, Bekemeier B, Harris JR. Quality improvement interventions in public health systems: a systematic review. Am J Prev Med. 2012;42(5S1):S58–S71. doi:10.1016/j.amepre.2012.01.022.
9. Riley WJ, Beitsch LM, Parsons HM, Moran JW. Quality improvement in public health: where are we now? J Public Health Manag Pract. 2010;16(1):1–2. doi:10.1097/PHH.0b013e3181c2c7cc.
10. Mphatswe W, Mate KS, Bennett B, et al. Improving public health information: a data quality intervention in KwaZulu-Natal, South Africa. Bull World Health Organ. 2012;90:176–182. doi:10.2471/BLT.11.092759.
11. Aldridge ML, Kramer KD, Aldridge A, Goldstein AO. Improving data quality in large-scale, performance-based program evaluations. Am J Eval. 2009;30:426–436.
12. Otwombe KN, Wanyungu J, Nduku K, Taegtmeyer M. Improving national data collection systems from voluntary counselling and testing centres in Kenya. Bull World Health Organ. 2007;85:315–318. doi:10.2471/BLT.06.033712.
13. Brouwer HJ, Bindels PJE, Van Weert HC. Data quality improvement in general practice. Fam Pract. 2006;23:529–536. doi:10.1093/fampra/cml040.
14. Das M. Beyond measuring what matters to managing what matters: improving public health quality and accountability in the U.S. HIV epidemic response. Public Health Rep. 2013;128:360–363. doi:10.1177/003335491312800505.
15. Frieden TR. Six components necessary for effective public health program implementation. Am J Public Health. 2014;104:17–22. doi:10.2105/AJPH.2013.301608.
16. Hall HI, Mokotoff ED. Setting standards and an evaluation framework for human immunodeficiency virus/acquired immunodeficiency syndrome surveillance. J Public Health Manag Pract. 2007;13:519–523. doi:10.1097/01.PHH.0000285206.79082.cd.
17. Hanley JA, Negassa A, Edwardes MD, Forrester JE. Statistical analysis of correlated data using generalized estimating equations: an orientation. Am J Epidemiol. 2003;157:364–375. doi:10.1093/aje/kwf215.
18. Centers for Disease Control and Prevention. HIV Testing at CDC-Funded Sites, United States, Puerto Rico, and the US Virgin Islands, 2008–2009. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention; 2011. http://www.cdc.gov/hiv/pdf/testing_cdc_sites_2008-2009.pdf. Accessed July 2, 2014.
19. Centers for Disease Control and Prevention. HIV Testing at CDC-Funded Sites, United States, Puerto Rico, and the US Virgin Islands, 2011. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention; 2013. http://www.cdc.gov/hiv/pdf/HIV_testing_report_2011_12.13.13_Version3.pdf. Accessed July 7, 2014.
20. Craw JA, Gardner LI, Marks G, et al. Brief strengths-based case management promotes entry into HIV medical care. J Acquir Immune Defic Syndr. 2008;47:597–606. doi:10.1097/QAI.0b013e3181684c51.
21. Marks G, Gardner LI, Craw J, Crepaz N. Entry and retention in medical care among HIV-diagnosed persons: a meta-analysis. AIDS. 2010;24:2665–2678. doi:10.1097/QAD.0b013e32833f4b1b.
22. Centers for Disease Control and Prevention. Monitoring selected national HIV prevention and care objectives by using HIV surveillance data—United States and 6 U.S. dependent areas—2010. HIV Surveill Suppl Rep. 2013;18(2 pt B). http://www.cdc.gov/hiv/pdf/statistics_2010_HIV_Surveillance_Report_vol_18_no_2.pdf. Accessed July 7, 2014.
23. Gray KM, Cohen SM, Hu X, Li J, Mermin J, Hall HI. Jurisdiction level differences in HIV diagnosis, retention in care, and viral suppression in the United States. J Acquir Immune Defic Syndr. 2014;65:129–132. doi:10.1097/QAI.0000000000000028.
24. Centers for Disease Control and Prevention. CDC-Funded HIV Testing: United States, Puerto Rico, and the US Virgin Islands, 2013. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention; 2015. http://www.cdc.gov/hiv/pdf/library/reports/cdc-hiv-CDCFunded_HIV_Testing_UnitedStates_Puerto_Rico_USVI_2013.pdf. Accessed September 17, 2015.
25. Valdiserri RO, Forsyth AD, Yakovchenko V, Koh HK. Measuring what matters: development of standard HIV core indicators across the U.S. Department of Health and Human Services. Public Health Rep. 2013;128:354–359. doi:10.1177/003335491312800504.
26. The White House Office of National AIDS Policy. National HIV/AIDS Strategy: improving outcomes, accelerating progress along the HIV care continuum. http://www.whitehouse.gov/sites/default/files/onap_nhas_improving_outcomes_dec_2013.pdf. Published December 2013. Accessed July 7, 2014.
27. The White House Office of National AIDS Policy. National HIV/AIDS Strategy for the United States: updated to 2020. https://www.whitehouse.gov/sites/default/files/docs/national_hiv_aids_strategy_update_2020.pdf. Published 2015. Accessed August 28, 2015.
