Abstract
Objective
To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys.
Data Sources
All available validation studies.
Study Design
Compare results from existing research to understand variation in reporting across surveys.
Data Collection Methods
Synthesize all available studies validating survey reports of Medicaid coverage.
Principal Findings
Across all surveys, reporting of some type of insurance coverage is more accurate than reporting of Medicaid specifically. As a result, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as particularly inaccurate.
Conclusions
Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting.
Keywords: Medicaid undercount, validation study, survey and administrative data, uninsurance, health insurance coverage
Monitoring the success of the Patient Protection and Affordable Care Act (PPACA) requires valid estimates of health insurance coverage. As stated by Czajka and Lewis more than a decade ago: “Until we can make progress in separating the measurement error from the reality of uninsurance, our policy solutions will continue to be inefficient, and our ability to measure our successes will continue to be limited” (Czajka and Lewis 1999). In particular, the persistent presence of the Medicaid undercount—that is, that survey-based estimates of Medicaid enrollment are considerably below readily available counts from administrative data—calls into question our ability to accurately measure coverage and track reform efforts.
Surveys are the only source of estimates of the number or proportion of people who have various forms of health insurance (public and private) or who lack insurance altogether. To the extent that Medicaid enrollees are counted as uninsured, estimates of uninsurance will be biased upward; to the extent that Medicaid enrollees are counted as having other coverage, estimates of other insurance coverage will be biased, but uninsurance estimates will be unaffected. Misreporting of Medicaid coverage therefore has implications for the accuracy of estimates of other types of health insurance and of uninsurance.
This article summarizes what is known about the accuracy of reports of Medicaid enrollment, describing the implications of misreporting for estimates of different types of insurance and of people lacking insurance altogether. We have three major findings. First, measurement error in the most commonly used data source—the Current Population Survey—is large, apparently due to its full-year reference period. Second, for other surveys with a point-in-time reference period, although measurement error remains a concern, confidence in estimates of uninsurance can be reasonably high because survey overstatements of uninsurance are modest. Third, confidence in estimates of the type of coverage should be lower because people known to have public coverage are more likely to have their type of coverage misclassified than to be reported as having no coverage.
Methods
We summarize the results from existing research documenting the accuracy of respondent reports of comprehensive public insurance coverage (e.g., Medicaid, SCHIP). Specifically, we draw on recent working papers and peer-reviewed publications identified through an electronic search of the literature (key words: Medicaid undercount). We exclude studies of partial or limited-benefit coverage (e.g., emergency medical assistance, family planning) and studies that do not use comparable survey measures of health insurance coverage (e.g., surveys that do not allow respondents to report multiple types of coverage are excluded).
Extant studies represent two validation designs referred to as experimental and matching.
The experimental studies follow a three-step process: (1) use administrative data to generate a random sample of public insurance enrollees; (2) survey that sample to learn how they characterize their coverage; and (3) compare each respondent's coverage reports to his or her known status in the administrative data. This design can identify false negatives (i.e., those who have public insurance but do not report it), but not false positives (i.e., those who do not have public insurance but report that they do). These studies may suffer from composition bias because they include only known enrollees who are surveyed (i.e., those who are located and consent to be interviewed). All of the experimental studies were conducted in conjunction with statewide general population surveys.
The matching studies proceed the other way, also in a three-step process: (1) begin with an existing survey; (2) search for the surveyed individuals in corresponding administrative data; and (3) compare the administrative data status to the reports of coverage in the survey data in the same time period. Matching studies can identify both false negatives and false positives. False negatives are important as these people would be incorrectly classified as uninsured in surveys (upward bias). Such false negatives are likely to be at least partially offset by false positives, people lacking insurance coverage who are reported as having insurance (downward bias). These studies, conducted using federal data sources, potentially suffer from matching problems; that is, some survey or administrative records lack the identifying information needed to find a probable match in the other data source.
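To make these two error rates concrete, the following minimal sketch computes them from a person-level linked file. The column names are hypothetical stand-ins; this is an illustration of the logic just described, not code from any of the studies reviewed here.

```python
# A minimal sketch: given one row per linked person with boolean indicators of
# administrative Medicaid enrollment and survey-reported Medicaid, compute the
# false-negative rate (enrollees not reporting Medicaid) and the false-positive
# rate (non-enrollees reporting Medicaid). Column names are hypothetical.
import pandas as pd

def misreporting_rates(linked: pd.DataFrame) -> dict[str, float]:
    enrolled = linked["admin_medicaid"]   # enrolled per the administrative data
    reported = linked["survey_medicaid"]  # Medicaid reported in the survey
    false_negative = (enrolled & ~reported).sum() / enrolled.sum()
    false_positive = (~enrolled & reported).sum() / (~enrolled).sum()
    return {"false_negative_rate": false_negative,
            "false_positive_rate": false_positive}
```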
The experimental studies summarized here (Blumberg and Cynamon 1999; Goidel et al. 2007; Call et al. 2008a, b; Davern et al. 2008; Eberly, Pohl, and Davis 2009) are published in refereed journals. Some of the matching studies are published in refereed journals (Davern et al. 2009a, b; Klerman et al. 2009), and some are reported in working papers (Klerman et al. 2005; State Health Access Data Assistance Center et al. 2008, 2009, 2010). All the studies are based on samples of the civilian noninstitutionalized population. An overview of the data and methods for the experimental and federal matching studies is provided below.
Experimental Studies
Experimental studies typically draw a sample of known public program enrollees from administrative records who are then administered the same instrument at the same time as the corresponding statewide general population survey. Given the lapse in time between when the sample is drawn and when the survey is administered, public program enrollment at the time of the survey is rechecked in the administrative records. This methodology allows researchers to observe error in survey reports of public insurance coverage among an enrolled population.
Most of the experimental surveys were conducted between 2003 and 2005. The population of interest varies: one study of children in Minnesota; two studies of adults (California and Minnesota); one of children and nonelderly adults (Florida); and three studies of program enrollees of all ages (Louisiana, Maryland, and Pennsylvania). Experimental study sample sizes vary (ranging from 1,087 in Florida to 4,314 for adults in Minnesota), as do response rates (29.8 percent in Florida to 61.5 percent for adults in Minnesota).
Federal Matching Studies
Matching studies link survey data to corresponding administrative data. The linking process begins with states' submissions of monthly Medicaid enrollment data to CMS. These data are cleaned and compiled into the Medicaid Statistical Information System (MSIS), which is in turn compiled to present an annual picture of enrollment (the Medicaid Analytic eXtract, or MAX, data). Only some files have been linked as part of the SNACC project.1 Here, we report results from matched data for the 2001 Current Population Survey Annual Social and Economic Supplement (CPS), the 2001 National Health Interview Survey (NHIS), and the 2003 Medical Expenditure Panel Survey—Household Component (MEPS-HC, shortened here to MEPS).
For several reasons, this data linkage is not straightforward. First, the universes may differ. The institutional population and the homeless appear in many administrative data sources but not in most survey sampling frames. Sample misalignment occurs if the survey is fielded after the reference period (e.g., the CPS interviews in February through April about the previous calendar year), and it can also be induced by births, deaths, entrances into and exits from the military and institutions, and migration into and out of the United States.
Second, one needs to define insurance carefully. Some states have large contraception-only Medicaid programs, which should not be viewed as, and are probably not reported in surveys as, health insurance coverage. These records should not be included in the count of people with Medicaid. Failure to delete such records from administrative counts will yield spuriously high counts of the insured and spuriously high false-negative rates (i.e., of those who have Medicaid but do not report it).
Third, linking is not possible unless both the survey record and the administrative record for the same person have linking identifiers (Social Security Numbers [SSN]). Many survey records lack identifiers: 26.0, 47.7, and 39.1 percent of the CPS, NHIS, and MEPS records, respectively (State Health Access Data Assistance Center et al. 2008, 2009, 2010). Those without identifiers and those who did not give permission to link (by far the larger share) were treated as though the identifiers were missing conditionally at random. The cases with identifiers were reweighted based on observable characteristics (age, poverty, health insurance status, imputation status for health insurance) to align with control totals for the full file that included the records with missing identifiers (for details see State Health Access Data Assistance Center et al. 2008).2
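The reweighting step can be illustrated with a small sketch. It is not the SNACC production code, and the column names (has_identifier, weight) and cell variables are hypothetical: within cells defined by the observable characteristics, the weights of linkable records are inflated so that they reproduce the full-file totals.

```python
# A minimal sketch of cell-based reweighting to control totals, assuming
# identifiers are missing (conditionally) at random within cells.
import pandas as pd

def reweight_linked_records(full: pd.DataFrame, cell_vars: list[str]) -> pd.DataFrame:
    """Return the linkable subset with weights inflated to full-file cell totals."""
    cell_totals = full.groupby(cell_vars)["weight"].sum()       # control totals, all records
    linked = full[full["has_identifier"]].copy()                 # records usable for linking
    linked_totals = linked.groupby(cell_vars)["weight"].sum()    # totals among linkable records
    factors = (cell_totals / linked_totals).rename("factor")     # cell-level inflation factors
    linked = linked.join(factors, on=cell_vars)
    linked["weight_adj"] = linked["weight"] * linked["factor"]
    return linked

# Example call with the kinds of cells described in the text
# (age group, poverty, reported coverage, imputation flag), all hypothetical names:
# reweighted = reweight_linked_records(cps, ["age_grp", "poverty", "coverage", "imputed"])
```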
The MSIS and MAX data also have missing linking identifiers. Missing data rates are lower, but far from zero: 10.0, 10.9, and 13.1 percent, respectively, of the MSIS/MAX records are unavailable for linking to potential matches in the validated CPS, NHIS, and MEPS records (State Health Access Data Assistance Center et al. 2008, 2009, 2010). As a result, survey respondents who report Medicaid but no other form of coverage would be considered uninsured if their matching record in MSIS or MAX is missing its linking identifier. Under an assumption that the SSNs are missing (conditionally) at random, it is possible to compute the fraction of cases not linked because of missing MSIS identifiers. That information can then be used to adjust the CPS counts for this source of error (Klerman et al. 2009). In what follows, we use this methodology.
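Illustratively, under the missing-at-random assumption, a matched enrollment count can be corrected for administrative records that could not be linked by inflating it by the inverse of the identifier-availability rate. The figures below are hypothetical, and the published adjustment (Klerman et al. 2009) is more involved than this back-of-the-envelope version.

```python
# Back-of-the-envelope adjustment for matches lost because an MSIS/MAX record
# lacked a linkable identifier, assuming identifiers are missing (conditionally)
# at random. All figures are hypothetical.
msis_missing_id_rate = 0.10   # e.g., share of MSIS/MAX records unavailable for linking
matched_enrollees = 1_000     # survey respondents found in the linked MSIS/MAX file

# Matched cases represent only (1 - missing rate) of the enrollees who could
# have been linked, so the count is inflated accordingly.
adjusted_enrollees = matched_enrollees / (1 - msis_missing_id_rate)
print(round(adjusted_enrollees))  # ~1,111
```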
Health Insurance Coverage Questions
Respondent accuracy in reporting coverage may be tied to the way the health insurance questions are asked, data processing, and reference periods, all of which vary across surveys (Call, Davern, and Blewett 2007; Klerman et al. 2009; Pascale, Roemer, and Resnick 2009). The CPS, NHIS, MEPS, and state-specific surveys are similar: each includes a series of questions asking whether household members are covered by various sources of private and public health insurance. Respondents are allowed to say “yes” to multiple sources of insurance, and a verification question confirms lack of coverage among those saying “no” to all insurance sources.
The coverage questions diverge along other dimensions. For example, the primary purpose of the CPS is to provide information about labor force participation and earnings in the previous calendar year. A complete calendar year is a natural reference period for labor force participation and earnings (e.g., the respondent can consult tax records). This reference period is less natural for health insurance coverage. Furthermore, the CPS questions ask about a variety of coverage sources in the previous calendar year using a household loop method (e.g., does anyone in the household have Medicaid), which is associated with measurement error (Hess et al. 2001). The MEPS is a panel study that interviews households five times over a 30-month period. The first round asks about coverage; subsequent rounds ask whether and when this coverage changed during the intervening months. These survey responses can be aggregated across periods to compute reporting of coverage “ever in the year.” For this analysis, a MEPS respondent must report at least 1 month of Medicaid coverage to be counted as covered. By contrast, the NHIS and state-specific surveys inquire about coverage at the time of the survey.
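As a toy illustration of the aggregation to an “ever in the year” measure (hypothetical data, not the MEPS files themselves), a person counts as covered if Medicaid is reported in at least 1 month:

```python
import pandas as pd

# Hypothetical monthly reports for two people; 1 = Medicaid reported that month.
monthly = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 2],
    "month":     [1, 2, 3, 1, 2, 3],
    "medicaid":  [0, 1, 0, 0, 0, 0],
})

# At least 1 month of reported Medicaid -> counted as covered "ever in the year."
ever_in_year = monthly.groupby("person_id")["medicaid"].max().astype(bool)
print(ever_in_year)  # person 1: True, person 2: False
```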
Analysis
Table 1 summarizes studies using experimental and matching methods. Specifically, we present the rate at which Medicaid enrollees were correctly reported as having Medicaid (Column 1); the percentage of Medicaid recipients for whom Medicaid was not reported and who instead have reports of some other type of public (Column 2) or private (Column 3) coverage; a combined misreporting rate (Column 4); and the percentage of the Medicaid population incorrectly estimated as uninsured on the basis of survey misreports (Column 5). As the missing entries suggest, not all of these quantities can be computed for each study.
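For concreteness, the five columns just described could be tabulated from an enrollee-level file along the following lines. This is a sketch with hypothetical variable names, not any study's actual code; the precedence given to other public over private coverage when both are reported is an illustrative convention.

```python
# Sketch of tabulating the Table 1 columns for known Medicaid enrollees.
import pandas as pd

def table1_columns(enrollees: pd.DataFrame) -> pd.Series:
    """enrollees: one row per known Medicaid enrollee, with boolean columns
    'rep_medicaid', 'rep_other_public', and 'rep_private' from the survey."""
    medicaid = enrollees["rep_medicaid"]
    # Among enrollees not reporting Medicaid, classify the reported coverage;
    # the public-before-private ordering is an illustrative assumption.
    other_public = ~medicaid & enrollees["rep_other_public"]
    private = ~medicaid & ~enrollees["rep_other_public"] & enrollees["rep_private"]
    uninsured = ~(medicaid | enrollees["rep_other_public"] | enrollees["rep_private"])
    return pd.Series({
        "Any Medicaid (%)": 100 * medicaid.mean(),                               # Column 1
        "Otherwise Public (%)": 100 * other_public.mean(),                       # Column 2
        "Otherwise Private (%)": 100 * private.mean(),                           # Column 3
        "Otherwise Public or Private (%)": 100 * (other_public | private).mean(),# Column 4
        "Uninsured (%)": 100 * uninsured.mean(),                                 # Column 5
    })
```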
Table 1.
| Studies and Target Population | Any Medicaid (%) | Otherwise Public (%) | Otherwise Private (%) | Otherwise Public or Private (%) | Uninsured (%) |
| --- | --- | --- | --- | --- | --- |
| Experimental studies | | | | | |
| Children on Medicaid in MN 1999* | 79.5 | – | – | 16.0 | 4.5 |
| Adults on Medicaid in BCBS in MN 2003† | 84.3 | – | – | 15.1 | 0.6 |
| Adults on Medicaid (full benefit) in CA 2004‡ | 88.7 | 1.7 | 4.6 | 6.3 | 5.0 |
| Nonelderly (< 65) persons on Medicaid in FL 2004‡ | 87.0 | 2.7 | 5.4 | 8.1 | 4.9 |
| Persons on Medicaid in PA 2004‡ | 79.9 | 9.2 | 7.5 | 16.7 | 3.4 |
| Persons on Medicaid in MD 2004§ | 87.5 | – | – | 8.1 | 4.4 |
| Persons on Medicaid or LaCHIP in LA 2005¶ | 74.3 | 3.4 | 11.3 | 14.7 | 11.6 |
| Matching studies | | | | | |
| Adults (15–64) on Medicaid in CA (pooled 1990–2000 CPS data)‖ | 72.3 | – | – | 6.0 | 21.7 |
| MSIS CPS match (CY2001)** | 57.1 | 8.4 | 17.2 | 25.6 | 17.4 |
| MSIS NHIS match (CY2001)†† | 65.4 | 16.2 | 8.7 | 24.9 | 9.8 |
| MSIS MEPS match (CY2003)‡‡ | 82.5 | – | – | 9.2 | 8.3 |
Note. All experimental studies compared “point-in-time” uninsurance self-reports with “point-in-time” Medicaid enrollment, with the exception of MD, which compared “uninsured all year” self-reports with Medicaid enrollment “at some point during the year.”
*Blumberg and Cynamon (1999). Results from Study 1 only (MN) are included here.
†Davern et al. (2008).
‡Call et al. (2008a).
§Eberly, Pohl, and Davis (2009).
¶Goidel et al. (2007). Results from exact match group only are included here.
‖Klerman et al. (2005).
**SNACC Phase III report (2008).
††SNACC Phase IV report (2009).
‡‡SNACC Phase VI report (2010). Reporting on “ever enrolled” versus “round 1.”
Findings
There are four key findings. First, the experimental studies show higher levels of accurate reporting for persons known to have Medicaid coverage than do the matching studies. In the experimental studies, upwards of 74 percent correctly report Medicaid enrollment (Table 1).
Second, generally speaking, Medicaid recipients for whom Medicaid is not reported are more likely to have some form of coverage reported than a lack of insurance altogether. The only exception is in the matching study that pools 10 years of California CPS data, and that study did not exclude the large number of contraception-only Medicaid cases. Among those studies that show the nature of coverage misattributions (public vs. private coverage), the pattern of results is inconsistent. Three of the four experimental studies indicate that Medicaid enrollees are more likely to be characterized as having private than public coverage, whereas the matching studies are split in this regard.
Third, reporting of coverage is more accurate in some surveys than others. The CPS stands out as being particularly inaccurate, apparently due to its long recall period (up to 16 months in the past). The CPS has a larger number of cases with insurance coverage reported as uninsured than any of the point-in-time surveys. Analysis of the linked data confined to the subsample of CPS respondents enrolled in Medicaid both in the prior calendar year and at the time of the survey shows more accurate Medicaid reporting, as these respondents do not have to recall retrospective enrollment to correctly report Medicaid (Lynch 2008). However, Klerman et al. (2009) show that treating the CPS as a point-in-time survey is incorrect: it only resembles one in the aggregate because of significant reporting errors.
Fourth, once the CPS studies are excluded, the results from experimental and matching studies are much more similar. The remaining differences are likely due to some form of selection bias in both types of studies. For example, in the matching studies, it could be that the cases not matched are different from those that are matched, thereby biasing the results. In the experimental studies, cases with good contact information who in turn respond to the survey,3 or who are enrolled long enough to be included in the analysis, may be different from those who are not, leading to bias.
Discussion and Conclusions
The accuracy of survey reports of Medicaid enrollment affects how these data should be used in health policy evaluations. Reporting accuracy is between 74 and 89 percent in the experimental studies and between 57 percent (CPS) and 82 percent (MEPS) in the matching studies. This is considerably less accurate than reports of private insurance, for which matching (Hill 2007; Kreider and Hill 2009) and experimental studies (Davern et al. 2008) find between 95 and 99 percent accuracy.
Across all the studies, reporting of at least some type of insurance coverage (i.e., getting the simple distinction between being insured and uninsured correct) is better than reporting of Medicaid specifically. In the experimental studies, between 88 and 99 percent report some form of coverage, compared with approximately 90 percent in the matching studies (setting the CPS studies aside). This suggests that using survey data to disentangle Medicaid from other public program enrollees (e.g., SCHIP) is not recommended, as there is considerable confusion about the specific program in which a person is enrolled (Klerman et al. 2010 provide direct evidence for this conjecture). Consistent with Cantor et al. (2007), some Medicaid enrollees also have their coverage mistakenly reported as private; this is especially problematic in the CPS.
Consistent with Davidson (2005), the CPS and NHIS matching studies indicate potential downward bias in estimates of uninsurance due to people who report only public coverage but do not have a corresponding record in the MSIS files; this represents a potential overcount of Medicaid coverage (State Health Access Data Assistance Center et al. 2009). Likewise, Kreider and Hill (2009) found evidence of “overreporting” for private coverage in the linked MEPS-HC/IC data. Accounting for the offsetting influences of all forms of measurement bias is important when creating adjustments of coverage estimates.
These results are subject to several important qualifiers. First, the CPS and NHIS matches ignore SCHIP enrollment (due to inconsistent MSIS data reporting across states). Yet in states with Medicaid expansions and/or where SCHIP and Medicaid share the same name, the state program name filled into the survey question applies to both programs. As such, a person with SCHIP who says “yes” to the Medicaid question would be correct in some sense but is counted as an error in these analyses because MSIS indicates SCHIP enrollment rather than Medicaid (Klerman et al. 2011 make a partial correction for this problem). Any confusion between Medicaid and SCHIP is counted as correct in the MEPS study, which asks about SCHIP and Medicaid in a single question.
Second, MEPS redesigned the Medicaid questions in 2004 to improve accuracy, and NHIS made adjustments in 2005 that improved Medicaid reporting (National Center for Health Statistics 2004). Our results using the 2003 MEPS and 2001 NHIS likely overstate the level of misreporting found in more recent administrations of the surveys.
Third, the analysis of the CPS, NHIS, and MEPS data in the matching studies was confined to explicit answers to the survey from respondents. All three surveys perform edits, using data from other survey questions, to account for reporting error when creating the final coverage estimates (National Center for Health Statistics 2003). These edits have the effect of increasing the proportion of enrollees with Medicaid, thereby reducing the measurement error associated with underreported Medicaid in all three national surveys.
This research indicates that estimates of uninsurance may be only modestly biased, but that estimates of the Medicaid enrolled population have substantially more bias (leading to the Medicaid undercount). This is good news because estimates of uninsurance and the eligible uninsured are arguably the most important indicators of reform success, and these are not impacted much by measurement error in surveys that use point-in-time measurement. Knowing the strengths and weaknesses of the various sources of survey estimates, why they differ, and how they can be improved is important so that they can be appropriately used in health policy research.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: This study was made possible by grant no. 052084 from the Robert Wood Johnson Foundation to the State Health Access Data Assistance Center (SHADAC; Michael Davern, PI) and with additional support supplied by the Office of the Assistant Secretary for Planning and Evaluation (ASPE), the Centers for Medicare and Medicaid Services (CMS), the National Center for Health Statistics (NCHS), and the U.S. Census Bureau. This paper has undergone a limited review by all the participating organizations in accordance with existing agreements among these organizations. The views expressed are those of the authors and do not represent official positions of ASPE, NCHS, CMS, the U.S. Census Bureau, NORC, Abt Associates, Urban Institute, or SHADAC.
Disclosures: None.
Disclaimers: None.
Notes
The acronym SNACC represents the collaboration between the following entities: State Health Access Data Assistance Center (SHADAC), the National Center for Health Statistics (NCHS), the Agency for Healthcare Research and Quality (AHRQ), the U.S. Department of Health and Human Services Assistant Secretary for Planning and Evaluation (ASPE), the Centers for Medicare and Medicaid Services (CMS), and the U.S. Census Bureau.
The problem of missing survey identifiers appears to be improving. Prior to the 2006 CPS (covering the 2005 calendar year), the CPS interview asked the respondent for the SSN of every household member or for permission to look up the SSN, and the rate of SSN provision was dropping rapidly. Beginning with the 2006 interview, the CPS stopped asking for the SSN. Instead, Census simply looks up the SSN unless the respondent opts out of data linkage. The result was a sharp increase in the number of records with SSNs from about 80 percent to about 95 percent.
Supporting Information
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.
References
- Blumberg SJ, Cynamon ML. “Misreporting Medicaid Enrollment: Results of Three Studies Linking Telephone Surveys to State Administrative Records”. Hyattsville, MD: Department of Health and Human Services, Centers for Disease Control and Prevention; 1999. [accessed on June 22, 2012]. Seventh Conference on Health Survey Research Methods. DHHS Publication No. (PHS) 01-1013. Available at http://www.cdc.gov/nchs/data/misc/conf07.pdf.
- Call KT, Davern ME, Blewett LA. “Estimates of Health Insurance Coverage: Comparing State Surveys with the Current Population Survey”. Health Affairs. 2007;26(1):269–78. doi: 10.1377/hlthaff.26.1.269.
- Call KT, Davidson G, Davern ME, Brown ER, Kincheloe J, Nelson JG. “Accuracy of Self-Reported Health Insurance Coverage among Medicaid Enrollees”. Inquiry. 2008a;45:438–56. doi: 10.5034/inquiryjrnl_45.04.438.
- Call KT, Davidson G, Davern ME, Nyman R. “Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence”. Health Services Research. 2008b;43:901–14. doi: 10.1111/j.1475-6773.2007.00808.x.
- Cantor JC, Monheit AC, Brownlee S, Schneider C. “The Adequacy of Household Survey Data for Evaluating the Nongroup Health Insurance Market”. Health Services Research. 2007;42(4):1739–57. doi: 10.1111/j.1475-6773.2006.00662.x.
- Czajka JL, Lewis K. “Using National Survey Data to Analyze Children's Health Insurance Coverage: An Assessment of Issues”. 1999. [accessed on August 3, 2011]. Report submitted by Mathematica Policy Research to the U.S. Department of Health and Human Services, May, 1999. Available at http://aspe.hhs.gov/health/reports/Survey%20data.htm.
- Davern ME, Call KT, Ziegenfuss J, Davidson G, Beebe T, Blewett LA. “Validating Health Insurance Coverage Survey Estimates: A Comparison between Self-Reported Coverage and Administrative Data Records”. Public Opinion Quarterly. 2008;72(2):241–59.
- Davern ME, Klerman JA, Baugh D, Call KT, Greenberg G. “An Examination of the Medicaid Undercount in the Current Population Survey (CPS): Preliminary Results from Record Linking”. Health Services Research. 2009a;44:965–87. doi: 10.1111/j.1475-6773.2008.00941.x.
- Davern ME, Klerman J, Ziegenfuss J, Lynch V, Greenberg G. “A Partially Corrected Estimate of Medicaid Enrollment and Uninsurance: Results from an Imputational Model Developed off Linked Survey and Administrative Data”. Journal of Economic and Social Measurement. 2009b;34(4):219–40.
- Davidson G. “Early Results from the Pennsylvania Medicaid Undercount Experiment”. 2005. Presentation at SHADAC's Meeting, Survey and Administrative Data Sources of the Medicaid Undercount, Washington, DC, May 5, 2005.
- Eberly T, Pohl M, Davis S. “Undercounting Medicaid Enrollment in Maryland: Testing the Accuracy of the Current Population Survey”. Population Research and Policy Review. 2009;28(2):221–36.
- Goidel RK, Procopio S, Schwalm D, Terrell D. “Implications of the Medicaid Undercount in a High-Penetration Medicaid State”. Health Services Research. 2007;42(6):2424–41. doi: 10.1111/j.1475-6773.2007.00794.x.
- Hess J, Moore J, Pascale J, Rothges J, Keeley C. “The Effects of Person-Level versus Household-Level Questionnaire Design on Survey Estimates and Data Quality”. Public Opinion Quarterly. 2001;65(4):574–84.
- Hill SC. “The Accuracy of Reported Insurance Status in the MEPS”. Inquiry. 2007;44(Winter 2007/2008):443–68. doi: 10.5034/inquiryjrnl_44.4.443.
- Klerman JA, Ringel JS, Roth B. “Under-Reporting of Medicaid and Welfare in the Current Population Survey”. 2005. [accessed on August 3, 2011]. RAND Labor and Population. WR-169-3. Available at http://www.rand.org/pubs/working_papers/2005/RAND_WR169-3sum.pdf.
- Klerman JA, Davern M, Lynch V, Ringel J. “Understanding the Current Population Survey's Insurance Estimates and the Medicaid Undercount”. Health Affairs. 2009;28(6):w991–1001. doi: 10.1377/hlthaff.28.6.w991 (web posting September 10, 2009). Available at http://content.healthaffairs.org/cgi/content/abstract/hlthaff.28.6.w991.
- Klerman JA, Davern M, Plotzke M. “The CPS Medicaid Undercount and the Count of the Uninsured”. 2011. Presented at APPAM, Washington, DC, November 2009.
- Klerman JA, Plotzke M, Davern M, Call KT. “CHIP Reporting in the CPS-ASEC”. 2010. Presented at APPAM, Washington, DC, November 2009.
- Kreider B, Hill SC. “Partially Identifying Treatment Effects with an Application to Covering the Uninsured”. Journal of Human Resources. 2009;44(2):409–49.
- Lynch V. “Medicaid Enrollment: The Relationships between Survey Design, Enrollee Characteristics, and False-Negative Reporting”. 2008. [accessed on August 3, 2011]. American Statistical Association 2008 Proceedings of the Section on Survey Research Methods. Available at http://www.census.gov/did/www/snacc/publications/papers.html.
- National Center for Health Statistics. “2001 National Health Interview Survey (NHIS) Public Use Data Release Survey Description”. 2003. [accessed on August 3, 2011]. Division of Health Interview Statistics, National Center for Health Statistics, Hyattsville, MD, January, 2003. Available at ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Dataset_Documentation/NHIS/2001/srvydesc.pdf.
- National Center for Health Statistics. “2004 National Health Interview Survey (NHIS) Public Use Data Release Survey Description”. 2004. [accessed on August 3, 2011]. Division of Health Interview Statistics, National Center for Health Statistics, Hyattsville, MD, July, 2005. Available at http://www.cdc.gov/nchs/data/nhis/srvydesc.pdf.
- Pascale J, Roemer MI, Resnick DM. “Medicaid Underreporting in the CPS: Results from a Record Check Study”. Public Opinion Quarterly. 2009;73(3):497–520.
- Plotzke M, Klerman JA, Davern M. “How Similar Are Different Sources of CHIP Enrollment Data?”. Journal of Economic and Social Measurement. 2011;36(3):213–25.
- State Health Access Data Assistance Center, Centers for Medicare and Medicaid Services, Department of Health and Human Services Assistant Secretary for Planning and Evaluation, National Center for Health Statistics, and U.S. Census Bureau. “Phase II Research Results: Examining Discrepancies between the National Medicaid Statistical Information System (MSIS) and the Current Population Survey (CPS) Annual Social and Economic Supplement (ASEC)”. 2008. [accessed on May 2, 2012]. Report of the Research Project to Understand the Medicaid Undercount. Washington, DC: U.S. Census Bureau. Available at http://www.census.gov/did/www/snacc/docs/SNACC_II_Full_Report.pdf.
- State Health Access Data Assistance Center, Centers for Medicare and Medicaid Services, Department of Health and Human Services Assistant Secretary for Planning and Evaluation, National Center for Health Statistics, and U.S. Census Bureau. “Phase IV Research Results: Estimating the Medicaid Undercount in the National Health Interview Survey (NHIS) and Comparing False-Negative Medicaid Reporting in NHIS to the Current Population Survey (CPS)”. 2009. [accessed on May 2, 2012]. Report of the Research Project to Understand the Medicaid Undercount. Washington, DC: U.S. Census Bureau. Available at http://www.census.gov/did/www/snacc/docs/SNACC_IV_Full_Report.pdf.
- State Health Access Data Assistance Center, Centers for Medicare and Medicaid Services, Department of Health and Human Services Assistant Secretary for Planning and Evaluation, National Center for Health Statistics, and U.S. Census Bureau. “Phase VI Research Results: Estimating the Medicaid Undercount in the Medical Expenditure Panel Survey Household Component (MEPS-HC)”. 2010. [accessed on May 2, 2012]. Report of the Research Project to Understand the Medicaid Undercount. Washington, DC: U.S. Census Bureau. Available at http://www.census.gov/did/www/snacc/docs/SNACC_VI_Full_Report.pdf.