Health Services Research. 2008 Jun;43(3):901–914. doi: 10.1111/j.1475-6773.2007.00808.x

Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence

Kathleen Thiede Call, Gestur Davidson, Michael Davern, Rebecca Nyman
PMCID: PMC2442249  PMID: 18546545

Abstract

Objective

To examine whether known Medicaid enrollees misreport their health insurance coverage in surveys and the extent to which misreports of lack of coverage bias estimates of uninsurance.

Data Source

Primary survey data from the Medicaid Undercount Experiment.

Study Design

Analyze new data from surveys of Medicaid enrollees in California, Florida, and Pennsylvania and summarize existing research examining bias in coverage estimates due to misreports among Medicaid enrollees.

Data Collection Method

Subjects were randomly drawn from Medicaid administrative records and were surveyed by telephone.

Principal Findings and Conclusions

Cumulative evidence shows that a small percentage of Medicaid enrollees mistakenly report being uninsured, resulting in modest upward bias in estimates of uninsurance. A somewhat larger percentage of enrollees misreport having some other type of coverage rather than no coverage, biasing Medicaid enrollment estimates downward but not significantly biasing estimates of uninsurance upward. Implications for policy makers' confidence in survey estimates of coverage are discussed.

Keywords: Validation study, health insurance coverage, survey and administrative data, Medicaid undercount


There is consensus among researchers that population surveys of health insurance coverage undercount the number of individuals enrolled in Medicaid (Swartz and Purcell 1989; Holahan, Winterbottom, and Rajan 1995; Bennefield 1996; Dubay and Kenney 1996; Lewis, Elwood, and Czajka 1998; Blumberg and Cynamon 1999; Congressional Budget Office 2003). That is, estimates of the number of individuals with Medicaid coverage derived from survey data are consistently lower than the count of individuals enrolled in Medicaid obtained from administrative records. This is referred to as the “Medicaid undercount.”

The existence of a Medicaid undercount implies that Medicaid recipients do not report Medicaid coverage in surveys asking about health insurance coverage. Many assume that Medicaid enrollees either do not understand that they are enrolled, or they are embarrassed to report their enrollment (Klerman, Ringel, and Roth 2005). This misreport can take two forms: the person incorrectly reports having some other type of coverage, or the person misreports being uninsured. Adjustments made to correct for the discrepancy between survey and administrative counts of Medicaid participation seem to assume a bit more of the latter—that Medicaid enrollees fail to report any coverage (Callahan and Mays 2005; Urban Institute 2006).

Here, we directly test the assumption that Medicaid enrollees are inaccurate reporters of coverage and instead say they are uninsured in surveys. If Medicaid enrollees indicate that they lack any type of health insurance coverage, then estimates of uninsurance in surveys will be biased upward. If Medicaid enrollees indicate they have some type of coverage, then survey estimates of uninsurance are not biased from this form of measurement error. Understanding the magnitude and form of this measurement error is important. Surveys are the only source of information on those lacking coverage, providing the only means of assessing the extent to which programs are reaching their target populations, and survey estimates are widely used in health services research for policy development, evaluations, and simulations (Blewett et al. 2004).

In this paper, we examine new data concerning the extent to which Medicaid enrollees accurately or inaccurately report health insurance coverage. In addition, we summarize existing evidence from a variety of studies regarding the degree of misreporting. We then calculate the extent of upward bias introduced by Medicaid enrollees misreporting uninsurance in health insurance surveys. Finally, we discuss the implications of this evidence for confidence in the use of survey estimates of uninsurance for policy analysis.

DATA SOURCES AND METHODS

The Medicaid Undercount Experiment (MUE) conducted surveys of known Medicaid enrollees to observe the percent of Medicaid enrollees who incorrectly report their health insurance coverage. Surveys were undertaken in California (CA), Florida (FL), and Pennsylvania (PA). Survey instruments and administration (including the survey vendor and timing of implementation) closely replicated each state's general Random Digit Dial population surveys of health and health care coverage.

Study populations for the MUE were randomly drawn from state administrative records of noninstitutionalized Medicaid enrollees.[1] Table 1 describes each of the MUE samples and response rates. The CA MUE included only individuals 18 years and older, and all responses were self-reports. The FL and PA studies were household surveys that included children, with the most knowledgeable adult providing proxy reports for children and other household adults. Because FL's general population survey targeted nonelderly households, households containing only members over age 64 were excluded from both surveys. Therefore, respondents over age 64 were not representative of the elderly population generally, and their data were excluded from the analysis.

Table 1. Medicaid Undercount Experiment Sample Descriptions

State MUE | MUE Sample Frame* (Survey Administration Period) | Number of Completes (Response Rate†) | Sample Exclusions | Analytic Sample Size
California | Adults enrolled December 2003 (February–May 2004) | 1,423 (41.7%) | 8 in group settings; 24 missing coverage data; 9 missing key covariate data; 66 not enrolled at time of survey | 1,316
Florida | Adults and children enrolled August 2004 (September and November 2004) | 1,087 (29.8%) | 1 missing coverage data; 5 missing key covariate data; 60 not enrolled at time of survey; 81 nonrepresentative cases over the age of 65 | 940
Pennsylvania | Adults and children enrolled April 2004 (June–September 2004) | 1,540 (55.9%) | 43 missing coverage data; 1 missing key covariate data; 104 not enrolled at time of survey | 1,392

* Sample frame excludes those in institutional settings (e.g., nursing homes, group quarters).
† Reported response rates are based on an AAPOR RR4 calculation.
MUE, Medicaid Undercount Experiment.

The response rates vary from state to state and are perhaps lower than desired. However, recent studies indicate that the relationship between response rates and response bias is minimal in opinion and attitude polls generally (Keeter et al. 2000, 2006; Groves 2006), and in health surveys specifically (Triplett 2002; Blumberg et al. 2005; Davern et al. 2006b; Holle et al. 2006).

After the MUE surveys were completed, respondent identification information was matched against administrative data, and our analyses of insurance coverage were conducted only on those sampled Medicaid enrollees actually enrolled at the time of the survey (see exclusions in the fourth column of Table 1). Given our focus on self-reports of health insurance coverage, we also excluded cases in which the respondent was unable to provide coverage information or answer questions about factors associated with health insurance coverage, such as health status. The MUE data were weighted to be representative of the enrollment population from which the samples were drawn. Analyses were performed using Stata statistical software, which adjusts standard errors to account for the complex survey design (StataCorp 2003).
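To make the weighting step concrete, the sketch below computes a design-weighted proportion in Python. It is a minimal illustration of the point estimate only (Stata's survey commands additionally adjust standard errors for stratification and clustering), and the variable names and toy data are hypothetical rather than drawn from the MUE.

    # Minimal sketch of a survey-weighted proportion (Python). Data and names
    # are hypothetical; the MUE analyses used Stata's survey commands, which
    # also adjust standard errors for the complex design.

    def weighted_proportion(flags, weights):
        """Share of the enrollment population with flag == 1.

        flags   -- 1 if the enrollee misreported being uninsured, else 0
        weights -- survey weights scaling respondents to the enrollment population
        """
        return sum(f * w for f, w in zip(flags, weights)) / sum(weights)

    # Toy data: three respondents, one misreport.
    flags = [0, 1, 0]
    weights = [1200.0, 800.0, 1000.0]
    print(f"{weighted_proportion(flags, weights):.1%}")  # -> 26.7%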

Paralleling the general state surveys, all MUE surveys asked about respondents' health insurance coverage at the time of the survey. Each survey included a series of questions asking whether the target respondent (CA) and other members of the household (FL and PA) were currently covered by various sources of private and public health insurance. Respondents were allowed to say “yes” to multiple sources of insurance, and a verification question confirmed lack of coverage among those saying “no” to all insurance sources. The CA MUE instrument, modeled after the California Health Interview Survey (CHIS), was an omnibus health survey in which questions about insurance coverage come later in the survey (approximately eight sections into the instrument). The FL and PA surveys, whose primary focus was on health insurance coverage and access to health care, placed the coverage questions at the beginning of the survey.
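The question sequence just described implies a three-way classification of each known enrollee's responses, which is what Table 2 later tabulates. The sketch below (in Python) encodes that logic under our own illustrative names; it is not the surveys' actual coding scheme.

    # Classify a known Medicaid enrollee into the Table 2 reporting categories.
    # Illustrative only; names do not reflect the surveys' actual coding.

    def classify_enrollee(reported_sources, verified_uninsured):
        """reported_sources   -- set of coverage types the respondent said
                                 "yes" to (multiple "yes" answers were allowed)
           verified_uninsured -- True if the respondent said "no" to every
                                 source and the verification question
                                 confirmed the lack of coverage
        """
        if "medicaid" in reported_sources:
            return "correct insurance type"
        if reported_sources:
            return "wrong insurance type"   # reported other coverage instead
        if verified_uninsured:
            return "uninsured"              # the misreport biasing uninsurance up
        return "unclassified"               # missing coverage data (excluded)

    print(classify_enrollee({"employer"}, False))  # -> wrong insurance type
    print(classify_enrollee(set(), True))          # -> uninsured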

ANALYSIS

Our analysis takes the results from new and existing research presenting self-reports of health insurance coverage among known enrolled populations and then calculates the impact of survey misreports of health insurance coverage among Medicaid recipients on bias in estimates of uninsurance. This synthesis is aimed at improving our understanding of the form and magnitude of bias in uninsurance estimates derived from the various studies and methodologies (experimental and matching studies).

Tables 2 and 3 are divided into sections summarizing experimental and matching studies. In Table 2, column (2) contains the rate at which Medicaid enrollees correctly report that they have Medicaid, column (3) shows the percent of respondents who answer the survey as though they have some other type of health insurance coverage, and column (4) provides the percent of the Medicaid population that answers the survey as though they are uninsured.

In Table 3, column (1) provides the Medicaid population count represented in each study. For example, the CA MUE sample (row 1) represented the 3,309,192 adults enrolled in California's Medicaid program (referred to as Medi-Cal) on average during the months the CA MUE survey was in the field. Column (2) presents the percent of respondents known to be enrolled in Medicaid who mistakenly report being without insurance (the same as Table 2, column [4]). Column (3) is the product of the first two columns, representing the upward bias in the count of uninsured people; for example, the count of uninsured CA adults in 2004 was approximately 344,487 too high because 10.4 percent of adult Medi-Cal recipients reported having no insurance. Column (4) represents the size of the age-relevant population in the study year (e.g., based on the CPS, about 25.8 million adults lived in CA in 2004), and column (5) provides the size of the total population in the study year (e.g., about 35.4 million people lived in CA in 2004). Column (6) displays the percentage point upward bias to age-relevant estimates of the percent uninsured (column [3] divided by column [4]), and column (7) shows the upward bias to total population estimates of uninsurance (column [3] divided by column [5]) due to misreports of uninsurance among the insured.
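As a worked example of this arithmetic, the short calculation below reproduces the CA 2004 row of Table 3 from the quantities just described. It is a sketch in Python; the small gap between the computed count and the published 344,487 reflects rounding of the misreport rate to 10.4 percent.

    # Reproduce the CA 2004 row of Table 3 (figures from the text).
    medicaid_pop  = 3_309_192   # column (1): adult Medi-Cal enrollees
    pct_uninsured = 0.104       # column (2): share misreporting uninsurance
    adult_pop     = 25_831_000  # column (4): CA adults, CPS
    total_pop     = 35_394_000  # column (5): all CA residents, CPS

    bias_count = medicaid_pop * pct_uninsured   # column (3)
    bias_adult = 100 * bias_count / adult_pop   # column (6)
    bias_total = 100 * bias_count / total_pop   # column (7)

    print(f"{bias_count:,.0f}")  # -> 344,156 (published: 344,487; rounding)
    print(f"{bias_adult:.1f}")   # -> 1.3 percentage points (age-relevant rate)
    print(f"{bias_total:.1f}")   # -> 1.0 percentage point (total population rate)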

Table 2. Reports of Health Insurance Coverage in Experimental and Matching Studies

(1) Studies and Target Population | (2) Percent of Medicaid Population Answering Correct Insurance Type | (3) Percent of Medicaid Population Answering Wrong Insurance Type | (4) Percent of Medicaid Population Answering They Are Uninsured
Experimental studies
 Adults on Medicaid in CA 2004 | 83.1 | 6.5 | 10.4
 Nonelderly (<65) persons on Medicaid in FL 2004 | 87.0 | 8.1 | 4.9
 Persons on Medicaid in PA 2004 | 79.9 | 16.7 | 3.4
 Children on Medicaid in MN 1999* | 79.5 | 16.0 | 4.5
 Persons on Medicaid in MN 1999† | 54.0 | 41.9 | 4.1
 Adults on Medicaid in Blue Cross in MN 2003§ | 84.3 | 15.1 | 0.6
 Persons on Medicaid in MD 2004 | 87.5 | 8.1 | 4.4
Matching study
 Adults (age 15–64) on Medicaid in CA (pooled 1990–2000 data) | 72.3 | 6.0 | 21.7

Note: All experimental studies compared “point in time” uninsurance self-reports to “point in time” Medicaid enrollment, with the exception of MD, which compared “uninsured all year” self-reports to Medicaid enrollment “at some point during the year.”
* Blumberg and Cynamon (1999). Results from Study 1 only (MN) are included here.
† Call et al. (2001). This study did not allow for multiple types of insurance coverage; those responding “yes” to Medicare were not asked the Medicaid question, thereby potentially resulting in a lower level of accurate Medicaid reporting.

Table 3. Experimental Estimates of Medicaid Persons Being Coded as Uninsured in Surveys and the Percentage Point Upward Bias in Uninsured Rate Compared with One Matching Study*

Studies and Target Population | (1) Size of the Medicaid Population Studied† | (2) Percent of Medicaid Population Answering They Are Uninsured | (3) Upward Bias in Number of Uninsured (Columns [1 × 2]) | (4) Size of the Total Relevant Population (Insured and Uninsured)‡ | (5) Size of the Total Population (Insured and Uninsured)‡ | (6) Percentage Point Upward Bias in the Uninsured Rate in Relevant Population (Columns [3/4]) | (7) Percentage Point Upward Bias in the Uninsured Rate (Columns [3/5])
Experimental studies
 Adults on Medicaid in CA 2004 | 3,309,192 | 10.4 | 344,487 | 25,831,000 | 35,394,000 | 1.3 | 1.0
 Nonelderly (<65) persons on Medicaid in FL 2004 | 1,868,073 | 4.9 | 92,096 | 14,056,000 | 16,921,000 | 0.7 | 0.5
 Persons on Medicaid in PA 2004 | 1,559,687 | 3.4 | 53,497 | 12,155,000 | 12,155,000 | 0.4 | 0.4
 Children on Medicaid in MN 1999§ | 215,469 | 4.5 | 9,696 | 1,415,000 | 4,833,000 | 0.7 | 0.2
 Persons on Medicaid in MN 1999 | 334,378 | 4.1 | 13,709 | 4,833,000 | 4,833,000 | 0.3 | 0.3
 Adults on Medicaid in Blue Cross in MN 2003 | 51,473 | 0.6 | 309 | 3,809,000 | 5,054,000 | 0.0 | 0.0
 Persons on Medicaid in MD 2004** | 713,692 | 4.4 | 31,402 | 5,493,000 | 5,493,000 | 0.6 | 0.6
Matching study††
 Adults (age 15–64) on Medicaid in CA (pooled 1990–2000 data)‡‡ | n/a | 21.7 | n/a | n/a | n/a | 1.0 | n/a

Note: All experimental studies compared “point in time” uninsurance self-reports to “point in time” Medicaid enrollment, with the exception of MD, which compared an “uninsured all year” self-report with Medicaid enrollment “at some point during the year.”
* All estimates are of the noninstitutionalized population.
† Based on enrollment data.
‡ Population totals are taken from the Current Population Survey for the corresponding year, state, and age group (Table HI05). Accessed July 25, 2007.
§ Blumberg and Cynamon (1999). Results from Study 1 only (MN) are included here.
** Eberly, Pohl, and Davis (2005).
†† No population totals are provided because the study design pools multiple years of data.
‡‡ Klerman, Ringel, and Roth (2005).

FINDINGS

There are three key findings from the experimental studies. First, relatively few persons known to have Medicaid coverage incorrectly indicate that they are uninsured in surveys. As shown in Table 2, on average across the seven experiments, 4.6 percent of those with coverage report no insurance, ranging from a high of 10.4 percent among adults on Medicaid in CA in 2004 to a low of 0.6 percent of adults enrolled in Medicaid in Minnesota in 2003.[2]

Second, the experimental results suggest greater accuracy in reports of health insurance type than previously assumed. Among those who do not report Medicaid, most report the wrong type of insurance coverage rather than a lack of insurance altogether (the exception is the CA MUE). As shown in Table 2, the majority of enrollees accurately report Medicaid—79 percent on average across the studies. The study by Call et al. (2001) shows the lowest accuracy, which can be attributed to a survey design that did not allow respondents to answer the full array of insurance type questions.[3]

The third major result, presented in Table 3, is that errors in reporting introduce little bias to overall estimates of uninsurance (either age-relevant or total population rates). The upward bias to estimates of uninsurance in the experiments ranged from a high of 1.3 percentage points for CA adults in 2004 to a low of 0.0 percentage points among Minnesotans in 2003, for simple averages across the seven experiments of 0.6 percentage points (age-relevant rates) and 0.4 percentage points (total population rates).
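The simple averages cited in these findings can be checked directly against the table values; the snippet below (Python) does so, with the seven experimental rows transcribed from Tables 2 and 3.

    # Verify the simple averages across the seven experiments.
    misreport_uninsured = [10.4, 4.9, 3.4, 4.5, 4.1, 0.6, 4.4]  # Table 2, col (4)
    bias_age_relevant   = [1.3, 0.7, 0.4, 0.7, 0.3, 0.0, 0.6]   # Table 3, col (6)
    bias_total_pop      = [1.0, 0.5, 0.4, 0.2, 0.3, 0.0, 0.6]   # Table 3, col (7)

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"{mean(misreport_uninsured):.1f}")  # -> 4.6 percent report no insurance
    print(f"{mean(bias_age_relevant):.1f}")    # -> 0.6 percentage points
    print(f"{mean(bias_total_pop):.1f}")       # -> 0.4 percentage points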

DISCUSSION

Tables 2 and 3 show substantial variation in the magnitude of misreported uninsurance across the experimental studies. Variation is expected given differences in the populations studied, sampling error, instrument design, and survey operations (Call, Davern, and Blewett 2007). For example, the FL and PA surveys ask respondents about health insurance coverage early in the instrument, which may improve reporting accuracy relative to the CA MUE, which includes these questions later in the survey. Another viable explanation for the higher rate of misreports among Medi-Cal recipients is that the CA sample includes only adults. Analyses by age in FL and PA (Call et al. 2006b) indicate that reports are less accurate for adult enrollees than for adults reporting on behalf of child enrollees. Finally, CA has a higher percentage of Medicaid enrollees receiving partial benefits such as emergency, family planning, or tuberculosis-related services (18.3 percent, compared with 2.4 percent in FL and 9.3 percent in PA [Call et al. 2006b]). Partial-benefit enrollees are less accurate reporters than those receiving comprehensive Medicaid benefits (Kincheloe et al. 2006).

In contrast to the other experiments that use a point-in-time measure of insurance coverage, the study by Eberly, Pohl, and Davis (2005) mimics the Current Population Survey's Annual Social and Economic Supplement (CPS-ASEC) questions about previous calendar year coverage, which is associated with higher measurement error (Sudman, Bradburn, and Schwarz 1996). Contrary to expectations, their results are comparable with the other experiments (aside from Call et al. 2001). The lower-than-anticipated rate of misreporting of uninsurance among Maryland Medicaid enrollees may be due to the addition of a question to the end of the CPS-ASEC insurance coverage series providing colloquial names for state-specific public programs, which significantly increased reports of coverage over the standard CPS-ASEC instrument (Eberly, Pohl, and Davis 2005).

How do the state experimental studies compare with the matching study? The matching study[4] by Klerman, Ringel, and Roth (2005)[5] finds a much higher rate of reporting error (second section of Table 2): 21.7 percent of adults with Medi-Cal enrollment over these 11 years were inferred to have reported no insurance in the CPS-ASEC, more than twice the highest rate found in the state experimental studies. This translates into an upward bias in the estimate of uninsurance of approximately one percentage point over this period, which is similar to the experimental studies (the use of 11 years of pooled data precludes our providing relevant population totals for these cells in Table 3).

CONCLUSIONS AND IMPLICATIONS

Some speculate that the Medicaid undercount results, in part, from people with Medicaid failing to report this coverage and instead reporting that they are uninsured (Lewis, Elwood, and Czajka 1998; Klerman, Ringel, and Roth 2005). The experimental evidence shows that people on Medicaid are fairly accurate reporters of insurance coverage (79 percent, on average, report Medicaid), and when they do fail to report Medicaid, most report the wrong type of coverage rather than being uninsured. Only a modest share of Medicaid enrollees erroneously report not having health insurance (an average of 4.6 percent across the seven experiments), resulting in similarly modest bias in estimates of uninsurance at a point in time (less than a one percentage point average increase in the rate of uninsurance).

The results of the experimental studies have several implications. First, they raise concerns about the reassignment of survey respondents from uninsured to Medicaid coverage. Two simulation models have been created to adjust the CPS-ASEC coverage estimates to match administrative counts of Medicaid: the ARC model (Callahan and Mays 2005) and the TRIM3 model (Urban Institute 2006). Both models draw a substantial proportion of reassigned Medicaid cases from the ranks of the uninsured (54 percent for ARC, compared with 32 percent for TRIM3 [Czajka 2005]), and both assume two to three times more bias in estimates of the uninsured than the experimental studies indicate. However, both simulation models adjust data from the CPS-ASEC, which implements an annual measure of health insurance coverage appearing toward the end of a long survey primarily concerned with labor force and program participation. Thus, the CPS-ASEC likely contains more measurement error than instruments that ask about point-in-time coverage, as used by all but the Maryland experiment (Sudman, Bradburn, and Schwarz 1996). There are longstanding concerns that estimates of uninsurance from the CPS-ASEC are too high (primarily due to survey design issues), as well as uncertainty about what this uninsurance estimate truly represents (annual or point-in-time) (DeNavas-Walt, Proctor, and Lee 2005); adjusting these estimates to match all-year Medicaid enrollment counts may therefore not be appropriate, because the estimates of other types of coverage (and of the uninsured) do not appear to resemble all-year estimates. Of primary concern here is that states emulating this practice of adjusting survey estimates to be consistent with Medicaid enrollment data (Washington State Office of Financial Management 2003) may be introducing other unknown bias into coverage estimates. The Medicaid undercount is but one source of measurement bias in health insurance surveys.

Second, and most importantly, findings from the experimental studies should provide reassurance about the merits of using general population survey data to inform policy decisions, especially those that use point-in-time measures of health insurance coverage. The cumulative evidence indicates that respondents do a good job of self-reporting Medicaid coverage as well as whether they do or do not have insurance; therefore, harsh criticisms of survey estimates of uninsurance are unfounded (Hunter 2004; Joint Economic Committee 2004).

Several issues are worth pursuing in future research. The first is the question of comparability between survey and administrative data. Surveys are designed to answer questions about the distribution of attitudes, opinions and characteristics of populations. Administrative data are collected for program management and payment purposes, gathering information at enrollment and redetermination periods on family and individual characteristics and statuses that can be quite dynamic. Perhaps it is unrealistic to think that data collected for such disparate purposes would be directly comparable. Both data sources likely contribute to the discrepancy in counts of public health care program enrollees (Hoffman and Holahan 2005; Call et al. 2006a; Kincheloe et al. 2006), yet both sources of data are valuable and should be used as intended.

Future research should look to potential sources of upward bias in survey estimates of Medicaid coverage and downward bias in survey estimates of uninsurance. There is evidence that some people with commercial insurance misreport that they have Medicaid coverage (Davern et al. 2006a). Further, people living in households with Medicaid enrollees sometimes report having Medicaid when there is no indication they are enrolled and they self-report no other kind of coverage (Davidson 2005). There is also reason to believe that some uninsured persons report having private coverage in surveys (Kreider and Hill 2006). Based on the design of coverage questions alone—asking about as many as eight potential sources of coverage—an uninsured person also has many chances to mistakenly report coverage or feel pressured to offer this socially desirable response. Therefore, it is plausible that surveys may in part undercount the number uninsured.

In summary, although there is evidence that Medicaid enrollees mistakenly report being uninsured resulting in some upward bias in estimates of uninsurance, for the most part this bias is not large. The results point to the importance of improving survey measurement to help respondents correctly report the presence or absence of health insurance coverage and the source of coverage. Working toward the best estimate of the uninsured should help refocus the debate on the needs of the uninsured as opposed to the count of the uninsured. We conclude, however, that policy makers can take comfort that point-in-time survey estimates of uninsurance are usefully accurate.

Acknowledgments

This study was funded by a grant from the Robert Wood Johnson Foundation's (RWJF) Changes in Health Care Financing and Organization (HCFO) Initiative. We thank John Holahan of the Urban Institute for his guidance on this project and Linda Bilheimer of the National Center for Health Statistics (formerly at RWJF) for encouraging our pursuit of this project. We appreciate the time and insights of all those who participated in this study. In California, this includes E. Richard Brown, Wei Yen, and Jennifer Kincheloe at UCLA. In Florida, we thank Paul Duncan, Allyson Hall, and Colleen Porter at the University of Florida. In Pennsylvania, this includes Brian Robertson and Patrick Madden of Market Decisions LLC; Ed Naugle and Patricia Stromberg at the Pennsylvania Insurance Department; and William Columbus and Jerry Koerner at the Pennsylvania Department of Public Welfare.

NOTES

1

Details about the sampling design and matching process for verifying enrollment are available in Call et al. (2006b).

2

Consistent with the MUE methodology, in each experiment a respondent's enrollment at the time of the survey was confirmed because of the lag between the time samples were drawn and surveys were completed.

3

Those responding “yes” to the Medicare question were not asked about Medicaid coverage (the first and second question in the series, respectively); these two programs are easily confused in surveys (Pascale 2001).

4

Individual-level survey responses were matched with enrollment data using Social Security numbers (SSNs) for CPS-ASEC respondents between the ages of 15 and 64 for whom SSNs were available.

5

A similar matching study was conducted by Card, Hildreth, and Shore-Sheppard (2004). Their analysis does not distinguish between those who report the wrong type of insurance and those who do not report any coverage and is therefore excluded from our analysis.

REFERENCES

1. Bennefield R L. “A Comparative Analysis of Health Insurance Coverage Estimates: Data from CPS and SIPP.” Working Paper no. 218. Washington, DC: U.S. Census Bureau; 1996. Available at http://www.census.gov/dusd/MAB/wp218.pdf (accessed January 19, 2007).
2. Blewett L A, Good M B, Call K T, Davern M. “Monitoring the Uninsured: A State Policy Perspective.” Journal of Health Politics, Policy and Law. 2004;29(1):107–45. doi: 10.1215/03616878-29-1-107.
3. Blumberg S J, Cynamon M L. “Misreporting Medicaid Enrollment: Results of Three Studies Linking Telephone Surveys to State Administrative Records.” Presented at the Seventh Conference on Health Survey Research Methods; September 24–27, 1999; Williamsburg, VA. Available at http://www.cdc.gov/nchs/data/slaits/conf07.pdf#Misreporting (accessed January 19, 2007).
4. Blumberg S, Davis K, Khare M, Martinez M. “The Effect of Survey Follow-up on Nonresponse Bias: Joint Canada/United States Survey of Health, 2002–2003.” Presented at the Annual Meeting of the American Association for Public Opinion Research; Miami Beach, FL; 2005.
5. Call K T, Davern M, Blewett L A. “Estimates of Health Insurance Coverage: Comparing State Surveys with the Current Population Survey.” Health Affairs. 2007;26(1):269–78. doi: 10.1377/hlthaff.26.1.269.
6. Call K T, Davidson G, Hall A, Kincheloe J, Blewett L A, Brown E R. “Administrative Data Case Study Report: Sources of Discrepancy between Survey-Based Estimates of Medicaid Coverage and State Administrative Counts.” Report prepared for Health Care Financing & Organization. Minneapolis, MN: State Health Access Data Assistance Center; 2006a.
7. Call K T, Davidson G, Nyman R, Davern M, Blewett L A. “Replication of the Medicaid Undercount Experiment: Survey Sources of Discrepancy between Survey-Based Estimates of Medicaid Coverage and State Administrative Counts.” Report prepared for Health Care Financing & Organization. Minneapolis, MN: State Health Access Data Assistance Center; 2006b.
8. Call K T, Davidson G, Sommers A S, Feldman R, Farseth P, Rockwood T. “Uncovering the Missing Medicaid Cases and Assessing Their Bias for Estimates of the Uninsured.” Inquiry. 2001;38(4):396–408. doi: 10.5034/inquiryjrnl_38.4.396.
9. Callahan C M, Mays J W. “Estimating the Number of Individuals in the U.S. without Health Insurance.” Working Paper. Annandale, VA: Actuarial Research Corporation; 2005.
10. Card D, Hildreth A K G, Shore-Sheppard L D. “The Measurement of Medicaid Coverage in the SIPP: Evidence from a Comparison of Matched Records.” Journal of Business and Economic Statistics. 2004;22(4):410–20.
11. Congressional Budget Office. “How Many People Lack Health Insurance and for How Long?” Washington, DC: Congressional Budget Office; 2003. Available at http://www.cbo.gov/showdoc.cfm?index=4210&sequence=0 (accessed January 19, 2007).
12. Czajka J. “Review of ARC and Urban Institute Adjustments to CPS Medicaid Enrollment.” Presented at SHADAC's meeting, Survey and Administrative Data Sources of the Medicaid Undercount; May 5, 2005; Washington, DC.
13. Davern M, Call K T, Ziegenfuss J, Davidson G, Beebe T, Blewett L A. “Validating Health Insurance Coverage Survey Estimates: A Comparison between Self-Reported Coverage and Administrative Data Records.” Working Paper. Minneapolis, MN: University of Minnesota; 2006a.
14. Davern M, Call K T, Ziegenfuss J, McAlpine D, Beebe T J. “Are Low Response Rates Hazardous to Your Health?” Paper presented at the Telephone Survey Methodology II Conference; January 12, 2006; Miami, FL. 2006b.
15. Davidson G. “Early Results from the Pennsylvania Medicaid Undercount Experiment.” Presented at SHADAC's meeting, Survey and Administrative Data Sources of the Medicaid Undercount; May 5, 2005; Washington, DC.
16. DeNavas-Walt C, Proctor B D, Lee C H. “Income, Poverty, and Health Insurance Coverage in the United States: 2004.” Current Population Reports, P60-229. Washington, DC: U.S. Census Bureau; 2005.
17. Dubay L C, Kenney G M. “The Effects of Medicaid Expansions on Insurance Coverage of Children.” Future of Children. 1996;6(1):152–61.
18. Eberly T, Pohl M, Davis S. “The Maryland Current Population Survey Medicaid Undercount Study.” Baltimore, MD: UMBC Center for Health Program Development and Management; 2005. Available at http://www.chpdm.org/publications/CPSSurvey_Report%20July%2025%202005.pdf (accessed January 19, 2007).
19. Groves R M. “Nonresponse Rates and Nonresponse Bias in Household Surveys.” Public Opinion Quarterly. 2006;70(4):646–75.
20. Hoffman C, Holahan J. “What Is the Current Population Survey Telling Us about the Number of Uninsured?” Issue Paper no. 7384. Washington, DC: Kaiser Commission on Medicaid and the Uninsured; 2005.
21. Holahan J, Winterbottom C, Rajan S. “A Shifting Picture of Health Insurance Coverage.” Health Affairs. 1995;14(4):253–64. doi: 10.1377/hlthaff.14.4.253.
22. Holle R, Hochadel M, Reitmeir P, Meisinger C, Wichman H E. “Prolonged Recruitment Efforts in Health Surveys.” Epidemiology. 2006;17(6):639–43. doi: 10.1097/01.ede.0000239731.86975.7f.
23. Hunter D. “Counting the Uninsured: Why Congress Should Look Beyond the Census Figures.” Web Memo no. 555. Washington, DC: Heritage Foundation; 2004. Available at http://www.heritage.org/Research/HealthCare/wm555.cfm (accessed January 19, 2007).
24. Joint Economic Committee. “The Complex Challenge of the Uninsured.” Washington, DC: Joint Economic Committee of the U.S. Congress; 2004.
25. Keeter S, Kennedy C, Dimock M, Best J, Craighill P. “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey.” Public Opinion Quarterly. 2006;70(4):125–48.
26. Keeter S, Kohut A, Miller A, Groves R, Presser S. “Consequences of Reducing Non-Response in a Large National Telephone Survey.” Public Opinion Quarterly. 2000;64(2):125–48. doi: 10.1086/317759.
27. Kincheloe J E, Brown E R, Frates J, Call K T, Yen W, McRae J A. “Can We Trust Population Surveys to Count Medicaid Enrollees and the Uninsured?” Health Affairs. 2006;25(4):1163–7. doi: 10.1377/hlthaff.25.4.1163.
28. Klerman J A, Ringel J S, Roth B. “Under-Reporting of Medicaid and Welfare in the Current Population Survey.” Working Paper. Santa Monica, CA: RAND; March 2005.
29. Kreider B, Hill S C. “Partially Identifying Treatment Effects with an Application to Covering the Uninsured.” Working Paper; 2006. Available at http://www.econ.iastate.edu/faculty/kreider/vita/papers/downloads/kreider-hill.pdf (accessed January 19, 2007).
30. Lewis K, Elwood M R, Czajka J. Counting the Uninsured: A Review of the Literature. Washington, DC: The Urban Institute; 1998.
31. Pascale J. “The Role of Questionnaire Design in Medicaid Estimates: Results from an Experiment.” Paper presented at the Washington Statistical Society; March 21, 2001.
32. StataCorp. Stata Statistical Software, v. 8.2. College Station, TX: StataCorp; 2003.
33. Sudman S, Bradburn N, Schwarz S. Thinking about Answers. San Francisco: Jossey-Bass; 1996.
34. Swartz K, Purcell J. “Counting Uninsured Americans.” Health Affairs. 1989;8(4):193–7. doi: 10.1377/hlthaff.8.4.193.
35. Triplett T. What Is Gained from Additional Call Attempts and Refusal Conversion and What Are the Cost Implications? Washington, DC: The Urban Institute; 2002.
36. Urban Institute. “Transfer Income Model, Version 3 (TRIM3).” TRIM3 project website; 2006. Available at http://trim.urban.org/T3Welcome.php (accessed January 19, 2007).
37. Washington State Office of Financial Management. “Accounting for Medicare and Medicaid Recipients.” 2002 Washington State Population Survey, Research Brief no. 20; 2003. Available at http://www.ofm.wa.gov/researchbriefs/brief020.pdf (accessed January 19, 2007).
