Abstract
Objective
To compare estimates of health coverage from the pre‐ and post‐ redesign of the Current Population Survey (CPS) Annual Social and Economic Supplement.
Data Sources/Study Setting
The CPS 2013 Content Test.
Study Design
A test of the old and new CPS in which the control panel was a subset of the CPS production cases interviewed by phone and the test panel was conducted in parallel (also by phone) with a sample that had already completed the final rotation of the CPS. Outcome variables tested include uninsured and coverage type by subgroup and calendar year versus point‐in‐time estimates.
Data Collection/Extraction Methods
Census Bureau telephone interviewers.
Principal Findings
The odds of having coverage in the past calendar year were higher under the new than the old CPS. Within the new CPS, calendar year estimates of coverage were higher than and distinct from point‐in‐time estimates. There were few statistically significant differences in coverage across demographic subgroups.
Conclusions
The new method reduced presumed underreporting of past year coverage, and the integrated point‐in‐time/calendar‐year series effectively generated distinct measures of each within the same questionnaire.
Keywords: Insurance, redesign, experiment, measurement error
The U.S. Census Bureau's Current Population Survey Annual Social and Economic Supplement (CPS) is the most widely cited and used source of estimates on health insurance coverage (Blewett and Davern 2006). Well before health reform posed additional measurement challenges, many researchers were critical of the CPS because its estimate of the number of uninsured appeared too high. The chief evidence for this conclusion was that the CPS estimate, which defined the uninsured as those without coverage throughout the calendar year, was on par with other surveys’ estimates of the number of uninsured at a point‐in‐time. By definition, the CPS, which classifies a person as uninsured only if he or she lacked coverage for every day of a calendar year, should estimate a smaller number of uninsured than a survey that classifies a person as uninsured if he or she lacked coverage on the date of interview (or any other specific point in time). The fact that these estimates are close led researchers to conclude that the CPS was missing reports of past coverage. Indeed, the CPS calendar‐year estimate of the uninsured is higher than that of most other surveys that measure all‐year coverage. For example, in a comparison of major national surveys, estimates of the uninsured throughout calendar year 2012 were 15.4 percent in the CPS, 12.7 percent in the Medical Expenditure Panel Survey, and 11.1 percent in the National Health Interview Survey (NHIS) (State Health Access Data Assistance Center [SHADAC] 2013).
Given these divergent estimates, a comprehensive research agenda has been underway at the Census Bureau since 1999 to examine and reduce measurement error associated with health insurance estimates from the CPS questionnaire. Research activities included an extensive and ongoing literature review, multiple rounds of cognitive testing and behavior coding, interviewer and respondent debriefings, split‐ballot field tests, and record‐check studies (Hess et al. 2001; Pascale 2001a, 2001b, 2004, 2008, 2009; Pascale et al. 2009). This research enterprise demonstrated that there were three key features of the CPS questionnaire that were associated with measurement error. First was the calendar year reference period, combined with the 3‐month lag between the end of the reference period (December) and the interview date (February–April of the subsequent year). Second was the household‐level design, in which household members were asked about coverage in general terms (“Was anyone in the household covered by…”) rather than specific terms (“Was [NAME] covered by…”). Third was the structure of the questionnaire regarding source of coverage. The CPS asks a series of eight yes/no questions pertaining to specific sources of coverage (employment, Medicare, Medicaid, etc.). This laundry‐list approach was problematic for a number of reasons. For example, respondents often did not know the status of other household members’ coverage at that level of detail, they confused one plan type for another, and they reported the same plan more than once.
After more than a decade of research on the character of measurement error in the CPS, a fundamental redesign of the health insurance module was developed which addressed each problematic feature of the questionnaire (Pascale 2014). With regard to the reference period, questions ask first about coverage on the day of the interview. Follow‐up questions determine when the coverage started, and probe for any gaps in coverage from January of the prior year up to and including the present (in total, a 15‐month continuous time period). The result is an integrated set of questions on both calendar‐year and point‐in‐time coverage that renders the same data as the old CPS, as well as person‐plan‐month level variables. Regarding the household‐level design, the new CPS employs a hybrid person‐household approach in which each person is asked about by name, but once a particular plan or plan type is identified, questions are asked to determine which other household members are also covered by that same plan. This information is harnessed so that the question series on subsequent household members is much abbreviated for any member already mentioned as covered. On coverage type, the redesign starts with a single yes/no question on coverage status, and then determines general source of coverage (job, government/state, other) and follow‐up questions tailored to each general source capture the necessary detail (policyholder/dependents, type of government plan, etc.).
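The integrated series described above can be reduced to a small sketch of how person‐plan‐month data yield both calendar‐year and point‐in‐time measures. The record layout and month numbering below are hypothetical illustrations, not the actual CPS variables:

```python
# Sketch of the person-plan-month logic: months are numbered 1-15, where
# 1 = January of the reference (prior) year, 12 = December of the
# reference year, and 15 = March, the interview month. Hypothetical
# layout only; the actual CPS file format differs.
INTERVIEW_MONTH = 15
REFERENCE_YEAR_MONTHS = set(range(1, 13))

def coverage_indicators(spells):
    """spells: list of (plan_type, start_month, end_month) tuples,
    inclusive on both ends, within the 15-month window."""
    covered = set()
    for _plan, start, end in spells:
        covered.update(range(start, end + 1))
    return {
        # Covered at any time during the reference calendar year
        "calendar_year": bool(covered & REFERENCE_YEAR_MONTHS),
        # Covered on the day of the interview (point-in-time)
        "point_in_time": INTERVIEW_MONTH in covered,
        # The old CPS concept: no coverage in any reference-year month
        "uninsured_all_year": covered.isdisjoint(REFERENCE_YEAR_MONTHS),
    }

# Example: ESI from January through June of the reference year, then a
# gap, then Medicaid beginning the month before the interview.
flags = coverage_indicators([("ESI", 1, 6), ("Medicaid", 14, 15)])
```

Because every plan is stored as a spell of months, the same records roll up to calendar‐year estimates, point‐in‐time estimates, and gap measures without separate question batteries.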
While the purpose of the CPS redesign was to reduce measurement error, the Affordable Care Act was implemented during the redesign's development, and adaptations were incorporated accordingly (Pascale et al. 2013). In March 2013, the Census Bureau conducted the CPS ASEC 2013 Content Test—a large‐scale field test comparing the redesign to the status quo—and results were favorable both from an operations standpoint and in terms of the estimates. Thus, the redesigned CPS was launched into full‐scale production in early spring of 2014 and was used to produce estimates for calendar year 2013. Estimates for calendar years 2013 and 2014 were produced using identical methodology, enabling a clean analysis of the effects of the ACA in 2014 and beyond. The adoption of the CPS redesign in 2014 does, however, signify a break in series; estimates for calendar year 2013 (based on the new instrument) will not be directly comparable to estimates for calendar year 2012 (based on the old instrument).
This paper focuses on the 2013 test results and has three main objectives. First is to examine whether the redesign reduced presumed measurement error by generating more reports of past coverage than the traditional CPS. A limitation of this analysis (and all such studies that seek to measure the uninsured) is that it lacks a gold standard—a single, comprehensive, accurate source of data on those with and without coverage. Years of experience with the old CPS instrument suggest that it produces too few reports of past coverage and an upwardly biased estimate of the uninsured. Therefore, in this analysis, we interpreted more reports of coverage as suggesting less biased measurement. Our second objective is to assess whether the CPS redesign estimates of calendar‐year coverage are higher than and distinct from its point‐in‐time estimates. The third objective is to compare estimates from the new and old CPS design, by detailed plan type and by subgroup, to determine if these coverage types and subgroups were differentially affected by the new questionnaire.
Methods
The CPS is a monthly labor survey of the civilian noninstitutional population. Interviews are conducted in person or by telephone. The survey is based on a rotating panel design in which households are interviewed once a month for four consecutive months, are dormant for eight months, and are then in sample for another four consecutive months, for a total span of 16 months in sample. In February through April of each year, the basic monthly questionnaire is supplemented with additional questions on income and health insurance (the ASEC).
The 2013 test compared the old (“control”) and new (“test”) CPS health insurance modules. In the test panel (n = 16,401 individuals), the new questions on health insurance were embedded within the full CPS ASEC questionnaire and administered to “retired” CPS sample—households that had completed the full 16‐month CPS series and were no longer participating in the CPS. Households that had previously participated in the ASEC during that 16‐month period were excluded from the test panel. All test panel interviews were conducted in March 2013 by experienced CPS interviewers. For cost reasons, the mode was limited to computer‐assisted telephone interviewing (CATI). The control panel was selected from among regular production CPS ASEC interviews—that is, those interviews being conducted as part of official data collection, using the old health insurance questions. To create a comparison group that matched the test panel conditions as closely as possible, only cases from production that were also conducted via CATI in March 2013 (n = 13,228 individuals) were used in analysis (see Figure S1 for a visual display of sample selection).
The household response rate to the basic monthly interview component was 90.7 percent in the control panel and 43.1 percent in the test panel (Hornick 2013). The divergent response rates are concerning if they signal that the test and control panels differ in systematic ways that also influence health insurance. Indeed, a nonresponse analysis found that age, education, and household size all appeared to drive differential response (Brault 2014). One possible explanation for the relatively high nonresponse in the test panel is that test households had participated in eight rounds of the CPS and were told at the end of their last interview that they were finished with the survey. Although training prepared interviewers to explain why they were calling back, interviewers in the test panel were at a distinct disadvantage in gaining cooperation relative to their control panel counterparts: the control panel was fairly evenly distributed across the 8 months in sample, and none of its households had been told their CPS participation was concluded.
In an effort to achieve covariate balance between the test and control panels, both samples were separately weighted to a common set of control totals. Poststratification adjustments for age, sex, and race/ethnicity were conducted to reduce coverage and nonresponse bias (Hornick 2013). The choice of poststratification variables was driven by what is currently used in the production CPS. While this decision allowed us to mimic the regular production environment to the extent possible, it meant that the weighting did not incorporate other informative variables, such as household size or education, that are related to health insurance status and reporting behavior. Furthermore, because an experimental version of the income questions was being tested at the same time, we were unable to incorporate income into the weights.
The 2013 content test was fielded to test several questionnaire changes, not just the new health insurance series. As a result, a single set of weights was created for analysis of multiple sections of the CPS questionnaire, using the full set of cases that completed the CPS basic interview. However, some respondents who completed the CPS basic interview dropped out of the ASEC before completing the entire health insurance module, and these cases were dropped from this analysis. For this reason, the weights in this analysis do not sum to the total U.S. population. After dropping test cases that did not complete the health insurance module, a second round of poststratification adjustments was conducted on the control panel. These final adjustments scaled the control panel weights so that they summed to the weights of the analytical sample in the test panel. All analyses we report are weighted, and standard errors account for the complexity of the sample design using successive difference replication (Fay and Train 1995).
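The poststratification step described above can be illustrated with a minimal sketch that scales base weights within adjustment cells to a common set of control totals. The cell labels and numbers are hypothetical, and this is not the Census Bureau's production weighting code:

```python
# Illustrative poststratification: within each adjustment cell (here a
# simplified stand-in for the age/sex/race-ethnicity cells used in the
# test), scale weights so they sum to the cell's control total.
from collections import defaultdict

def poststratify(records, control_totals):
    """records: list of dicts with 'cell' and 'weight' keys.
    control_totals: dict mapping cell -> target population total.
    Returns new weights; each cell then sums to its control total."""
    cell_sums = defaultdict(float)
    for r in records:
        cell_sums[r["cell"]] += r["weight"]
    return [
        r["weight"] * control_totals[r["cell"]] / cell_sums[r["cell"]]
        for r in records
    ]

# Hypothetical sample with two cells.
sample = [
    {"cell": "19-34/F", "weight": 100.0},
    {"cell": "19-34/F", "weight": 150.0},
    {"cell": "65+/M", "weight": 80.0},
]
controls = {"19-34/F": 500.0, "65+/M": 120.0}
new_weights = poststratify(sample, controls)
```

Weighting both panels to the same control totals in this fashion forces the two samples to agree on the poststratification margins even when their response patterns differ.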
Table 1 compares weighted demographic and economic characteristics between the test and control panels. The table demonstrates that, after weighting, there were few observable differences between the two panels. Education was the only variable that showed a statistically significant association (at the p = .05 level) with treatment condition. The similarity of the two samples along observable dimensions, after weighting, gives us confidence that the unconditional treatment effects we observe will reflect the impact of the new health insurance module relative to the old instrument, and not a third confounding variable. However, in addition to unadjusted differences, we also report results from multivariate regression models that control for all variables listed in Table 1.
Table 1.
Characteristic | New CPS Design % | SE of % | Old CPS Design % | SE of % | New − Old | p |
---|---|---|---|---|---|---|
Population size | N = 243,474,924 | | N = 243,474,924 | | n/a | |
Age | ||||||
0–18 | 25.19 | 0.381 | 24.65 | 0.5599 | 0.53 | .9158 |
19–34 | 19.68 | 1.2126 | 20.21 | 0.5194 | −0.53 | |
35–64 | 40.40 | 0.658 | 40.40 | 0.5782 | 0.00 | |
65+ | 14.73 | 0.39 | 14.73 | 0.427 | 0.00 | |
Sex | ||||||
Female | 51.42 | 0.5719 | 51.42 | 0.4419 | 0.00 | 1.0000 |
Male | 48.58 | 0.5719 | 48.58 | 0.4419 | 0.00 | |
Race/ethnicity | ||||||
White/non‐Hispanic | 65.69 | 0.558 | 65.69 | 0.9865 | 0.00 | 1.0000 |
Black/non‐Hispanic | 10.91 | 0.4783 | 10.91 | 0.7295 | 0.00 | |
Other/non‐Hispanic | 8.05 | 0.2928 | 8.05 | 0.5763 | 0.00 | |
Hispanic | 15.34 | 0.451 | 15.34 | 0.8231 | 0.00 | |
Education | ||||||
Less than high school | 16.25 | 0.4856 | 14.32 | 0.555 | 1.93 | .0258 |
High school graduate | 25.00 | 0.6249 | 25.79 | 0.6248 | −0.78 | |
Some college thru AA | 26.63 | 0.6815 | 28.25 | 0.5784 | −1.62 | |
Bachelors or higher | 32.12 | 0.6501 | 31.65 | 0.7294 | 0.47 | |
Work status | ||||||
Full time/full year | 40.47 | 0.5351 | 38.98 | 0.688 | 1.48 | .1487 |
Less than full time/full year | 37.07 | 0.62 | 38.86 | 0.6764 | −1.79 | |
Did not work | 22.46 | 0.6984 | 22.15 | 0.4665 | 0.31 | |
Citizenship | ||||||
U.S. citizen | 93.65 | 0.413 | 94.61 | 0.3984 | −0.96 | .1061 |
Non‐U.S. citizen | 6.35 | 0.413 | 5.39 | 0.3984 | 0.96 | |
Self/proxy | ||||||
Self | 39.41 | 0.4916 | 40.42 | 0.4181 | −1.01 | .1037 |
Proxy | 60.59 | 0.4916 | 59.58 | 0.4181 | 1.01 | |
Household relationship | ||||||
Household respondent | 39.41 | 0.4916 | 40.42 | 0.4181 | −1.01 | .1170 |
Child of reference person | 30.45 | 0.7919 | 28.95 | 0.5415 | 1.51 | |
Other | 30.14 | 0.4475 | 30.64 | 0.396 | −0.50 | |
Household size | ||||||
1–4 person household | 81.17 | 0.9723 | 82.20 | 0.9848 | −1.03 | .4495 |
5+ person household | 18.83 | 0.9723 | 17.80 | 0.9848 | 1.03 |
N is the weighted population count. Unweighted n = 29,629 (Test n = 16,401; Control n = 13,228). Chi‐square p‐values are reported. Bold text indicates statistical significance at the 0.05 level.
Source: 2013 CPS ASEC Content Test.
Even after controlling for observed differences, our comparisons of interest could be biased by unobserved characteristics that were associated with membership in the test condition and the outcomes of interest. In the test panel, respondents conducted their final CPS production interview 1 to 2 years prior to the content test, and, for the household to be eligible, respondents needed to be reachable at the same phone number and to live at the same address as their final CPS production interview. Thus, the retired test sample could be biased toward more stable households and/or it could be subject to a larger degree of panel conditioning.
Though there is no perfect way to control for this difference, we take advantage of unique properties of the CPS ASEC sample design to gauge the likelihood that our results are influenced by the “retired‐ness” of the test panel. The CPS ASEC production sample, which we use to form the control panel, consists of several different components, including a retired sample that was added to produce more precise estimates of uninsurance among children (Davern et al. 2003). Because this sample was added to measure a particular subpopulation, its demographic composition is atypical: it includes only households where either the householder is white, non‐Hispanic, and at least one child under 19 years old lives in the household, or the householder is non‐white and non‐Hispanic. This sample provides a chance to create a comparison group of cases that received the old CPS and were also from the retired sample. Among the retired production cases, 10,296 were interviewed via CATI in March 2013. Among the 16,401 test cases, 5,818 met the same criteria based on race of householder and presence of children described above.
Our analysis proceeds as follows. We use chi‐squared tests of association to assess the statistical significance of bivariate comparisons of health insurance plan type across the test and control panels, overall and for selected subgroups. We present results from the full dataset and for the retired subset described above. We also compare the point‐in‐time and calendar‐year coverage measures within the test panel. After examining bivariate comparisons, we examine adjusted comparisons using logistic regression to control for demographics (age, race/ethnicity, sex, and the marital, education, and work status of the household head), household characteristics (number of residents, owned/rented), and relationship with the household respondent (self, child, other), not all of which were included in the weighting. We report the results as odds ratios, and we use a significance threshold of 0.05 for all analyses.
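The bivariate machinery can be sketched in miniature. The snippet below computes a Pearson chi‐squared statistic and an odds ratio from a 2×2 panel‐by‐coverage table of hypothetical, unweighted counts; the actual analysis is weighted and uses successive difference replication for variances:

```python
# Minimal, unweighted sketch of a chi-squared test of association and an
# odds ratio for a 2x2 table. Illustrative counts only.
def chi_square_2x2(table):
    """Pearson chi-squared statistic (1 df, no continuity correction)
    for a 2x2 table of counts: [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

def odds_ratio(table):
    """Odds of the outcome (column 0) in row 0 relative to row 1."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Hypothetical counts: [insured, uninsured] for test and control panels.
counts = [[14660, 1741],
          [11643, 1585]]
stat = chi_square_2x2(counts)
ratio = odds_ratio(counts)  # odds of coverage, test relative to control
```

An odds ratio above 1 indicates higher odds of coverage in the test panel, which is how the regression results in Table 5 should be read as well.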
Results
Calendar‐Year Estimates: Old versus New CPS
Table 2 displays the test/control differences in both detailed and aggregated coverage types, using the coverage type definitions of the old CPS (see below for more detail). Both instruments allow for multiple coverage type reports. Our measures in Table 2 indicate whether a given coverage type was reported, regardless of any other coverage types that may also have been reported (the data rendered by the new CPS, however, enable analysts to categorize each coverage type alone or in combination with other coverage types). The test produced higher estimates of ESI than the control, by 3.91 percentage points (p < .0021), while the control produced higher estimates of coverage from someone outside the household, by almost a percentage point (p < .0060), and of military coverage, by 1.82 percentage points (p = .0001). We found no significant differences in reporting of directly purchased coverage, Medicaid, or Medicare—as individual categories or combined with each other.
Table 2.
Coverage type | New CPS Design % | SE of % | Old CPS Design % | SE of % | New − Old | p |
---|---|---|---|---|---|---|
Population size | N = 243,474,924 | | N = 243,474,924 | | | |
Individual coverage types | ||||||
ESI | 61.93 | 0.7658 | 58.02 | 0.9839 | 3.91 | <.0021 |
Direct purchase | 9.92 | 0.4321 | 9.76 | 0.4222 | 0.16 | .7940 |
Outside household | 2.31 | 0.2268 | 3.24 | 0.2410 | −0.93 | <.0060 |
Medicaid/CHIP/other government | 11.87 | 0.5732 | 13.22 | 0.6887 | −1.35 | .1203 |
Medicare | 15.74 | 0.3811 | 16.21 | 0.4736 | −0.46 | .4458 |
Military | 3.13 | 0.2407 | 4.95 | 0.3736 | −1.82 | .0001 |
Aggregated coverage types | ||||||
ESI and/or military | 64.29 | 0.7651 | 61.17 | 1.0017 | 3.12 | .0148 |
ESI and/or military and/or outside household | 66.19 | 0.7655 | 63.88 | 0.9992 | 2.31 | .0688 |
ESI and/or direct and/or military and/or outside household | 74.40 | 0.7491 | 72.08 | 0.8891 | 2.32 | .0506 |
Medicare and/or Medicaid | 26.42 | 0.6487 | 27.84 | 0.7849 | −1.42 | .1546 |
ESI and/or direct and/or outside household (private) | 72.46 | 0.7496 | 69.38 | 0.8687 | 3.08 | .0082 |
Medicaid/CHIP/other govt, Medicare, military (public) | 28.46 | 0.6895 | 31.17 | 0.8000 | −2.71 | .0099 |
Private and public | 11.51 | 0.3436 | 12.65 | 0.4361 | −1.14 | .0373 |
Insured | 89.39 | 0.5112 | 88.02 | 0.5200 | 1.37 | .0795 |
N is the weighted population count. Unweighted n = 29,629 (Test n = 16,401; Control n = 13,228). Chi‐square p‐values are reported. Bold text indicates statistical significance at the 0.05 level.
Source: 2013 CPS ASEC Content Test.
Though all the categories of coverage shown are directly comparable, it is important to note differences in how the old and new CPS question series capture coverage from someone outside the household and military coverage. The old CPS asks about ESI and directly purchased coverage, but it also asks an explicit question about coverage from someone outside the household (even though this could overlap with coverage already reported as ESI or directly purchased). In the new CPS, the source is determined first (ESI or direct purchase), and then respondents are asked who the policyholder is, at which point they can report “someone outside the household.” The handling of military coverage is similar. The old CPS asks about ESI coverage (which could prompt reports of military coverage from those who regard the military as their employer), and a later question asks specifically about military coverage. In the new CPS, once ESI coverage is established, a question is asked to determine whether that coverage is related to military service in any way (respondents can also characterize their military coverage as “government,” and later questions establish whether it is military). In both cases, the new CPS renders two pieces of information about the same plan, allowing analysts to categorize it as they see fit. In the old CPS, there are no restrictions on how many plans respondents may report and no edits to help determine whether coverage reported as outside‐household or military is the same as a plan already reported, which could invite reporting the same plan twice.
To keep comparisons parallel, Table 2 categorizes any coverage reported in the new CPS as outside‐household as such, even if it was also known whether it was ESI or direct purchase coverage. The vast majority (85 percent) of outside‐household coverage from the new CPS was reported as ESI, with the remainder reported as directly purchased. Table 3 (discussed more below) allocates this outside‐household coverage to its more specific source. Under these conditions, the gap in ESI reporting under the new versus old CPS is even greater—63.47 percent in the new CPS versus 58.02 percent in the old CPS—and significant (p < .0001). For directly purchased coverage, the gap does widen—10.13 percent in the new CPS versus 9.76 in the old CPS—but the difference is not significant (p = .5431).
Table 3.
Coverage type | Calendar‐Year % | SE of % | Point‐in‐Time % | SE of % | % Diff | SE of Diff | p‐value |
---|---|---|---|---|---|---|---|
Population size | N = 243,474,924 | | N = 243,474,924 | | | | |
Individual coverage types | |||||||
ESI | 63.47 | 0.7783 | 61.31 | 0.8084 | 2.16 | 0.2638 | <.0001 |
Direct purchase | 10.13 | 0.4319 | 9.86 | 0.4290 | 0.27 | 0.1232 | .0285 |
Medicaid/CHIP/other government | 11.87 | 0.5732 | 11.21 | 0.5739 | 0.66 | 0.1610 | <.0001 |
Medicare | 15.74 | 0.3811 | 16.17 | 0.3917 | −0.43 | 0.0774 | <.0001 |
Military | 3.28 | 0.2490 | 3.21 | 0.2524 | 0.07 | 0.0528 | .1868 |
Aggregated coverage types | |||||||
ESI, Direct, Outside household | 72.46 | 0.7497 | 70.66 | 0.8025 | 1.65 | 0.2562 | <.0001 |
Medicare and/or Medicaid | 26.42 | 0.6487 | 26.19 | 0.6567 | 0.23 | 0.1776 | .1971 |
Medicaid/CHIP/other government, Medicare, Military | 28.46 | 0.6895 | 28.29 | 0.7069 | 0.31 | 0.1853 | .0900 |
Insured | 89.39 | 0.5112 | 87.98 | 0.5674 | 1.41 | 0.2622 | <.0001 |
N is weighted population count. Unweighted n = 29,629 (Test n = 16,401; Control n = 13,228). p‐values are based on a paired t‐test. Bold text indicates statistical significance at the 0.05 level.
Source: 2013 CPS ASEC Content Test.
To reduce the effects of any double‐reporting of coverage types, in Table 2 we aggregate ESI, military, and out‐of‐household coverage; the test estimate is still higher (by 2.31 percentage points), though the difference does not reach statistical significance (p = .0688). We display other permutations of aggregated coverage so readers can make their own judgments about the net effects of possible overreporting. We also note that military coverage is characterized as private in some surveys and public in others. The lower lines of Table 2 display the coverage estimates as aggregated in the old CPS. The ESI/directly purchased/out‐of‐household estimate is higher in the new CPS than the old by 3.08 percentage points, while the estimate of Medicaid/Medicare/military coverage is higher in the old CPS than the new by 2.71 percentage points. Results for the nonelderly test/control comparisons, displayed in Table S1, show similar differences.
New CPS Calendar‐Year versus Point‐in‐Time Estimates
Within the test panel, estimates of coverage at a point‐in‐time (in this case, the day of the interview, at some point in March 2013) were compared to estimates of coverage at any point in calendar year 2012 using a paired t‐test. One indication that the integrated point‐in‐time/calendar‐year questions in the new CPS are working as intended is that the calendar‐year estimate should exceed the point‐in‐time estimate. Table 3 shows this is the case, with one exception: for Medicare, the point‐in‐time estimate was higher than the calendar‐year estimate, but this was expected. With very few exceptions, Medicare enrollment, once begun, is for life. The point‐in‐time measure should capture both those enrolled in the previous year and those who have recently enrolled, while the calendar‐year measure excludes the recently enrolled.
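The paired comparison can be sketched as follows: each test‐panel person contributes two 0/1 indicators, and the t‐test operates on the within‐person differences. This unweighted toy version (illustrative data, not CPS records) shows the mechanics; the published estimates are weighted, with replicate‐based standard errors:

```python
# Unweighted sketch of a paired t-test on within-person differences
# between calendar-year and point-in-time 0/1 coverage indicators.
import math

def paired_t(calendar_year, point_in_time):
    """Both arguments: equal-length lists of 0/1 coverage indicators.
    Returns (mean difference, t statistic)."""
    diffs = [cy - pit for cy, pit in zip(calendar_year, point_in_time)]
    n = len(diffs)
    mean = sum(diffs) / n
    variance = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, mean / math.sqrt(variance / n)

# Toy data: one person was covered at some point in the calendar year
# but not on the interview day; everyone else matches on both measures.
cy = [1, 1, 1, 1, 1, 0, 1, 0]
pit = [1, 1, 1, 1, 0, 0, 1, 0]
mean_diff, t_stat = paired_t(cy, pit)
```

Because the two indicators come from the same respondents, the paired design removes between‐person variation and tests only whether the calendar‐year measure picks up coverage the point‐in‐time measure misses.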
Calendar‐Year Estimates: Old versus New CPS across Domains and Subgroups
In Table 4, we present comparisons of aggregated coverage type by subgroup. As in Table 3, we allocate coverage from someone outside the household to its more specific plan type (ESI or direct purchase). Private coverage includes ESI, direct purchase, and coverage from someone outside the household. Public coverage includes Medicaid and other government‐assistance programs, but excludes Medicare and military coverage in order to focus on primarily means‐tested programs. For the most part, there were few statistically significant test/control differences by subgroup. The test estimates of private coverage were higher than the control for white, non‐Hispanics (by 3.54 percentage points), for those working less than full time/full year (by 3.42 percentage points), and for U.S. citizens (by 3.07 percentage points); all were significant at the 95 percent confidence level. For public coverage, the only statistically significant difference was for white, non‐Hispanics, for whom the control estimate was 1.54 percentage points higher than the test estimate. For the insured overall, results show some of the same patterns as for private coverage: the test estimate was higher for non‐Hispanic whites by 1.46 percentage points, for those with some college up to an associate's degree by 2.82 percentage points, and for U.S. citizens by 1.49 percentage points.
Table 4.
Subgroup | Private: New CPS % | SE | Old CPS % | SE | New − Old | p | Public: New CPS % | SE | Old CPS % | SE | New − Old | p | Insured: New CPS % | SE | Old CPS % | SE | New − Old | p |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
All persons | 72.31 | 0.75 | 69.38 | 0.87 | 2.93 | 0.01 | 11.87 | 0.57 | 13.22 | 0.69 | −1.35 | 0.12 | 89.39 | 0.51 | 88.02 | 0.52 | 1.37 | .08 |
Age | ||||||||||||||||||
0–18 | 66.20 | 1.44 | 61.95 | 1.80 | 4.25 | 0.06 | 29.40 | 1.56 | 33.22 | 1.80 | −3.82 | 0.08 | 93.91 | 0.84 | 92.84 | 0.83 | 1.08 | .39 |
19–34 | 70.66 | 1.59 | 67.41 | 1.65 | 3.25 | 0.15 | 8.65 | 1.00 | 8.50 | 1.10 | 0.15 | 0.92 | 80.20 | 1.35 | 77.33 | 1.36 | 2.87 | .13 |
35–64 | 78.33 | 0.88 | 76.42 | 0.84 | 1.91 | 0.12 | 5.20 | 0.46 | 6.13 | 0.48 | −0.93 | 0.18 | 87.72 | 0.66 | 86.72 | 0.69 | 1.00 | .29 |
65+ | 68.45 | 1.20 | 65.21 | 1.27 | 3.24 | 0.08 | 4.51 | 0.57 | 5.67 | 0.58 | −1.16 | 0.17 | 98.51 | 0.37 | 98.20 | 0.35 | 0.31 | .55 |
Race/Ethnicity | ||||||||||||||||||
White/non‐Hispanic | 81.07 | 0.70 | 77.53 | 0.70 | 3.54 | 0.00 | 6.67 | 0.55 | 8.21 | 0.48 | −1.54 | 0.02 | 93.55 | 0.43 | 92.09 | 0.42 | 1.46 | .02 |
Black/non‐Hispanic | 60.00 | 2.69 | 52.81 | 3.56 | 7.20 | 0.12 | 22.00 | 2.06 | 28.62 | 3.20 | −6.63 | 0.09 | 88.72 | 1.40 | 86.77 | 1.61 | 1.96 | .37 |
Other/non‐Hispanic | 71.06 | 3.51 | 71.63 | 2.82 | −0.57 | 0.90 | 10.75 | 2.09 | 13.05 | 2.21 | −2.30 | 0.45 | 85.88 | 2.80 | 87.56 | 1.70 | −1.68 | .59 |
Hispanic | 44.19 | 2.57 | 45.08 | 2.42 | −0.90 | 0.79 | 27.54 | 2.06 | 23.79 | 1.96 | 3.75 | 0.16 | 73.90 | 1.96 | 71.73 | 2.01 | 2.17 | .47 |
Education | ||||||||||||||||||
<High school | 50.33 | 1.97 | 46.25 | 2.25 | 4.08 | 0.18 | 21.82 | 1.76 | 24.62 | 2.07 | −2.80 | 0.30 | 78.07 | 1.64 | 78.26 | 2.24 | −0.19 | .95 |
High school grad | 67.31 | 1.33 | 63.71 | 1.36 | 3.59 | 0.07 | 7.80 | 0.86 | 8.39 | 0.80 | −0.59 | 0.63 | 84.88 | 1.02 | 82.58 | 1.07 | 2.29 | .13 |
Some college‐AA | 75.54 | 1.17 | 72.68 | 1.21 | 2.87 | 0.07 | 5.15 | 0.67 | 5.93 | 0.60 | −0.78 | 0.40 | 88.88 | 0.91 | 86.06 | 0.94 | 2.82 | .03 |
Bachelors+ | 89.09 | 0.76 | 87.98 | 0.87 | 1.12 | 0.36 | 1.35 | 0.30 | 1.81 | 0.28 | −0.47 | 0.27 | 94.76 | 0.61 | 94.46 | 0.59 | 0.30 | .74 |
Work status | ||||||||||||||||||
Full time/full yr | 88.01 | 0.78 | 87.75 | 0.71 | 0.25 | 0.81 | 1.20 | 0.24 | 1.31 | 0.28 | −0.12 | 0.76 | 90.92 | 0.65 | 90.68 | 0.62 | 0.24 | .78 |
<Full time/full yr | 59.58 | 1.16 | 56.16 | 1.12 | 3.42 | 0.04 | 13.93 | 0.94 | 15.27 | 1.01 | −1.33 | 0.34 | 87.77 | 0.92 | 86.53 | 0.85 | 1.24 | .33 |
Did not work | 71.41 | 1.35 | 69.46 | 1.36 | 1.95 | 0.32 | 7.35 | 0.75 | 6.74 | 0.70 | 0.61 | 0.55 | 83.18 | 1.08 | 80.03 | 1.28 | 3.15 | .06 |
Citizenship | ||||||||||||||||||
U.S. Citizen | 73.94 | 0.75 | 70.87 | 0.83 | 3.07 | 0.01 | 11.90 | 0.59 | 13.24 | 0.71 | −1.34 | 0.13 | 91.33 | 0.46 | 89.84 | 0.41 | 1.49 | .02 |
Non‐U.S. citizen | 48.22 | 3.05 | 43.21 | 4.17 | 5.01 | 0.33 | 11.47 | 1.72 | 12.83 | 2.89 | −1.36 | 0.67 | 60.88 | 2.94 | 56.19 | 4.00 | 4.69 | .35 |
Unweighted n = 29,629 (Test n = 16,401; Control n = 13,228). Private coverage includes ESI, directly purchased, and out‐of‐household coverage. Public includes Medicaid, CHIP, and other government‐assistance coverage. Chi‐square p‐values are reported. Bold text indicates statistical significance at the 0.05 level.
Source: 2013 CPS ASEC Content Test.
Logistic Regression Models
Table 5 reports results from logistic regression models predicting private, public, and any coverage (“Insured”). We report only the odds ratios on the treatment indicator, but a full set of model results is in Tables S2–S4.
Table 5.
Model | Private: Odds Ratio | SE | p | Public: Odds Ratio | SE | p | Insured: Odds Ratio | SE | p |
---|---|---|---|---|---|---|---|---|---|
Overall sample | |||||||||
Model 1 | 1.152 | 0.0651 | 0.0120 | 0.884 | 0.0705 | 0.1230 | 1.147 | 0.0889 | .0778 |
Model 2 | 1.132 | 0.0731 | 0.0549 | 0.916 | 0.0882 | 0.3639 | 1.158 | 0.0928 | .0678 |
Model 3 | 1.128 | 0.0725 | 0.0604 | 0.923 | 0.0882 | 0.4021 | 1.171 | 0.0939 | .0495 |
Retired sample | |||||||||
Model 1 | 1.17 | 0.0926 | 0.0536 | 0.90 | 0.0923 | 0.32 | 1.19 | 0.1280 | .1009 |
Model 2 | 1.33 | 0.1387 | 0.0065 | 0.81 | 0.1000 | 0.09 | 1.34 | 0.1598 | .0130 |
Model 3 | 1.31 | 0.1373 | 0.0091 | 0.81 | 0.0997 | 0.09 | 1.36 | 0.1603 | .0103 |
Each parameter is from a separate regression. Private coverage includes ESI, directly purchased, and out‐of‐household coverage. Public includes Medicaid, CHIP, and other government‐assistance coverage. Model 1 reports the unadjusted odds ratios. Model 2 controls for age, race/ethnicity, sex, household tenure (owned/rented), and the marital, education, and work status of the head of household. Model 3 controls for Model 2 covariates and relationship with household respondent (self, child, other) and household size. Complete model results are produced in the appendix. OR (odds ratio) is the odds of coverage from the new instrument relative to the odds of coverage from the old instrument. Main unweighted sample n = 29,629 (Test n = 16,401; Control n = 13,228); Retired unweighted control sample n = 10,296; test n = 5,818. All estimates are weighted. Bold text indicates statistical significance at the 0.05 level.
Source: 2013 CPS ASEC Content Test.
Model 1 predicts these outcomes using only the treatment variable as a predictor. Model 2 adds controls for demographic characteristics typically correlated with insurance coverage: age, race/ethnicity, sex, household tenure (owned/rented), and the marital, education, and work status of the head of household. We do not include income in the models because the 2013 Content Test also contained an experimental set of income questions, so the test and control panels were not comparable on that measure. Model 3 includes all the variables from Model 2 as well as household size and relationship with the household respondent, as these characteristics are directly tied to features that differ between the old and new questionnaires.
The top panel of Table 5 shows that in Model 1, the odds of private coverage were 1.15 times higher in the test than the control (p = .012), and for the insured overall, the odds ratio was similar in magnitude and in the same direction, but not significant (p = .0778). For Model 2, the results were similar to Model 1, but the coefficient of interest in the private coverage regression was estimated with less precision. Model 3 results show the same patterns. The odds of insurance were 1.17 times higher in the test panel than in the control panel (p = .0495), controlling for demographics, household size, and relationship with household respondent. For private coverage, the odds were 1.13 times higher in the test than the control, but the difference did not reach statistical significance (p = .0604). For public coverage, no test/control differences were significant. The lack of significant movement of the coefficient of interest between Model 1 and Model 3 corroborates the results in Tables 2 and 4 and suggests that the sample weights adequately adjust for covariate imbalance.
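To make concrete what the unadjusted Model 1 comparison estimates, the sketch below computes a test-versus-control odds ratio and its Wald standard error (on the log-odds scale) from a 2×2 table of coverage counts. The counts are hypothetical, chosen only for illustration; the content-test microdata are not public, and the published models are weighted logistic regressions rather than this simple cross-product calculation.

```python
import math

def unadjusted_odds_ratio(test_insured, test_uninsured,
                          control_insured, control_uninsured):
    """Odds of coverage in the test panel relative to the control panel,
    with the usual Wald standard error for the log odds ratio."""
    odds_test = test_insured / test_uninsured
    odds_control = control_insured / control_uninsured
    odds_ratio = odds_test / odds_control
    # SE of log(OR) from a 2x2 table: sqrt of summed reciprocal cell counts
    se_log_or = math.sqrt(1 / test_insured + 1 / test_uninsured
                          + 1 / control_insured + 1 / control_uninsured)
    return odds_ratio, se_log_or

# Hypothetical (made-up) unweighted counts, not the actual content-test data:
or_, se = unadjusted_odds_ratio(14_700, 1_701, 11_700, 1_528)  # or_ ≈ 1.13
```

With covariates added (Models 2 and 3), the same treatment odds ratio would instead come from a multivariable logistic regression, but the interpretation of the coefficient on the treatment indicator is unchanged.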
Retired Test versus Control Estimates
Demographics
Table S5 shows the demographic and economic characteristics of the retired component of the control sample compared with a similarly selected subset of the test panel (only households where the head is white non‐Hispanic and children under 19 are present, or where the head is non‐white non‐Hispanic). The covariates were largely balanced across panels, but there were some differences within the retired subsample (race and age) that were not observed in Table 1.
Coverage Estimates
The test/control bivariate differences in insurance coverage among the retired subset were very similar to the test/control differences reported in Table 4 (see Table S6). As in Table 4, there were few significant differences within subgroups. For private coverage, estimates from the full test panel were higher than the control by 2.93 percentage points (significant at the 5 percent level); within the retired subsample the difference was similar in magnitude (2.87 percentage points) but did not reach statistical significance (p = .0519).
The bottom panel of Table 5 shows logistic model results for the retired subsample comparisons. The magnitude and direction of the odds ratios are similar to the full sample comparisons shown in the top panel of Table 5. In the most saturated model (Model 3), the odds of private coverage were 1.31 times higher in the test versus the control panel (p = .0091), and the odds of any coverage were 1.36 times higher in the test compared with the control panel (p = .0103). The similarity of findings in the full sample (top panel) and the retired subsample (bottom panel) provides suggestive evidence that the potential bias introduced by the unequal sample designs between the test and control panels is adequately addressed by the sample weights.
Discussion
This analysis set out to describe estimates from the redesigned CPS health insurance instrument in relation to the old instrument in the context of the 2013 Content Test. Our results indicate that the odds of being insured at some point during the previous calendar year were higher in the new versus the old CPS, and the integrated calendar‐year/point‐in‐time question series in the new CPS generated estimates of past calendar‐year coverage that were higher than and distinct from estimates of coverage at a point‐in‐time. Given that previous research on the CPS suggests that the old instrument produces downwardly biased estimates of the uninsured, our results suggest that the revised questionnaire resulted in a more valid measure. Bivariate analysis of aggregated public, private, and uninsured estimates showed statistically significant test/control differences in only seven subgroups, and in six of these instances, the estimates were higher in the test.
We had two primary limitations. The first was that the test and control arms came from separate samples that used separate sampling designs, were drawn from separate sampling frames, and had substantially different response rates. The consistency of our findings across three separate analytic approaches that adjust for confounding differences across treatment conditions (weighting, regression, and retired subgroup analyses) lends support to the internal validity of our results. Our second primary limitation is that we lacked a true gold standard and relied on the assumption (based on prior research) that more reports of coverage signaled higher measurement validity. To address this concern, especially as it relates to the new CPS, the Census Bureau, in collaboration with a number of other agencies and research partners, is conducting a study of survey responses based on a sample drawn from enrollment records in which insurance type (i.e., ESI, Medicaid, and nongroup within and outside the marketplace) is known a priori. The project will examine reporting error in both the new CPS and the American Community Survey, two of the nation's most important data assets.
While our results were positive for the new CPS instrument with regard to private coverage and overall insurance status, the lack of evidence that the new CPS generated higher estimates of public coverage than the control is disappointing given evidence of persistent underreporting of Medicaid in surveys (Lewis, Ellwood, and Czajka 1998; Call et al. 2008; Eberly, Pohl, and Davis 2008; Klerman et al. 2009). While the content test suggests that the new design does not substantially reduce underreporting of Medicaid, there is some evidence that it does mitigate another problem. Previous record‐check studies have found that the old CPS is subject to overreporting of public coverage, which often occurs as double‐reporting of both private and public coverage (Davern et al. 2008; Boudreaux et al. 2013). Among the nonelderly who reported at least one type of coverage, the rate of double‐reporting of both public and private coverage was 2.11 percentage points higher in the control than the test (p < .001). A separate study of the new CPS found evidence of Medicare overreporting in both the old and new CPS, but the levels were more pronounced in the old CPS (Resnick 2013). It is possible that overreporting of public coverage in the old CPS offsets underreporting of Medicaid to some degree, and that the redesign has lower levels of overreporting. The result would be a net test/control difference that appears to favor the old CPS even though overall measurement error, including both under‐ and overreporting, is higher in the old CPS. Future record‐check studies that match responses to the new CPS to administrative records are needed to more fully evaluate reporting of public coverage.
In September 2014, the Census Bureau released estimates from the 2014 CPS, which fielded the redesigned health insurance module and collected data about calendar year 2013. As of this writing (December 2014), the publicly available estimates from the 2014 CPS have a number of important limitations. At the same time that the redesigned health insurance questions were implemented, the Census Bureau tested a new series of income questions in three‐eighths of the sample, and it has so far released data and estimates only for the five‐eighths of the sample that received the old income questions. Furthermore, the data processing routine developed for the old health insurance questionnaire was applied to the new data. Thus, it is unclear how much, or in what direction, published estimates will change when the full dataset is released and the data processing algorithms are updated. With those caveats in mind, published estimates from the 2014 CPS suggest an all‐year uninsured rate for calendar year 2013 of 13.4 percent. This compares to a 2012 rate, based on the old instrument fielded in 2013, of 15.4 percent. Part of this 2 percentage point difference reflects real shifts in the population between 2012 and 2013, and part results from the measurement change. One approach to isolating the measurement component is to estimate real population change from other surveys and subtract it from the CPS 2012–2013 difference. While other surveys should not be viewed as a gold standard of population characteristics, they allow us to approximate population change. The NHIS estimated that the uninsured rate dropped slightly, from 11.1 percent in 2012 to 10.7 percent in 2013 (authors' tabulation). Subtracting the NHIS 0.4 percentage point change from the CPS 2 percentage point change yields a difference of 1.6 percentage points attributable to the measurement change.
While there is considerable uncertainty around the size and direction of the year‐over‐year change given the caveats above, this result is broadly consistent with the 1.4 percentage point measurement effect we estimated in the content test.
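The back-of-the-envelope decomposition described above can be written out explicitly. All figures come directly from the text; the calculation simply treats the NHIS year-over-year change as a proxy for real population change.

```python
# Published all-year uninsured rates (percentage points)
cps_2012_old_instrument = 15.4   # CPS, old questionnaire, calendar year 2012
cps_2013_new_instrument = 13.4   # CPS, redesigned questionnaire, calendar year 2013
nhis_2012, nhis_2013 = 11.1, 10.7  # NHIS all-year uninsured rates

total_change = cps_2012_old_instrument - cps_2013_new_instrument  # 2.0 points
real_change = nhis_2012 - nhis_2013    # 0.4 points, proxy for true population change
measurement_effect = total_change - real_change  # 1.6 points attributable to redesign
```

This 1.6 percentage point residual is what the text compares against the 1.4 percentage point effect estimated in the content test.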
While the discrepancy between the 2013 all‐year uninsured rates from the CPS and NHIS is still substantial (13.4 vs. 10.7 percent, respectively), the gap is 37 percent smaller than the difference between the 2012 all‐year uninsured rates estimated by the old CPS instrument and the NHIS (15.4 vs. 11.1 percent, respectively). Furthermore, the point‐in‐time estimate from the redesigned instrument fielded in 2014 was broadly comparable to the NHIS point‐in‐time estimate from the first quarter of 2014 (13.8 vs. 13.1 percent, respectively) (National Health Interview Survey Early Release Program 2014). When final data are released from the 2014 CPS, more rigorous comparisons can be made within the CPS over time and between the CPS and other federal data.
An important finding from our paper is that the old and new CPS are not directly comparable. While this is a positive outcome, as the revision was meant to improve the measure, it also means that the revised questionnaire, first implemented in 2014, creates a break in the series. Analysts who wish to use the CPS to study health insurance over time will need to explicitly account for this break. Developing practical guidance for handling the break was beyond the scope of this paper, but it is an important area of future work. Until such guidance is developed, our results align with the official Census Bureau recommendation that analysts not compare estimates from before and after the implementation of the revised instrument.
Supporting information
Acknowledgments
Joint Acknowledgment/Disclosure Statement: Any views expressed are those of the authors and not necessarily those of the U.S. Census Bureau. We thank the staff at the U.S. Census Bureau for their assistance with data collection and staff at the State Health Access Data Assistance Center and the Agency for Healthcare Research and Quality for many helpful conversations and comments. We also thank the two anonymous reviewers.
Disclosures: None.
Disclaimers: None.
References
- Blewett, L. A., and Davern, M. E. 2006. "Meeting the Need for State‐Level Estimates of Health Insurance Coverage: Use of State and Federal Survey Data." Health Services Research 41 (3 Pt 1): 946–75. doi:10.1111/j.1475‐6773.2006.00543.x.
- Boudreaux, M. H., Fried, B., Turner, J., and Call, K. T. 2013. SHADAC Analysis of the Survey of Health Insurance and Program Participation. State Health Access Data Assistance Center [accessed on July 2, 2014]. Available at http://www.shadac.org/files/shadac/publications/SHIPP_final_report.pdf
- Brault, M. W. 2014. "Non‐Response Bias in the 2013 CPS ASEC Content Test." Proceedings of the 2013 Federal Committee on Statistical Methodology (FCSM) Research Conference. Washington, DC.
- Call, K. T., Davidson, G., Davern, M. E., and Nyman, R. M. 2008. "Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence." Health Services Research 43 (3): 901–14. doi:10.1111/j.1475‐6773.2007.00808.x.
- Davern, M., Beebe, T. J., Blewett, L. A., and Call, K. T. 2003. "Recent Changes to the Current Population Survey: Sample Expansion, Health Insurance Verification, and State Health Insurance Coverage Estimates." Public Opinion Quarterly 67 (4): 603–26. doi:10.1086/379088.
- Davern, M., Call, K. T., Ziegenfuss, J., Davidson, G., Beebe, T., and Blewett, L. 2008. "Validating Health Insurance Coverage Survey Estimates: A Comparison of Self‐Reported Coverage and Administrative Data Records." Public Opinion Quarterly 72 (2): 241–59.
- Eberly, T., Pohl, M., and Davis, S. 2008. Undercounting Medicaid Enrollment in Maryland: Testing the Accuracy of the Current Population Survey. Population Research and Policy Review [accessed on April 2, 2015]. Available at http://www.springerlink.com/content/d56459733gr81vu2/
- Fay, R. E., and Train, G. F. 1995. "Aspects of Survey and Model‐Based Postcensal Estimation of Income and Poverty Characteristics for States and Counties." Proceedings of the American Statistical Association, Government Statistics Section, 154–159.
- Hess, J., Moore, J. C., Pascale, J., Rothgeb, J. M., and Keeley, C. 2001. "The Effects of Person‐Level vs. Household‐Level Questionnaire Design on Survey Estimates and Data Quality." Public Opinion Quarterly 65: 574–84. doi:10.1086/323580.
- Hornick, D. 2014. "The 2013 Annual Social Economic Supplement Health Insurance Questionnaire Test: The Sample Design." Proceedings of the 2013 Federal Committee on Statistical Methodology (FCSM) Research Conference. Washington, DC.
- Klerman, J. A., Davern, M. E., Call, K. T., Lynch, V. A., and Ringel, J. 2009. "Understanding the Current Population Survey's Insurance Estimates and the Medicaid 'Undercount'." Health Affairs 28 (6): w991–w1001. doi:10.1377/hlthaff.28.6.w991.
- Lewis, K., Ellwood, M. R., and Czajka, J. L. 1998. Counting the Uninsured: A Review of the Literature. Washington, DC: The Urban Institute.
- National Health Interview Survey Early Release Program, in collaboration with U.S. Census Bureau Social, Economic and Housing Statistics Division. 2014. "Comparison of the Prevalence of Uninsured Persons from the National Health Interview Survey and the Current Population Survey, January–April 2014" [accessed on April 2, 2015]. Available at http://www.cdc.gov/nchs/data/nhis/health_insurance/NCHS_CPS_Comparison092014.pdf
- Pascale, J. 2001a. Measuring Private and Public Health Coverage: Results from a Split‐Ballot Experiment on Order Effects. Presented at the American Association for Public Opinion Research Conference, Montreal [accessed on April 2, 2015]. Available at http://www.amstat.org/sections/SRMS/Proceedings/
- Pascale, J. 2001b. "Methodological Issues in Measuring the Uninsured." In Proceedings of the Seventh Conference on Health Survey Research Methods, edited by M. L. Cynamon and R. A. Kulka, pp. 167–73. Hyattsville, MD: DHHS Publication No. (PHS) 01‐1013 [accessed on April 2, 2015]. Available at http://www.cdc.gov/nchs/data/slaits/conf07.pdf
- Pascale, J. 2004. "Medicaid and Medicare Reporting in Surveys: An Experiment on Order Effects and Program Definitions" [accessed on April 2, 2015]. Available at http://www.amstat.org/sections/srms/Proceedings/
- Pascale, J. 2008. "Measurement Error in Health Insurance Reporting." Inquiry 45 (4): 422–37.
- Pascale, J. 2009. "Findings from a Pretest of a New Approach to Measuring Health Insurance in the Current Population Survey" [accessed on April 2, 2015]. Available at https://fcsm.sites.usa.gov/files/2014/05/2009FCSM_Pascale_VIII-A.pdf
- Pascale, J. 2014. Modernizing a Major Federal Government Survey: A Review of the Redesign of the Current Population Survey Health Insurance Questions. Under review at the Journal of Official Statistics, Statistics Sweden.
- Pascale, J., Roemer, M. I., and Resnick, D. M. 2009. "Medicaid Underreporting in the CPS: Results from a Record Check Study." Public Opinion Quarterly 73: 497–520.
- Pascale, J., Rodean, J., Leeman, J. A., Cosenza, C. A., and Schoua‐Glusberg, A. 2013. "Preparing to Measure Health Coverage in Federal Surveys Post‐Reform: Lessons from Massachusetts." Inquiry: The Journal of Health Care Organization, Provision, and Financing 50 (2): 106–23. doi:10.1177/0046958013513679.
- Resnick, D. 2013. A Comparison of Health Insurance Reporting Accuracy in the CPS, ACS and CPS Redesign from the 2010 Survey of Health Insurance and Program Participation (SHIPP) Experiment. Report submitted from the Urban Institute to David S. Johnson and Charles Nelson, U.S. Department of Commerce, U.S. Census Bureau, and Don Oellerich, U.S. Department of Health and Human Services, Assistant Secretary for Planning and Evaluation.
- State Health Access Data Assistance Center [SHADAC]. 2013. Comparing Federal Government Surveys that Count the Uninsured. Minneapolis, MN: State Health Access Data Assistance Center.