Health Services Research
. 2011 Feb;46(1 Pt 1):210–231. doi: 10.1111/j.1475-6773.2010.01193.x

Counting Uninsurance and Means-Tested Coverage in the American Community Survey: A Comparison to the Current Population Survey

Michel Boudreaux 1, Jeanette Y Ziegenfuss 2, Peter Graven 3, Michael Davern 4, Lynn A Blewett 3
PMCID: PMC3034271  PMID: 21029089

Abstract

Objective

To compare health insurance coverage estimates from the American Community Survey (ACS) to the Current Population Survey (CPS-ASEC).

Data Sources/Study Setting

The 2008 ACS and CPS-ASEC, 2009.

Study Design

We compare age-specific national rates for all coverage types and state-level rates of uninsurance and means-tested coverage. We assess differences using t-tests and p-values, which are reported at <.05, <.01, and <.001. An F-test determines whether differences significantly varied by state.

Principal Findings

Despite substantial design differences, we find only modest differences in coverage estimates between the surveys. National direct purchase and state-level means-tested coverage levels for children show the largest differences.

Conclusions

We suggest that the ACS is well poised to become a useful tool to health services researchers and policy analysts, but that further study is needed to identify sources of error and to quantify its bias.

Keywords: Health insurance coverage, state health policy, current population survey, American community survey


In 2008, the U.S. Census Bureau began fielding a health insurance coverage question in the American Community Survey (ACS), establishing a valuable new resource for health services researchers and policy makers. Survey estimates of health insurance coverage are important to state and national policy makers who develop programs for their communities, to analysts who estimate the fiscal impact of new programs, and to researchers who work to identify the correlates and consequences of coverage status. Because the Current Population Survey (CPS-ASEC) produces annual state-level coverage estimates for all age groups, it has historically been the primary data source for such activities (Blewett and Davern 2006). Now, data users have an alternative in the ACS. Not surprisingly, the two surveys have different goals and methods (Davern et al. 2009). As such, they have different strengths and weaknesses for the purposes of health policy analysis. This paper focuses on the ACS, describing its relative advantages and disadvantages for estimating health insurance coverage levels compared with the CPS-ASEC. We provide side-by-side estimates from both surveys, offer conjectures for why the surveys compare the way they do, and give ideas for further investigation.

The CPS-ASEC was chosen as a point of comparison for two reasons. First, it is the only federal survey, other than the ACS, to produce annual state-level estimates of health insurance coverage for all age groups, and there is no gold standard for uninsurance. Second, the CPS-ASEC has historically served as the survey of record for the distribution of public program funds (Blewett and Davern 2007). Data users and stakeholders have an obvious interest in how the ACS compares to the CPS.

Advantages of the ACS

The primary measurement advantage of the ACS is that the health insurance item asks about health insurance status at the time of the survey. This method avoids recall bias that can occur when respondents are asked to remember their coverage from a previous time period.

In the CPS-ASEC, respondents are asked in February–April to recall any coverage they had in the previous calendar year, a recall period that can span as much as 16 months and is known to exert downward bias on the count of people with coverage (Klerman et al. 2009). The all-year estimates of uninsurance from the CPS-ASEC are substantially larger than all-year estimates from other surveys (including the National Health Interview Survey [NHIS] and MEPS), but they are similar to other surveys' point-in-time (PIT) results (State Health Access Data Assistance Center 2009). In calendar year 2007, the CPS-ASEC counted 15 million more uninsured than the NHIS's all-year estimate, but only 1.7 million more than its PIT estimate (State Health Access Data Assistance Center 2009). The proximity of the CPS-ASEC estimates to PIT estimates has led some to posit that the CPS-ASEC functions as a PIT estimate (Swartz 1986; CBO 2003). However, recent research on Medicaid estimates in the CPS-ASEC suggests that while the CPS-ASEC estimates look like a PIT estimate, and some respondents respond as such, many more respondents correctly identify the reference period as all year but are biased by the lengthy recall period (Davern et al. 2009; Klerman et al. 2009). The PIT measure in the ACS theoretically avoids the bias created by an item with a substantial recall period.

The ACS benefits from a sample size that is 30 times that of CPS (Davern et al. 2009). This suggests that single-year ACS estimates at the state and substate level will be more reliable than those from the CPS-ASEC. However, comparing sample sizes does not account for differences in design effects and thus sample size is only a rough gauge of precision. The CPS-ASEC is designed to be state-representative, but its sample size does not support reliable single-year comparisons for many subgroups across states or time. The Census Bureau recommends averaging estimates from multiple single-year files for such purposes (U.S. Census Bureau 2009d). The Census Bureau does not offer clear guidance for using single versus multiyear ACS data. However, the relative size of its state samples suggests that single-year estimates will be sufficient for most research tasks.
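The Census Bureau's recommendation to average multiple single-year CPS-ASEC files can be sketched as below. `multiyear_average` is a hypothetical helper of ours, and its standard error assumes independent yearly samples; the Bureau's actual procedure additionally corrects for the CPS's partial sample overlap across years, which makes the true standard error somewhat larger.

```python
import math

def multiyear_average(estimates, ses):
    """Pool single-year survey estimates (e.g., a state uninsurance rate).

    Assumes the yearly samples are independent. The Census Bureau's
    recommended procedure also adjusts for the CPS's year-to-year
    sample overlap, which inflates the true SE relative to this sketch.
    """
    n = len(estimates)
    avg = sum(estimates) / n
    se = math.sqrt(sum(s ** 2 for s in ses)) / n  # SE of the mean
    return avg, se

# Example: pooling two hypothetical single-year state estimates.
avg, se = multiyear_average([10.0, 12.0], [1.0, 1.0])
```

With equal single-year standard errors, a 2-year average cuts the (independence-assumption) standard error by a factor of the square root of 2, which is the reliability gain the averaging is meant to buy.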

The ACS has a better imputation routine for assigning values to people with item-missing data than does the CPS-ASEC. The CPS-ASEC routine is known to produce biases; that is, it produces a smaller number of privately insured and a higher number of uninsured than it should (Davern et al. 2004, 2007). This bias exists in part because the CPS-ASEC routine does not accurately assign dependent coverage. In addition, due to its small sample, it does not consider state. This inappropriately shrinks state estimates toward the national average (Davern et al. 2004). These problems are exacerbated by the high rate of unit nonresponse to the health insurance items, which occur in a supplement to the regular monthly instrument. In contrast, the ACS uses a more accurate procedure for assigning coverage to dependents and its routine is implemented independently in each state. While the Census Bureau is in the process of improving the CPS routine, it does not expect to release data from this new method until 2011 and will likely not retroactively adjust and release prior data years (J. Turner and C. Nelson, personal communication, 2009).

Disadvantages of the ACS

Despite these advantages, the ACS has important disadvantages that must be acknowledged. The ACS gathers information about health insurance from a single question stem that uses a forced-choice format in which respondents are asked to endorse “yes” or “no” for each of seven types of coverage or supply a verbatim response.1 All means-tested programs are identified with one item, which reads: “Medicaid, Medical Assistance, or any kind of government-assistance program for those with low incomes or a disability.” This item must pick up a vast array of public programs, including CHIP and state-specific programs (such coverage will be referred to as “means-tested” throughout this paper). This method has two limitations. First, data users are unable to distinguish specific programs such as CHIP. Second, providing a single response option for a diverse concept like means-tested coverage likely picks up fewer cases than an exhaustive list would. In contrast to the ACS, the CPS-ASEC has a series of health insurance questions that ask specifically about Medicaid, CHIP, and other state programs.

Because the ACS is primarily delivered by mail (but supplemented by telephone and in-person interviews), questions are not tailored to specific subgroups or states in real time. Such tailoring is achieved in the CPS-ASEC (which is delivered either in person or by phone) through skip patterns and by replacing “Medicaid/CHIP” with its local name. This deficiency in the ACS has implications. Previous experience suggests that the inclusion of state-specific program names increases the rate (and accuracy) of reported Medicaid coverage (Eberly, Pohl, and Davis 2009). The absence of skip patterns means that subgroups, such as children, cannot be asked targeted questions. It also means that respondents who do not endorse any coverage cannot be asked follow-up questions to verify that they are uninsured. This verification procedure was found to lower the frequency of false-negative coverage reports in the CPS-ASEC and other surveys (Rajan, Zuckerman, and Brennan 2000; Nelson and Mills 2001).

The CPS-ASEC implements a logical coverage edit, a data-processing technique that reduces the rate of false-negative reports and increases estimates of coverage. Logical editing deterministically assigns public coverage types to persons who do not report public coverage but who can be identified as likely enrollees based on other information obtained in the survey. For example, people reporting Supplemental Security Income (SSI) are assigned Medicaid coverage if they live in a state that uses SSI rules for Medicaid determination. Several such rules are employed in the CPS-ASEC and affect Medicare, Medicaid, and Military/Veterans Affairs (VA) coverage (Lynch, Boudreaux, and Davern 2010). This data-editing process may shift the balance of errors from false negatives to false positives at the individual level. To our knowledge, no study has carefully examined the quality of these edits. Nonetheless, the Census Bureau, in consultation with outside experts, has determined that, on balance, logical editing reduces overall response error.
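The SSI-to-Medicaid rule described above can be sketched as follows. This is a minimal illustration, not the Census Bureau's actual rule table: the field names and the state set are hypothetical stand-ins.

```python
# Hypothetical subset of states where SSI receipt confers Medicaid
# eligibility; the real Census rule set is more detailed.
SSI_CONFERS_MEDICAID = {"CA", "NY", "MI"}

def apply_ssi_medicaid_edit(record):
    """If a person reports SSI but not Medicaid, and lives in a state
    where SSI rules determine Medicaid eligibility, assign Medicaid
    and flag the value as logically edited (so data users can undo it).
    """
    out = dict(record)
    if out["has_ssi"] and not out["medicaid"] and out["state"] in SSI_CONFERS_MEDICAID:
        out["medicaid"] = True
        out["medicaid_edit_flag"] = True
    return out
```

Carrying the flag alongside the assigned value is what makes the edit reversible, which is exactly the property exploited later when we strip the edit from the CPS-ASEC.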

Logical editing was not applied to the 2008 ACS, but it will be applied in future years. Census Bureau officials expect that applying the edit in the ACS will decrease the rate of uninsurance by 0.5 percentage points and increase the rate of means-tested coverage by 1.4 percentage points (Lynch, Boudreaux, and Davern 2010). We view the lack of an edit for the 2008 ACS as a disadvantage because, at aggregate levels, public insurance estimates from all surveys are nearly always biased downward.

Finally, the ACS lacks a lengthy time-series, and it does not have many important covariates such as health status. Both of these deficiencies will limit researchers who are interested in understanding factors that lead to uninsurance or the consequences of being uninsured.

Relative Coverage Levels

Despite these substantial differences, Turner, Boudreaux, and Lynch (2009) found that the ACS produced results strikingly similar to the CPS-ASEC. The rate of uninsurance in the ACS was 15.1 percent compared with 15.4 percent in the CPS-ASEC. Given that the CPS-ASEC produces uninsurance estimates that are slightly higher than PIT estimates (State Health Access Data Assistance Center 2009), Turner's findings suggested that the ACS uninsurance estimate was not substantially different from other PIT estimates. The ACS's rate of means-tested coverage was 13.4 percent, whereas the CPS-ASEC rate was 14.1 percent.

This paper extends Turner and colleagues' previous work in important ways. We compare national and state-level health insurance estimates from the ACS and CPS-ASEC for calendar year 2008. We remove the logically edited values in the CPS-ASEC so that our comparisons are more reflective of survey design characteristics than data-processing differences and so that our results can be extrapolated to future years with more confidence.

We present state-level comparisons for methodological and substantive reasons. On the measurement side, uninsurance and means-tested coverage may vary differently by state because of survey design differences (e.g., state-name fills or imputation procedures) and because of demographic and policy heterogeneity at the state level. A national comparison may mask important disaggregated differences at the state level. Further, state-level estimates are important when assessing the impact of the various state-implemented programs such as Medicaid and CHIP. For many states, the CPS-ASEC and now the ACS are the only data available to estimate rates of uninsurance by age, geographic area, and income level. This information is often needed by states to target programs and policies and to monitor implementation of access expansions.

METHODS

Data

The ACS is conducted every year and replaced the decennial census long form in 2010. The sample consists of approximately 3 million households and group quarters. Sampled households and persons in group quarters are required by law to respond, and a sequential multimode design is used to maximize the response rate. Housing unit respondents are first asked to return the written questionnaire by mail. All mail nonrespondents are followed up with a computer-assisted telephone interview (CATI), and one in three CATI nonrespondents is followed up with an in-person interview (CAPI) (U.S. Census Bureau 2009a). The response rate released by the Census Bureau (after all modes have been administered) is calculated as the ratio of the weighted estimate of completed interviews to the weighted estimate of eligible interviews; it was estimated to be 98 percent in 2008 (U.S. Census Bureau 2009f). However, this rate does not include CATI nonrespondents who are not sampled for CAPI. Griffin and Hughes (2010) report that 30 percent of the initial sample does not respond to mail or CATI and is not followed up by CAPI.

Here, we use the ACS Public Use Microdata Sample file (PUMS), a sample of the full ACS data that have been processed to protect confidentiality. The PUMS sample of the full file is designed to be a 1 percent sample of the U.S. population (U.S. Census Bureau 2009c) and includes approximately 65 percent of the cases from the full file (based on authors' calculations).

We also use the publicly available CPS-ASEC, 2009, reflecting health insurance coverage estimates for 2008. The CPS is a labor survey of the U.S. civilian noninstitutional population that is conducted each month in person and by phone. Health insurance and other demographic items supplement the core questionnaire in February–April of each year (the ASEC). Approximately 97,000 households are included in the supplement sample and the response rate to the CPS-ASEC in 2009 was 86.5 percent. Like the ACS PUMS, the public use CPS-ASEC undergoes data-disclosure processing, but unlike the ACS, CPS microdata users have access to the full sample (U.S. Census Bureau 2009e).

Health Insurance

We created three aggregated measures of coverage. Public coverage was defined as Medicare, means-tested coverage (including Medicaid, SCHIP, and state programs), or Department of VA coverage. Private coverage was defined as employer-sponsored insurance (ESI), directly purchased, or TRICARE/other military coverage. Uninsured was defined as lacking any coverage. Consistent with other studies and Census classification, coverage by Indian Health Service alone was considered uninsured. Respondents are allowed to indicate multiple sources of coverage in both surveys. For each type of insurance, we report the percentage of people with that coverage type (regardless of whether it is alone or in combination). We also report the percentage of people with both private and public coverage.
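The aggregation just described can be sketched as a simple classification step; the indicator names below are hypothetical stand-ins for the surveys' coverage variables, not the actual PUMS field names.

```python
def classify_coverage(person):
    """Aggregate coverage indicators into the paper's three measures.

    `person` maps coverage types to booleans; types are not mutually
    exclusive, since respondents may report multiple sources. Indian
    Health Service is deliberately excluded from both aggregates, so an
    IHS-only respondent falls out as uninsured (Census convention).
    """
    public = person["medicare"] or person["means_tested"] or person["va"]
    private = person["esi"] or person["direct_purchase"] or person["tricare"]
    return {
        "public": public,
        "private": private,
        "uninsured": not (public or private),
        "public_and_private": public and private,
    }
```

Because the aggregates are "any coverage of this type," one respondent can count toward both the public and private rates, which is why the tables also report the public-and-private overlap separately.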

Removing the CPS-ASEC Logical Edit

The CPS-ASEC contains indicator variables that flag when a coverage value was obtained from logical editing. These edits are applied to Medicare, Medicaid, Tricare, VA, Indian Health Service, and “other” government coverage. To compare similar data with respect to use of edits, in light of the lack of a coverage edit in the 2008 ACS, we removed specific coverage types for observations that were indicated to be logically edited in the CPS-ASEC. This increased the weighted estimate of uninsurance (among the nonelderly) from 17.3 to 18.3 percent and reduced the estimate of means-tested coverage from 14.9 to 13.1 percent.
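The removal step amounts to consulting each coverage type's edit flag and recoding flagged values. A minimal sketch, with hypothetical field names rather than the actual CPS-ASEC variable names:

```python
# Coverage types subject to the CPS-ASEC logical edit (per the text).
EDITED_TYPES = ["medicare", "medicaid", "tricare", "va", "ihs", "other_gov"]

def remove_logical_edits(record):
    """Recode a coverage type to 'not covered' wherever its companion
    flag indicates the value came from the logical edit rather than
    from a respondent's report. Reported values are left untouched.
    """
    out = dict(record)
    for t in EDITED_TYPES:
        if out.get(f"{t}_edit_flag"):
            out[t] = False
    return out
```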

Universe

We restricted the ACS sample to the U.S. civilian noninstitutional population so that it would be comparable with the CPS-ASEC. We focus on 0–64-year-olds because the elderly are nearly universally covered. Readers should also note that the surveys are controlled to two different populations and use different weighting schemes. The ACS is controlled to the 2008 population estimates (as of July) and the CPS-ASEC to the 2009 population estimates (as of March). The ACS estimates 261.4 million nonelderly in the U.S. civilian noninstitutional population, whereas the CPS-ASEC estimates 263.7 million (Table 1).

Table 1.

Health Insurance Coverage by Age, U.S. Civilian, Noninstitutional Population, 2008 ACS PUMS, and CPS-ASEC, 2009

Each row below reports, in order: ACS All % (SE), ACS 0–18 % (SE), ACS 19–64 % (SE); CPS-ASEC All % (SE), CPS-ASEC 0–18 % (SE), CPS-ASEC 19–64 % (SE); then the ACS − CPS differences for All, 0–18, and 19–64, each followed by its significance marker.

Population (in thousands): ACS — All 261,412; 0–18 78,429; 19–64 182,982. CPS-ASEC — All 263,695; 0–18 78,683; 19–64 185,012.
Uninsured 17.0 0.06 10.4 0.07 19.9 0.07 18.3 0.15 11.4 0.21 21.3 0.18 −1.3 *** −1.0 *** −1.4 ***
Public 16.0 0.05 27.5 0.11 11.0 0.04 15.9 0.14 28.5 0.32 10.6 0.12 0.0 −1.0 ** 0.4 ***
 Means-tested 13.4 0.05 27.0 0.11 7.6 0.03 13.1 0.14 27.8 0.32 6.8 0.10 0.3 * −0.8 * 0.7 ***
 Medicare 2.5 0.01 0.7 0.02 3.3 0.02 2.9 0.06 0.8 0.07 3.8 0.08 −0.4 *** −0.2 ** −0.5 ***
 VA 1.2 0.01 0.1 0.01 1.7 0.01 0.9 0.04 0.4 0.05 1.0 0.05 0.3 *** −0.3 *** 0.6 ***
Private 69.7 0.09 64.4 0.13 71.9 0.08 69.0 0.21 64.8 0.31 70.7 0.21 0.7 ** −0.4 1.2 ***
 ESI 61.4 0.08 56.4 0.12 63.6 0.08 61.9 0.22 58.7 0.30 63.2 0.23 −0.5 −2.3 *** 0.3
 Direct purchase 10.6 0.03 9.4 0.06 11.2 0.03 6.3 0.10 5.1 0.16 6.8 0.10 4.3 *** 4.3 *** 4.3 ***
 TRICARE/military 2.0 0.02 2.3 0.03 1.9 0.01 2.5 0.12 2.6 0.17 2.4 0.11 −0.5 *** −0.3 −0.6 ***
Public and private coverage 2.6 0.02 2.3 0.03 2.8 0.02 3.1 0.07 4.6 0.14 2.4 0.06 −0.4 *** −2.3 *** 0.4 ***

Notes. Due to differences in population control totals and weighting, the population counts do not perfectly align. Logically edited values in the CPS-ASEC have been recoded to their original values.

*p<.05; **p<.01; ***p<.001.

ACS, American Community Survey; CPS, Current Population Survey; ESI, employer-sponsored insurance; SE, standard error expressed as a %; Significance, significance test for the difference of ACS and CPS; VA, Veterans Affairs.

Source: U.S. Census Bureau, 2008 ACS Public Use Microdata Sample and the CPS-ASEC, 2009.

Analysis

We report rates for each coverage type at the national level. At the state level, we report rates of uninsurance and means-tested coverage. Significant differences were determined using a t-test, and p-values are reported at <.05, <.01, and <.001. An F-test was used to test whether differences significantly varied by state. To quantify the relative statistical reliability of the surveys, we report state-specific observation counts and the ratio of the ACS to the CPS-ASEC standard errors for the percentage of uninsured. We repeat the comparison for single-year, 2-year, and 3-year averaged CPS data. To account for the complex sample designs, standard errors in both surveys were calculated using a replicate weight method (U.S. Census Bureau 2009c), and standard errors for averaged CPS-ASEC data were computed using the Census Bureau's recommended procedure (U.S. Census Bureau 2009b, c).
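The testing above can be sketched as follows. We use a normal-approximation z-test, which is effectively equivalent to the paper's t-test at these sample sizes, and the 80-replicate successive-difference variance formula documented for the ACS PUMS; the function names are ours.

```python
import math

def diff_test(p1, se1, p2, se2):
    """Two-sided test for the difference of two estimates from
    independent surveys, using their design-based standard errors."""
    diff = p1 - p2
    z = diff / math.sqrt(se1 ** 2 + se2 ** 2)
    p_value = math.erfc(abs(z) / math.sqrt(2))  # normal approximation
    return diff, z, p_value

def replicate_se(theta_full, theta_reps, factor=4 / 80):
    """Successive-difference replication SE with 80 replicate weights:
    SE = sqrt((4/80) * sum over replicates of (theta_r - theta)^2),
    where theta_r is the estimate recomputed under replicate weight r."""
    return math.sqrt(factor * sum((t - theta_full) ** 2 for t in theta_reps))

# Table 1's national uninsurance comparison: ACS 17.0 (SE 0.06)
# versus CPS-ASEC 18.3 (SE 0.15).
diff, z, p_value = diff_test(17.0, 0.06, 18.3, 0.15)
```

For that comparison the standard error of the difference is about 0.16 points, so the 1.3-point gap is many standard errors wide, consistent with the p<.001 marker in Table 1.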

RESULTS

National Rates by Survey and Age

Table 1 describes coverage rates for the nonelderly. The rate of uninsurance for all nonelderly was 17.0 percent in the ACS and 18.3 percent in the CPS-ASEC, a significant absolute difference of 1.3 percentage points. Both surveys had similar (and not significantly different) levels of public coverage (ACS: 16.0 percent; CPS: 15.9 percent). While differences for specific public coverage types were significant, they did not exceed 0.4 on an absolute scale. In the ACS, the private coverage rate of 69.7 percent was 0.7 points higher (p<.01) than in the CPS-ASEC. The difference was not significant for ESI, but there was a substantial difference for direct purchase: 10.6 percent in the ACS compared with 6.3 percent in the CPS-ASEC (p<.001).

In general, differences across the surveys for children were small. A smaller percentage of children had public coverage in the ACS than in the CPS-ASEC (27.5 versus 28.5 percent; p<.01). There was no evidence of a difference for private coverage overall, but the ACS found significantly less ESI (by 2.3 points) and significantly more direct purchase (by 4.3 points). The rate of holding both private and public coverage was significantly smaller in the ACS (2.3 versus 4.6 percent; p<.001). Higher levels of double coverage in the CPS-ASEC contributed to less overall coverage in that survey but greater levels of any public or any private coverage.

A different pattern emerged for adults. The percentage of nonelderly adults with public coverage was 0.4 percentage points higher in the ACS (p<.001). Adults in the ACS had higher levels of private coverage (71.9 versus 70.7 percent; p<.001) and higher levels of direct purchase coverage (11.2 versus 6.8 percent; p<.001), but the surveys were indistinguishable on ESI. The ACS also found slightly more combined private and public coverage for adults, an absolute difference of 0.4 points (p<.001).

State-Level Comparison of ACS and CPS-ASEC

For all outcomes and age groups, the F-test confirmed significant variation in the differences across states. Table 2 presents uninsurance rates by state from both surveys, for children and adults separately. Absolute differences for children were small, ranging from 0.0 in Washington to 5.2 in Oklahoma, and in most states (38) the ACS counted fewer uninsured, although not all differences were significant. The average difference between surveys was −0.9 points (SD=2.1), and 12 states showed evidence of a statistical difference between surveys. However, the p-values were not corrected for multiple comparisons.

Table 2.

Comparison of ACS and CPS Uninsurance Rates for Children and Adults by State, U.S. Civilian, Noninstitutional Population

Each row below reports, in order: Children (0–18) — ACS % (SE), CPS % (SE), Difference, Significance; Adults (19–64) — ACS % (SE), CPS % (SE), Difference, Significance.
United States 10.4 0.07 11.4 0.2 −1.0 *** 19.9 0.07 21.3 0.18 −1.4 ***
Alabama 8.6 0.48 5.1 1.46 3.4 * 19.3 0.38 18.9 1.73 0.4
Alaska 13.7 1.45 16.0 2.44 −2.3 25.9 1.18 25.4 1.98 0.5
Arizona 16.9 0.50 18.6 2.88 −1.7 23.3 0.35 25.4 1.86 −2.2
Arkansas 9.3 0.66 10.6 1.39 −1.3 25.5 0.55 25.8 1.28 −0.3
California 11.5 0.18 12.4 0.64 −0.9 23.4 0.13 26 0.65 −2.6 ***
Colorado 14.2 0.62 13.8 1.30 0.3 20.6 0.34 20.6 0.95 0.0
Connecticut 5.3 0.42 5.8 0.94 −0.5 11.9 0.37 14 0.94 −2.2 *
Delaware 9.4 1.24 10.8 1.82 −1.4 13.5 0.73 14.2 1.11 −0.8
District of Columbia 3.7 0.82 8.6 1.41 −4.8 ** 9.9 0.57 13 1.12 −3.1 *
Florida 18.4 0.30 18.5 1.41 −0.2 27.4 0.19 27.3 0.94 0.2
Georgia 12.0 0.37 12.5 1.46 −0.6 24.9 0.25 24 1.26 0.9
Hawaii 3.5 0.49 6.0 1.32 −2.5 9.4 0.54 10.6 0.9 −1.2
Idaho 13.7 0.92 10.6 2.01 3.2 22.7 0.70 22.9 2.07 −0.1
Illinois 6.2 0.26 8.3 0.98 −2.2 * 17.7 0.24 18 1 −0.3
Indiana 10.0 0.48 7.1 1.05 2.9 * 17.9 0.30 18.3 1.5 −0.3
Iowa 5.2 0.54 5.7 0.82 −0.5 12.0 0.49 13.5 0.82 −1.5
Kansas 9.3 0.63 12.4 1.94 −3.1 16.5 0.49 15.2 1.26 1.3
Kentucky 6.8 0.38 12.0 1.32 −5.1 *** 19.7 0.34 22.9 1.37 −3.2 *
Louisiana 8.2 0.45 12.5 1.41 −4.2 ** 25.4 0.38 28.4 1.52 −3.0
Maine 6.3 0.71 7.2 1.15 −0.9 14.6 0.64 15.6 1.04 −1.0
Maryland 5.7 0.33 7.4 1.14 −1.7 15.1 0.28 16.9 1.13 −1.8
Massachusetts 2.1 0.20 4.0 0.84 −1.9 * 5.6 0.19 7.9 0.95 −2.3 *
Michigan 5.8 0.28 6.5 0.85 −0.7 16.1 0.25 17.7 1.03 −1.6
Minnesota 6.4 0.43 8.2 1.05 −1.8 11.0 0.27 11.5 1.09 −0.5
Mississippi 13.9 0.75 14.5 2.62 −0.6 23.6 0.54 26 2.17 −2.5
Missouri 7.5 0.34 8.0 1.09 −0.5 17.9 0.33 19.3 1.19 −1.4
Montana 16.0 1.26 12.1 2.75 3.9 23.9 0.95 23.1 1.89 0.9
Nebraska 7.6 0.67 10.5 1.19 −2.9 * 14.5 0.65 15.7 1.09 −1.3
Nevada 21.6 0.78 20.7 2.38 0.9 25.5 0.54 22.7 1.29 2.8 *
New Hampshire 5.5 0.63 4.8 0.88 0.7 15.0 0.66 14.7 0.92 0.3
New Jersey 7.7 0.31 12.3 1.34 −4.6 *** 16.6 0.25 18.4 1.09 −1.8
New Mexico 14.1 0.73 17.1 2.60 −3.0 28.9 0.63 32.7 2.8 −3.8
New York 6.3 0.18 8.6 0.80 −2.3 ** 16.1 0.18 20.1 0.75 −4.0 ***
North Carolina 10.6 0.33 10.9 1.62 −0.3 21.4 0.24 21.7 1.14 −0.2
North Dakota 7.6 1.28 9.1 1.95 −1.4 12.5 0.88 16.1 1.69 −3.6
Ohio 7.6 0.29 7.1 0.85 0.5 15.9 0.20 17 0.99 −1.0
Oklahoma 12.7 0.44 7.5 1.16 5.2 *** 26.1 0.43 21.7 1.44 4.4 **
Oregon 12.8 0.68 12.1 1.83 0.6 21.1 0.38 22.3 1.75 −1.1
Pennsylvania 6.7 0.34 7.8 1.38 −1.1 12.8 0.19 13.7 0.9 −0.9
Rhode Island 5.9 0.70 9.8 1.44 −3.9 * 14.2 0.67 16.4 1.22 −2.3
South Carolina 12.4 0.51 15.6 1.98 −3.2 22.4 0.38 21.9 1.44 0.4
South Dakota 10.0 1.17 11.4 2.23 −1.4 14.8 0.91 17.3 1.63 −2.6
Tennessee 7.8 0.38 10.4 1.57 −2.7 18.8 0.33 21.4 1.58 −2.5
Texas 18.3 0.26 20.1 0.95 −1.7 30.4 0.17 33.4 0.92 −3.0 **
Utah 12.8 0.69 10.1 1.76 2.7 18.5 0.47 17.3 1.59 1.2
Vermont 4.4 0.89 5.0 1.10 −0.6 12.3 0.87 13.7 1.19 −1.5
Virginia 7.9 0.33 8.0 1.22 0.0 15.5 0.26 17 0.84 −1.4
Washington 9.0 0.43 9.0 1.30 0.0 17.2 0.32 17.3 1.16 −0.1
West Virginia 7.2 0.65 8.0 1.53 −0.8 22.1 0.64 23.1 1.56 −1.0
Wisconsin 5.3 0.37 6.4 1.29 −1.1 12.1 0.32 13.3 1.12 −1.2
Wyoming 9.8 1.35 9.6 2.13 0.2 19.1 0.88 19.6 1.89 −0.5
State Ave. (SD) −0.9 (2.1) −1.0 (1.6)
F-test *** ***

Notes. Effort was made to harmonize the populations represented by the two surveys; however, some differences remain due to differences in population control totals. Logically edited values in the CPS-ASEC have been recoded to their original values.

*p<.05; **p<.01; ***p<.001.

ACS, American Community Survey; CPS, Current Population Survey; SE, standard error expressed as a %; Significance, significance test for the difference of ACS and CPS.

Source: U.S. Census Bureau 2008 American Community Survey, Public Use Microdata Sample and CPS-ASEC 2009.

Results for nonelderly adults were nearly identical to those for children, with the ACS finding slightly less uninsurance on average. The smallest absolute difference was found in Washington (0.1) and the largest in Oklahoma (4.4); the average difference was −1.0 (SD=1.6), and nine states showed evidence of a statistical difference.

Table 3 shows rates of any means-tested coverage. On average, the ACS found less means-tested coverage for children; the average difference was −0.4 (SD=3.5). The largest difference was for Hawaii (10 points lower in the ACS) and the smallest for Idaho (no difference). Unlike Table 2, there was no consistent pattern in the direction of the difference (the ACS rate was higher in 23 states and lower in 28). In New Mexico and Louisiana, the ACS point estimate was 9–10 points higher than the CPS-ASEC. Despite these differences, there was little evidence of statistical difference. Only one state (HI) met our strictest test (p<.001), and five others met looser tests (AK, CA, IN, LA, NM).

Table 3.

Comparison of ACS and CPS Means-Tested Program Rates for Children and Adults by State, U.S. Civilian, Noninstitutional Population

Each row below reports, in order: Children (0–18) — ACS % (SE), CPS % (SE), Difference, Significance; Adults (19–64) — ACS % (SE), CPS % (SE), Difference, Significance.
United States 27.0 0.11 27.8 0.32 −0.8 * 7.6 0.03 6.8 0.10 0.7 ***
Alabama 30.7 0.73 32.7 3.2 −2.0 7.2 0.22 7.0 1.06 0.2
Alaska 23.8 1.79 18.0 2.2 5.8 * 5.3 0.44 6.3 0.92 −0.9
Arizona 28.2 0.68 31.5 3.0 −3.4 10.4 0.27 9.6 0.94 0.8
Arkansas 42.9 1.02 41.9 3.6 1.0 7.6 0.26 4.5 0.65 3.1 ***
California 29.5 0.31 31.8 1.0 −2.3 * 8.6 0.10 7.9 0.37 0.8 *
Colorado 17.8 0.63 17.0 1.8 0.9 4.6 0.19 4.1 0.49 0.5
Connecticut 21.4 0.90 24.5 2.0 −3.1 7.9 0.29 7.5 0.67 0.4
Delaware 23.8 1.80 21.0 2.3 2.8 9.9 0.71 8.2 0.85 1.7
District of Columbia 42.5 2.41 40.6 2.8 1.9 16.7 0.65 13.8 1.07 3.0 *
Florida 23.9 0.36 22.4 1.7 1.5 5.5 0.10 4.9 0.47 0.5
Georgia 30.7 0.39 28.3 2.1 2.3 5.1 0.14 3.6 0.43 1.6 ***
Hawaii 19.0 1.38 28.9 2.6 −9.9 *** 6.5 0.40 7.2 0.91 −0.7
Idaho 23.8 1.15 23.7 2.7 0.0 4.3 0.33 3.3 0.74 1.0
Illinois 29.0 0.37 28.2 1.9 0.8 7.8 0.15 6.8 0.49 0.9
Indiana 24.7 0.58 32.0 3.1 −7.3 * 6.5 0.16 6.0 0.69 0.5
Iowa 24.2 1.00 24.9 2.1 −0.6 6.7 0.31 5.7 0.76 1.0
Kansas 20.9 0.88 24.2 2.7 −3.3 4.2 0.22 4.0 0.69 0.2
Kentucky 32.5 0.76 32.5 2.5 −0.1 8.7 0.23 6.7 0.85 2.0 *
Louisiana 41.7 0.82 32.6 3.0 9.2 ** 7.8 0.25 6.1 0.82 1.6
Maine 32.9 1.55 31.2 2.2 1.7 15.7 0.72 13.1 1.05 2.6 *
Maryland 21.9 0.55 21.5 1.6 0.4 4.9 0.18 4.0 0.53 0.9
Massachusetts 22.8 0.52 23.9 2.4 −1.1 12.8 0.25 13.7 1.07 −0.9
Michigan 29.5 0.54 27.6 2.0 1.9 9.9 0.15 8.0 0.73 1.9 *
Minnesota 16.9 0.69 21.0 3.2 −4.1 8.1 0.24 9.6 1.98 −1.5
Mississippi 38.5 0.95 38.6 3.4 −0.1 9.1 0.32 8.5 1.19 0.7
Missouri 28.0 0.59 25.0 2.8 3.0 7.0 0.19 6.3 0.88 0.7
Montana 19.8 1.36 25.3 3.7 −5.5 5.3 0.45 5.6 0.91 −0.3
Nebraska 19.6 1.14 23.8 3.2 −4.1 4.6 0.37 5.3 0.48 −0.7
Nevada 12.8 0.84 13.8 2.1 −1.0 3.3 0.21 3.6 0.49 −0.2
New Hampshire 19.2 1.06 19.8 1.7 −0.6 4.4 0.37 3.6 0.50 0.8
New Jersey 19.2 0.46 17.4 2.0 1.8 6.0 0.16 4.6 0.55 1.4 *
New Mexico 41.4 1.34 31.7 3.9 9.8 * 9.2 0.37 7.8 0.90 1.4
New York 29.5 0.38 32.6 1.6 −3.1 12.5 0.15 11.8 0.56 0.8
North Carolina 30.4 0.48 28.8 2.2 1.6 6.6 0.16 6.6 0.81 −0.1
North Dakota 15.9 1.77 18.0 3.5 −2.1 5.0 0.52 4.6 0.69 0.4
Ohio 24.8 0.47 26.7 1.8 −1.9 8.1 0.14 7.1 0.61 1.0
Oklahoma 33.5 1.01 35.5 2.8 −2.0 6.2 0.25 4.9 0.78 1.2
Oregon 20.3 0.81 24.9 3.1 −4.6 5.5 0.22 5.1 0.53 0.3
Pennsylvania 25.8 0.50 26.3 1.9 −0.5 8.1 0.16 7.2 0.55 0.9
Rhode Island 22.9 1.43 27.4 2.4 −4.4 8.7 0.45 8.6 0.80 0.1
South Carolina 28.1 0.63 24.3 2.2 3.9 7.3 0.23 5.6 0.77 1.7 *
South Dakota 27.8 2.22 26.0 2.4 1.8 5.3 0.55 4.2 0.53 1.1
Tennessee 29.5 0.58 34.5 2.9 −5.0 9.3 0.19 10.0 0.99 −0.8
Texas 29.2 0.24 31.6 1.5 −2.4 4.8 0.09 4.0 0.30 0.8 *
Utah 13.0 0.71 9.9 1.7 3.1 4.5 0.23 3.3 0.55 1.2
Vermont 35.0 2.16 35.5 2.4 −0.4 14.9 0.84 13.2 1.03 1.7
Virginia 18.2 0.44 18.9 2.4 −0.7 4.2 0.15 3.9 0.64 0.3
Washington 24.9 0.58 25.9 2.5 −1.0 6.9 0.13 6.8 0.61 0.2
West Virginia 35.8 1.36 34.0 3.1 1.8 8.8 0.40 6.8 1.06 2.0
Wisconsin 22.1 0.68 24.5 2.0 −2.4 8.5 0.21 9.4 0.82 −0.9
Wyoming 23.8 1.92 21.9 2.2 1.9 4.3 0.50 4.4 0.73 −0.1
State Ave. (SD) −0.4 (3.5) 0.7 (1.0)
F-test *** ***

Notes. Effort was made to harmonize the populations represented by the two surveys; however, some differences remain due to differences in population control totals. Logically edited values in the CPS-ASEC have been recoded to their original values.

*p<.05; **p<.01; ***p<.001.

ACS, American Community Survey; CPS, Current Population Survey; SE, standard error expressed as a %; Significance, significance test for the difference of ACS and CPS.

Source: U.S. Census Bureau 2008 American Community Survey, Public Use Microdata Sample and CPS-ASEC 2009.

Adults showed a different pattern. On average, the ACS rate was 0.7 points higher than the CPS-ASEC (SD=1.0), and the ACS found a higher percentage of means-tested coverage in the vast majority of states (43). The absolute differences ranged from 0.1 in Wyoming to 3.1 in Arkansas, and the surveys produced statistically different results in 10 states, but only 2 showed substantial evidence (p<.001).

Comparison of Standard Errors

Table S1 includes observation counts and standard errors for the percentage of nonelderly uninsured in single-year ACS and single-year, 2-year, and 3-year averaged CPS-ASEC data. At the national level, the ACS produces standard errors that are nearly 60 percent smaller than single-year CPS-ASEC data and about 50 percent smaller than 2- and 3-year averaged CPS-ASEC data. Because the CPS-ASEC oversamples smaller states and the ACS does not, the results vary by state (U.S. Census Bureau 2009a, e). Compared with the single-year CPS-ASEC, the standard error of ACS estimates ranges from 80 percent smaller in larger states (e.g., NY, CA, PA) to 20 percent smaller in smaller states (e.g., VT and NH). These results hold for averaged data as well. For 3-year data, the standard error in some states is nearly identical between surveys (e.g., WY, VT, ME), but the ACS's advantage remains for larger states such as CA, where the ACS standard error is 70 percent smaller.

DISCUSSION

The ACS and CPS-ASEC are the only federal data sources that produce annual state-level health insurance coverage estimates for all age groups. Overall, these surveys produce strikingly similar estimates of health insurance coverage despite several methodological differences. The design differences we outline in this paper are manifold and include the reference period, mode of administration, and lack of program name fills in the ACS. We removed the CPS-ASEC's logical edit so that the differences we report (but not the respective coverage levels) would be more representative of these design differences and of future data years. Nonetheless, these design differences do not appear to cause vastly divergent estimates, either at the national level or for most states. We expect that the differences we observed for uninsurance will shrink once the Census Bureau corrects the imputation bias in the CPS-ASEC, a change that should reduce the CPS-ASEC's uninsured estimate (Davern et al. 2007).

The similarity of the ACS and CPS-ASEC estimates likely arises from an accumulation of error rather than a lack of bias. Error in the CPS-ASEC biases overall coverage downward. Previous studies have found that a minority of respondents misinterpret the reference period as PIT, and that many others correctly identify the reference period but respond inaccurately because of the lengthy recall period (Pascale 2008; Klerman et al. 2009). Finally, downward bias is exerted by the imputation routine (Davern et al. 2007). In the ACS, the error in overall coverage is also likely biased downward, as it is in nearly every survey. However, we suspect that the error in the ACS, relative to the concept it attempts to measure (PIT), is smaller than in the CPS-ASEC. This proposition is motivated by previous research finding that CPS-ASEC error results in an estimate that mimics other surveys' (e.g., NHIS) PIT measures. Because the ACS closely tracks the CPS-ASEC in many instances, and because we were unable to find any serious anomalies in the ACS insured/uninsured estimate, we believe that the ACS is a reliable PIT measure for the populations we considered. However, there are several potential sources of error in the ACS. These include the lack of local program names, which may cause inaccurate reporting of public program enrollment. Additionally, no previous federal health insurance survey has relied on written administration, as roughly 50 percent of the ACS sample does. The ability of a mail instrument, which lacks complex skip patterns, a verification item, and the assistance of an interviewer, to measure health insurance coverage is still poorly understood and in need of further research.

We did find some differences between the surveys in the coverage type distribution and in some estimates at the state level. In a few instances these differences were substantial. For example, the two surveys' means-tested rates diverged by 10 percentage points in HI and NM for children under 19. Our F-test results showed that the state-level differences were not random variations around a common mean but varied significantly across states. The heterogeneity in the differences of child means-tested rates could be driven by the relative accuracy and completeness of the CPS-ASEC name fills and/or demographic differences across the states.

Our results suggest that the two surveys classify insurance type differently. The most serious issue we identified in the ACS was the direct purchase rate. Even though the level of any insurance was similar between sources, the ACS direct purchase rate was more than 1.5 times the CPS-ASEC rate for all ages, and this relationship held for children and adults separately. This result is particularly worrisome because the CPS-ASEC is known to overcount direct purchase (Cantor et al. 2006). Some authors have suggested that Medicaid enrollees in managed care and/or CHIP enrollees misreport their coverage as privately obtained (Lo Sasso and Buchmueller 2004; Cantor et al. 2006; Chattopadhyay and Bindman 2006). Such a process might occur in the ACS and be exacerbated by the lack of local program names.

However, surveys of Medicaid enrollees have found that those in managed care are more accurate reporters than other types of beneficiaries (Call et al. 2008). Other studies have found that Medicaid enrollees generally report that they have public coverage (rather than private), even though they may misreport the type of public coverage they have (Call et al. 2001). So the direct purchase result is likely not driven entirely by those with public coverage. There is good reason to believe the direct purchase item is picking up people with ESI: unlike the CPS-ASEC, the ACS direct purchase item does not remind respondents that direct purchase excludes coverage from current or former employers. Excess direct purchase may also arise from people reporting about single-service plans, such as dental insurance, or it could be an artifact of the mode of administration. Understanding the ACS's direct purchase estimate and its implications for misclassification bias will be an important area of future research. This research may show that small modifications to the item, such as adding information about what direct purchase is not, would be beneficial. It may also yield strategies for postcollection adjustments.

Identifying sources of error in the ACS and putting reasonable bounds on the level of bias will be an important area of future research. Studies that link survey data to administrative records will provide insight into the extent to which public coverage is being undercounted in the ACS. However, linking the ACS to administrative data will pose a different set of challenges than those faced in previous CPS linking projects (Davern et al. 2009). On the one hand, the ACS universe (i.e., its inclusion of housing units and all group quarter types) is more closely aligned with administrative data than the CPS-ASEC's is. On the other, there is no way to separate Medicaid coverage from other means-tested coverage in the ACS, and efforts to align the two concepts may prove intractable. An additional stream of experimental research that fields ACS-style instruments to people whose insurance status is known a priori may be able to overcome these difficulties.

Our comparison of standard errors showed that ACS estimates have smaller variance than single or multiple years of CPS-ASEC data, highlighting a key advantage of the ACS. The standard errors for uninsurance in the largest states in the ACS were about equal to the single-year standard error for the United States as a whole in the CPS-ASEC. However, there was considerable variation across states: standard errors were anywhere from 70 percent smaller to 10 percent larger when compared with 3-year averaged CPS-ASEC data. The standard error ratios vary by state, in part, because the CPS-ASEC oversamples smaller states and the ACS does not (U.S. Census Bureau 2009a, e). The CPS-ASEC may be better (or equally) suited for contrasts across time because its sample comes from a rotating panel; roughly 30 percent of the sample is the same in adjacent years. Future analyses of the variance of cross-year differences are needed.
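The rotating-panel point can be made concrete with the standard variance formula for a difference of two correlated estimates, Var(x2 − x1) = Var(x1) + Var(x2) − 2·Cov(x1, x2). The sketch below uses hypothetical standard errors and an assumed correlation; the correlation of 0.3 is purely illustrative and is not an estimate derived from the CPS-ASEC design:

```python
import math

def se_of_difference(se1, se2, corr):
    """Standard error of (x2 - x1) given each year's SE and their correlation:
    Var(x2 - x1) = Var(x1) + Var(x2) - 2 * corr * se1 * se2."""
    return math.sqrt(se1**2 + se2**2 - 2.0 * corr * se1 * se2)

# Hypothetical SEs (percentage points) for a state's uninsurance rate in two years:
se1, se2 = 0.5, 0.5

independent = se_of_difference(se1, se2, corr=0.0)  # no sample overlap
overlapping = se_of_difference(se1, se2, corr=0.3)  # positive correlation from overlap

print(f"SE of year-to-year difference, independent samples:  {independent:.3f}")
print(f"SE of year-to-year difference, overlapping samples:  {overlapping:.3f}")
```

Because the covariance term is positive when part of the sample repeats, the overlapping design yields a smaller standard error for the cross-year difference than two fully independent samples with the same per-year precision.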

CONCLUSIONS

The ACS is well poised to become a critically important source of information about health insurance coverage in the United States, particularly for small subgroups and localized populations. The distribution of health insurance coverage in the United States will undoubtedly change in the coming years due to national and state-based health reform. In order to accurately track and explain the complexities of this process, the research and policy community will need to avail itself of several data resources, including the ACS, CPS-ASEC, state surveys, and administrative records.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: We are under contract with the U.S. Census Bureau to evaluate the ACS's health insurance estimates. Our relationship with the Census Bureau has not influenced our interpretation of the results presented in this manuscript nor did that contract support our time while conceptualizing or drafting the manuscript.

Preparation of this manuscript was funded by grant no. 065902 from the Robert Wood Johnson Foundation to the State Health Access Data Assistance Center. This paper does not necessarily reflect the opinion of NORC at the University of Chicago.

Disclosures: None.

Disclaimers: None.

NOTE

1

To view the item as it appears in the ACS instrument, visit http://www.census.gov/acs/www/SBasics/Information/health_ins.htm

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

hesr0046-0210-SD1.doc (80.5KB, doc)

Table SA1: Comparison of ACS and CPS Sample Size and Uninsurance Standard Error, Non-Elderly U.S. Civilian Noninstitutional Population (Obs in thousands).

hesr0046-0210-SD2.doc (117.5KB, doc)

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

REFERENCES

  1. Blewett L, Davern M. Meeting the Need for State Level Estimates of Health Insurance Coverage: Use of State and Federal Survey Data. Health Services Research. 2006;41(3, Part I):946–75. doi: 10.1111/j.1475-6773.2006.00543.x.
  2. Blewett L, Davern M. Distributing State Children's Health Insurance Program Funds: A Critical Review of the Design and Implementation of the Funding Formula. Journal of Health Policy, Politics, and Law. 2007;32(3):415–54. doi: 10.1215/03616878-2007-010.
  3. Call KT, Davidson G, Davern M, Nyman R. Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence. Health Services Research. 2008;43(3):901–14. doi: 10.1111/j.1475-6773.2007.00808.x.
  4. Call KT, Davidson G, Sommers AS, Feldman R, Farseth P, Rockwood T. Uncovering the Missing Medicaid Cases and Assessing Their Bias for Estimates of the Uninsured. Inquiry. 2001;38(4):396–408. doi: 10.5034/inquiryjrnl_38.4.396.
  5. Cantor JC, Monheit AC, Brownlee S, Schneider C. The Adequacy of Household Survey Data for Evaluating the Nongroup Health Insurance Market. Health Services Research. 2006;42(4):1739–57. doi: 10.1111/j.1475-6773.2006.00662.x.
  6. Chattopadhyay A, Bindman AB. The Contribution of Medicaid Managed Care to the Increasing Undercount of Medicaid Beneficiaries in the Current Population Survey. Medical Care. 2006;44(9):822–6. doi: 10.1097/01.mlr.0000218835.78953.53.
  7. Congressional Budget Office (CBO). 2003. "How Many People Lack Health Insurance and for How Long?" [accessed on May 15, 2010]. Available at: http://www.cbo.gov/ftpdocs/42xx/doc4210/05-12-Uninsured.pdf.
  8. Davern M, Blewett LA, Bershadsky B, Arnold N. Missing the Mark? Imputation Bias in the Current Population Survey's State Income and Health Insurance Coverage Estimates. Journal of Official Statistics. 2004;20(3):519–50.
  9. Davern M, Quinn BC, Kenney GM, Blewett LA. The American Community Survey and Health Insurance Coverage Estimates: Possibilities and Challenges for Health Policy Research. Health Services Research. 2009;44(2, Part 1):593–605. doi: 10.1111/j.1475-6773.2008.00921.x.
  10. Davern M, Rodin H, Blewett LA, Call KT. Are the Current Population Survey Uninsurance Estimates Too High? An Examination of the Imputation Process. Health Services Research. 2007;42(5):2038–55. doi: 10.1111/j.1475-6773.2007.00703.x.
  11. Eberly T, Pohl MB, Davis S. Undercounting Medicaid Enrollment in Maryland: Testing the Accuracy of the Current Population Survey. Population Research and Policy Review. 2009;28:221–36.
  12. Griffin D, Hughes TR. Mixed Mode Data Collection Methods in the American Community Survey. Presented at the Annual Meeting of the American Association for Public Opinion Research; Chicago, IL. 2010.
  13. Klerman JA, Davern M, Call KT, Lynch V, Ringel J. Understanding the Current Population Survey's Insurance Estimates and the Medicaid ‘Undercount.’ Health Affairs. 2009;28(6):w991–w1001. doi: 10.1377/hlthaff.28.6.w991.
  14. Lo Sasso AT, Buchmueller TC. The Effect of the State Children's Health Insurance Program on Health Insurance Coverage. Journal of Health Economics. 2004;23(5):1059–82. doi: 10.1016/j.jhealeco.2004.03.006.
  15. Lynch V, Boudreaux M, Davern M. 2010. "Applying and Evaluating Logical Coverage Edits to Health Insurance Coverage in the American Community Survey" [accessed on August 22, 2010]. Available at: http://www.census.gov/hhes/www/hlthins/publications/coverage_edits_final.pdf.
  16. Nelson CT, Mills RJ. 2001. "The March CPS Health Insurance Verification Question and Its Effect on Estimates of the Uninsured" [accessed on March 5, 2010]. Available at: http://www.census.gov/hhes/www/hlthins/verif.html.
  17. Pascale J. Measurement Error in Health Insurance Reporting. Inquiry. 2008;45:422–37. doi: 10.5034/inquiryjrnl_45.04.422.
  18. Rajan S, Zuckerman S, Brennan N. Confirming Insurance Coverage in a Telephone Survey: Evidence from the National Survey of America's Families. Inquiry. 2000;37(3):317–27.
  19. State Health Access Data Assistance Center. 2009. "Comparing Federal Government Surveys That Count Uninsured People in America, September 2009" [accessed on January 4, 2010]. Available at: http://www.shadac.org/publications/comparing-federal-government-surveys-count-uninsured-people-in-america-sept-2009.
  20. Swartz K. Interpreting the Estimates from Four National Surveys of the Number of People without Health Insurance. Journal of Economic and Social Measurement. 1986;14(3):233–42.
  21. Turner J, Boudreaux M, Lynch V. 2009. "A Preliminary Evaluation of Health Insurance Coverage in the 2008 American Community Survey" [accessed on November 15, 2009]. Available at: http://www.census.gov/hhes/www/hlthins/acs08paper/index.html.
  22. U.S. Census Bureau. Design and Methodology: American Community Survey. Washington, DC: U.S. Government Printing Office; 2009a.
  23. U.S. Census Bureau. 2009b. "Estimating ASEC Variances with Replicate Weights" [accessed on January 2, 2010]. Available at: http://www.bls.census.gov/cps_ftp.html.
  24. U.S. Census Bureau. 2009c. "PUMS Accuracy of the Data (2008)" [accessed on December 27, 2009]. Available at: http://www.census.gov/acs/www/Downloads/2008/AccuracyPUMS.pdf.
  25. U.S. Census Bureau. 2009d. "Source and Accuracy of Estimates for Income, Poverty, and Health Insurance" [accessed on January 15, 2010]. Available at: http://www.census.gov/hhes/www/p60_236sa.pdf.
  26. U.S. Census Bureau. 2009e. "Technical Documentation: Current Population Survey, 2009 Annual Social and Economic Supplement" [accessed on January 2, 2010]. Available at: http://www.census.gov/apsd/techdoc/cps/cpsmar09.pdf.
  27. U.S. Census Bureau. 2009f. "Using the Data: Quality Measures" [accessed on April 22, 2010]. Available at: http://www.census.gov/acs/www/methodology/sample_size_and_data_quality/
