Abstract
Ownership of a bank account is an objective measure and should be relatively easy to elicit via survey questions. Yet, depending on the interview mode, the wording of the question and its placement within the survey may influence respondents’ answers. The Health and Retirement Study (HRS) asset module, as administered online to members of the Understanding America Study (UAS), yielded substantially lower rates of reported bank account ownership than either a single question on ownership in the Current Population Survey (CPS) or the full asset module administered to HRS panelists (both interviewer-administered surveys). We designed and implemented an experiment in the UAS comparing the original HRS question eliciting bank account ownership with two alternative versions that were progressively simplified. We document strong evidence that the original question leads to systematic underestimation of bank account ownership. In contrast, the proportion of bank account owners obtained from the simplest alternative version of the question is very similar to the population benchmark estimate. We investigate treatment effect heterogeneity by cognitive ability and financial literacy. We find that questionnaire simplification affects responses of individuals with higher cognitive ability substantially less than those with lower cognitive ability. Our results suggest that high-quality data from surveys start from asking the right questions, which should be as simple and precise as possible and carefully adapted to the mode of interview.
Keywords: Bank account ownership, Cognitive ability, Experiment, Online probability panel, Question wording
1. INTRODUCTION
In this paper, we report on a simple question-wording experiment motivated by an empirical observation. In a series of questions asked in a probability-based online panel, the rate of bank account ownership was found to be substantially lower than that obtained from two interviewer-administered surveys. We surmised that the explanation lay in the question’s wording rather than the mode of data collection, and designed an experiment to test simplified versions of the question. We first describe the context of the experiment and the original question about bank account ownership, before presenting the experiment itself and the results of our analysis.
2. BACKGROUND AND CONTEXT
The context of this experiment is the Understanding America Study (UAS), a nationally representative Internet panel of around 6,000 individuals at the time the experiment was conducted (see https://uasdata.usc.edu), drawn via address-based sampling from the US population aged 18 years and older. Included among the numerous surveys administered in the UAS is a replica of the Health and Retirement Study (HRS) questionnaire. (A comparison of survey outcomes between the HRS and its replica in the UAS can be found in Angrisani, Finley, and Kapteyn (2019).) While the survey instrument is the same, there are important differences between the HRS and its UAS replica, some of which are particularly relevant for our study. First, the HRS core interviews are conducted either face-to-face or by telephone. (In 2018, the HRS introduced an experiment testing a sequential mixed-mode Web-telephone approach (see Ofstedal, Kézdi, and Couper 2021).) In contrast, interviews in the UAS are self-administered online. Second, the HRS surveys individuals over the age of 50 years (and their spouses regardless of age), whereas the UAS–HRS is administered to all panel members with no age-eligibility requirements. Third, since UAS surveys do not typically exceed 30 minutes of interview time, the relatively lengthy HRS instrument is divided into a series of stand-alone subsurveys (e.g., employment module and asset module). Finally, similarly to the HRS, UAS participants are invited to answer the HRS questionnaire every two years. However, they take the surveys at different points in time, depending on when they completed the previous HRS series.
3. BANK ACCOUNT OWNERSHIP IN THE UAS–HRS
The HRS module about financial assets is very detailed and comprehensive (see https://hrs.isr.umich.edu). It elicits ownership of various financial assets and, conditional on ownership, value and other asset details. In the following order, the questionnaire asks about 1) IRAs and KEOGH accounts; 2) pension accounts (e.g., 401k); 3) annuities; 4) stocks or stock mutual funds; 5) bonds or bond funds; 6) checking or savings accounts and money market funds; and 7) certificates of deposit (CDs), government savings bonds, or Treasury bills. The format and order of these questions have been essentially unchanged since the first wave of the HRS in 1992.
Typically, a household is considered “banked” if its members own a checking or savings account (Federal Deposit Insurance Corporation 2018). In the HRS questionnaire, banked individuals are those who answer the following question affirmatively:
Aside from anything you have already mentioned, do you (or your husband/wife/partner/spouse) have any checking or savings accounts or money market funds? Please do not include certificates of deposit (CDs).
In Wave 1 of the UAS–HRS, collected between 2015 and 2017, the weighted fraction of banked respondents was 55 percent among 5,100 survey participants. In contrast, the rate of bank account ownership in the United States in 2017, ascertained from a supplement to the interviewer-administered Current Population Survey (CPS), was 93.5 percent (Federal Deposit Insurance Corporation [FDIC] 2018). The CPS question was worded as follows: "Do you (or anyone else in your household) have a checking or savings account now?" For the population older than 50, the CPS estimate of banked households in 2017 was 95.9 percent, compared with the HRS estimate from 2016 of 79.2 percent, and the UAS–HRS estimate from 2015 to 2017 of 64.4 percent.
These substantial discrepancies in estimates raised questions about possible reasons for the difference. We identified a number of potential explanations.
Selection or nonresponse bias: CPS employs a rotating panel design (see U.S. Census Bureau 2019) and achieved a response rate of 86 percent for the 2017 supplement (FDIC 2018). Both HRS and UAS are panel studies with periodic refreshment. HRS reports response rates in the high 80s (87.1 percent in 2014; see https://hrs.isr.umich.edu). The UAS recruitment rate is reported between 10 percent and 12 percent (see https://uasdata.usc.edu/page/Response+And+Attrition), with response to any particular survey in the high 80s. Despite the differences in sample design and response rates, we think this is unlikely to account for the large differences we observe.
Mode: As noted above, HRS and CPS are both interviewer-administered, while UAS is self-administered. In a mode experiment in the 2018 HRS, Ofstedal, Kézdi, and Couper (2021) report bank account ownership rates of 85.3 percent for the interviewer-administered mode and 77.0 percent for the web-first mode; however, they document no differences in rates of ownership of other assets by mode. Other papers exploring mode effects have found little evidence of differences in nonsensitive factual questions (see, e.g., Allum, Conrad, and Wenz 2018; Felderer, Kirchner, and Kreuter 2019; Burton and Jäckle 2020). This suggests that mode—likely due to the presence of the interviewer to clarify the question, rather than social desirability—may account for some of the difference we observe, but is unlikely to fully explain the large gap in ownership between the UAS–HRS and CPS estimates.
Motivated misreporting: Another possible explanation is that in the HRS and UAS–HRS, respondents may underreport asset ownership to avoid the extensive follow-up questions asked of those answering in the affirmative (see Eckman, Kreuter, Kirchner, Jäckle, Tourangeau, et al. 2014; Bach and Eckman 2018). In both these surveys, the bank account question comes after a series of questions on other types of assets. This is not the case in the CPS. This explanation should affect the HRS and UAS–HRS equally, however, and so does not account for the substantial discrepancy between these two surveys.
Question wording: Our final explanation relates to the complexity of the question’s wording in the HRS and UAS–HRS. We speculated that the discrepancy could stem from the combination of the wording and placement of the question within the module. Specifically, after a series of questions about various financial assets, the expression Aside from anything you have already mentioned may trigger negative answers. Respondents may not remember the content of previous questions and think they already reported on a common financial asset such as checking/savings accounts. Furthermore, the note Please do not include CDs may cause confusion given that no previous question asked about CDs and, more importantly, respondents may be less familiar with this type of asset, again inducing negative responses. The HRS question can be viewed as violating Schaeffer and Dykema’s (2020, p. 44) recommendation to integrate definitions (including inclusions and exclusions) into the question and to place them before the target question requests an answer. These possible effects are likely to be more apparent when the survey is self-administered than when it is administered by an interviewer who can offer clarifications and resolve respondent uncertainty.
Based on these considerations, we designed an experiment to test the hypothesis that simplifying the question wording would reduce the discrepancy between the bank account ownership estimate from UAS–HRS and the benchmark estimate from the CPS. We simplified the question about bank account ownership while keeping the question’s placement within the survey the same.
4. THE EXPERIMENT: SETUP AND RESULTS
4.1 Experimental Questionnaire
We designed an experimental survey about financial assets, which we administered to the entire UAS pool of panelists in December 2017 (Understanding America Study 2017). A total of 5,988 UAS members were invited to the survey, of whom 4,601 completed it, for a participation rate (American Association for Public Opinion Research [AAPOR] 2016) of 77 percent. The survey codebook and data are available at https://uasdata.usc.edu/survey/UAS+117. To mimic the structure of the HRS questionnaire, we asked about ownership of financial assets in the following order: (1) stocks or stock mutual funds; (2) bonds or bond funds; (3) CDs, government savings bonds, or treasury-bills; and (4) checking or savings accounts, or money market funds. Unlike the HRS and UAS–HRS, we did not include follow-up questions about asset values asked of asset owners. We randomly assigned respondents to one of three different questions about bank account ownership using an increasing level of simplification:
Original HRS: Aside from anything you have already mentioned, do you or your spouse/partner have any checking or savings accounts or money market funds? Please do not include CDs.
Simple I: Do you or your spouse/partner have any checking or savings accounts or money market funds?
Simple II: Do you or your spouse/partner have any checking or savings accounts? Do you or your spouse/partner have any money market funds?
In the Simple I version, we removed the exclusionary text before and after the question. In Simple II, we separated the question into two parts. These changes are consistent with the literature on question wording (see, e.g., Schaeffer and Dykema 2020). Further, the changes increase the Flesch reading ease score (as measured in Microsoft Word) from 54.6 for the original to 68.9 for Simple I and 77.4 for Simple II (with a corresponding decline in the Flesch–Kincaid reading grade level).
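The Flesch reading ease score reported above follows a standard published formula based on average sentence length and syllables per word; a minimal sketch of that formula (Microsoft Word applies the same formula but counts sentences and syllables with its own heuristics, so its scores may differ slightly):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch reading ease formula: higher scores indicate
    easier text. Inputs are raw counts for the passage being scored."""
    return (206.835
            - 1.015 * (words / sentences)
            - 84.6 * (syllables / words))
```

For example, a one-sentence, 10-word question averaging 1.3 syllables per word scores about 86.7, well into the "easy" range, whereas longer sentences and more polysyllabic terms (e.g., "certificates of deposit") pull the score down.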
4.2 Methods
The number of respondents who completed the experimental module and answered all the financial asset ownership questions was 4,515. Of these survey participants, 1,481 received the HRS version of the bank account ownership question, 1,517 the Simple I version, and 1,517 the Simple II version. These three groups were well-balanced in terms of individual characteristics used in the analysis, namely demographics, cognitive ability, and financial literacy scores (see the Online Appendix Table A.1). To enhance comparability across treatments, we created a bank account ownership indicator for the Simple II version that takes value 1 if the respondent answers affirmatively either the question about checking/savings accounts or the one about money market funds. Using a narrower definition of bank account ownership for the Simple II version that only considers ownership of checking/savings accounts leads to the same conclusions.
We estimate treatment effects using OLS with heteroscedasticity-robust standard errors (we obtain virtually identical results using logit models). We perform unconditional analysis by regressing the bank account ownership indicator on treatment indicators for Simple I and Simple II, with the reference treatment being the original HRS question. Although treatments are well-balanced in terms of individual characteristics, we also carry out conditional analysis by adding to the model controls for gender, race/ethnicity, age, education, household income, cognitive ability, and financial literacy scores. For this analysis, the sample size is 4,272 observations because of missing values on conditioning variables. We explore heterogeneity in treatment effects by interacting treatment indicators with cognitive ability and financial literacy score quartile indicators. Since we are interested in treatment effects rather than in population-level parameters, we perform unweighted regression analyses (our conclusions are unchanged when applying survey weights). We report estimated OLS coefficients multiplied by 100, interpreting treatment effects as percentage point changes in the rate of bank account ownership.
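With a single binary treatment indicator and no covariates, the OLS coefficient is simply the difference in ownership rates between the treatment group and the reference group, and the HC0 heteroscedasticity-robust standard error reduces to a two-sample formula. A minimal sketch of this unconditional comparison (illustrative only; the paper's conditional estimates add demographic and literacy controls):

```python
import math
from typing import Sequence, Tuple

def treatment_effect(y_ref: Sequence[int], y_treat: Sequence[int]) -> Tuple[float, float]:
    """Unconditional treatment effect for a binary outcome, in percentage
    points as in table 1. With one treatment dummy, the OLS slope equals
    the difference in group means, and the HC0 robust standard error is
    sqrt(p_t(1-p_t)/n_t + p_r(1-p_r)/n_r)."""
    n_r, n_t = len(y_ref), len(y_treat)
    p_r, p_t = sum(y_ref) / n_r, sum(y_treat) / n_t
    coef = p_t - p_r
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_r * (1 - p_r) / n_r)
    return 100 * coef, 100 * se
```

For instance, with hypothetical groups in which 55 percent of reference respondents and 66 percent of treated respondents report ownership, the estimated effect is 11 percentage points.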
5. EMPIRICAL RESULTS
5.1 Estimated Treatment Effects
Figure 1 compares the bank ownership rate in the three treatment groups. Table 1 shows the estimated treatment effects, unconditionally and conditional on individual characteristics. We estimate that removing the expression Aside from anything you have already mentioned and the note Please do not include CDs increases self-reported bank account ownership by 10–11 percentage points.
Figure 1. Bank Account Ownership Rates across Treatments.
Table 1.
Estimated Treatment Effects
| | Coeff. | Std. Err. | 95% CI | N |
|---|---|---|---|---|
| HRS to Simple I | | | | |
| Unconditional | 10.77 | 1.46 | [7.89–13.64] | 4,515 |
| Conditional on individual characteristics | 9.63 | 1.34 | [7.01–12.26] | 4,272 |
| Simple I to Simple II | | | | |
| Unconditional | 6.99 | 1.16 | [4.71–9.26] | 4,515 |
| Conditional on individual characteristics | 7.00 | 1.07 | [4.89–9.11] | 4,272 |
| HRS to Simple II | | | | |
| Unconditional | 17.75 | 1.34 | [15.13–20.38] | 4,515 |
| Conditional on individual characteristics | 16.63 | 1.23 | [14.22–19.05] | 4,272 |
Notes: Dependent variable: binary indicator of bank account ownership. For the Simple II version, the indicator is 1 if the respondent answers affirmatively either the question about checking/savings account ownership or the one about money market fund ownership. OLS estimates (multiplied by 100) with robust standard errors are reported. HRS to Simple I is the estimated coefficient of the Simple I treatment indicator. HRS to Simple II is the estimated coefficient of the Simple II treatment indicator. Simple I to Simple II is the difference between the estimated coefficients of the Simple II and Simple I treatment indicators with Delta Method robust standard errors. Individual characteristics include gender, race/ethnicity, age, education, household income, cognitive ability, and financial literacy.
Separating the checking/savings accounts from money market funds further increases self-reported bank account ownership by 7 percentage points. A possible explanation for the latter finding is that respondents could be unfamiliar with money market funds (among those assigned to Simple II, only about 18 percent report having a money market fund) and more likely to provide negative answers when asked about ownership of checking or savings accounts or money market funds within the same question. Overall, the questionnaire simplification from the original HRS question to the Simple II version increases self-reported bank account ownership by 17–18 percentage points.
5.2 Treatment Effect Heterogeneity
In this section, we investigate heterogeneity by cognitive ability, motivated by two priors. First, when asked about checking or savings accounts after a series of questions about other financial assets, individuals with better cognitive functioning should remember more easily whether they reported on checking or savings accounts previously in the survey. For these respondents, the expression Aside from anything you have already mentioned should be less likely to trigger negative answers. Further, the reduced complexity of the question should make it easier for those with lower levels of cognitive ability to comprehend. Therefore, we expect the effect of moving from the original HRS question to Simple I to be significantly lower for respondents with higher cognitive ability. Second, there is evidence that cognitive ability plays a key role in financial literacy acquisition and retrieval of financial knowledge (Muñoz-Murillo, Álvarez-Franco, and Restrepo-Tobón 2020). Thus, even conditional on our available measure of financial literacy, individuals with higher levels of cognitive ability may be more familiar with different types of financial assets (in our sample, the correlation between cognitive and financial literacy scores is 0.59). Because of that, the note Please do not include CDs in the original HRS question should be less confusing for respondents with higher cognitive ability. This would contribute to making the effect of moving from HRS to Simple I smaller for individuals with better cognitive functioning. More importantly, we would expect the separation of checking/savings accounts from money market funds implemented in Simple II to affect survey outcomes of respondents with higher cognitive ability to a lesser extent than those with lower cognitive ability. For the former, unfamiliarity with money market funds should be less likely to bias self-reports of bank account ownership.
(We also investigate treatment effect heterogeneity by financial literacy conditional on cognitive ability. The results of this analysis are in Online Appendix Table A.3 and Figure A.2.)
To test these hypotheses, we run regressions of bank account ownership on treatment indicators, indicators for quartiles of total cognition score, and the interaction between treatment and total cognition score quartiles, while controlling for demographics and financial literacy. Figure 2 shows estimated treatment effects at each cognition score quartile and indicates whether differences between quartiles are statistically significant (the full set of estimated regression coefficients is provided in Online Appendix Table A.3).
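In a fully saturated specification (treatment dummies interacted with quartile dummies and no other controls), the marginal effect at each quartile is simply the within-quartile difference in ownership rates. A hedged sketch of that simpler computation (the figure 2 estimates additionally adjust for demographics and financial literacy, so they differ from raw subgroup contrasts):

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def quartile_effects(rows: Iterable[Tuple[int, int, int]]) -> Dict[int, float]:
    """Within-quartile treatment effects in percentage points.

    rows: (quartile, treated, owns) tuples with treated and owns coded 0/1.
    Returns {quartile: effect}. This equals the interaction-model marginal
    effect only when no other covariates are included in the regression."""
    tally = defaultdict(lambda: [0, 0, 0, 0])  # [n_ref, sum_ref, n_treat, sum_treat]
    for q, treated, owns in rows:
        cell = tally[q]
        cell[2 * treated] += 1        # group size
        cell[2 * treated + 1] += owns  # count of owners
    return {q: 100 * (c[3] / c[2] - c[1] / c[0]) for q, c in sorted(tally.items())}
```

For example, a quartile in which 50 percent of reference respondents and 80 percent of treated respondents report ownership yields a 30-point effect, mirroring the larger simplification gains documented for the bottom cognition quartile.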
Figure 2.
Treatment Effect Heterogeneity by Cognitive Ability. Note: Dependent variable: binary indicator of bank account ownership. For the Simple II version, the indicator is 1 if the respondent answers affirmatively either the question about checking/savings account ownership or the one about money market fund ownership. We estimate an OLS regression of bank account ownership on treatment indicators interacted with cognitive ability score quartile indicators. Other controls include gender, race/ethnicity, age, education, household income, and financial literacy score quartile indicators. HRS to Simple I is the marginal effect (multiplied by 100) of the Simple I treatment indicator evaluated at different cognition quartiles. HRS to Simple II is the marginal effect (multiplied by 100) of the Simple II treatment indicator evaluated at different cognition quartiles. Simple I to Simple II is the difference (multiplied by 100) between the marginal effects of the Simple II and Simple I treatment indicators evaluated at different cognition quartiles. We test pairwise differences in treatment effects between cognition quartiles. ***p < 0.01, **p < 0.05.
The results reveal striking differences across cognitive ability quartiles. In line with our priors, treatment effects are substantially smaller among individuals with higher cognitive scores than among those with lower cognitive scores. Overall, the questionnaire simplification increases the prevalence of bank account ownership by 26 percentage points in the bottom quartile and by only 7 percentage points in the top quartile. For the most part, differences between quartiles are statistically significant. The cognitive ability gradient is somewhat steeper when moving from Simple I to Simple II than from HRS to Simple I, pointing to differences in question comprehension along the cognitive ability distribution as a driver of treatment effect heterogeneity.
6. DISCUSSION AND CONCLUSION
Using a randomized experiment, we find that simplifying a question on bank account ownership substantially increases the reported rate of ownership. The effect is especially large for those with lower cognitive ability, even when controlling for age, education, and financial literacy. That is, the question simplification is particularly helpful for those who may have greater difficulty parsing and comprehending complex survey questions.
Our experiment elicited only asset ownership, but did not include follow-up questions about asset values asked of owners. Thus, we refrain from generalizing our findings to the full HRS asset questionnaire. Due to our design, we cannot explicitly test a motivated misreporting hypothesis, as this would require the full set of follow-up questions. Further, we are unable to test the effect of mode, but surmise that an interviewer’s presence may help mitigate the effects of complex question wording on responses.
Nonetheless, we conclude that simplification of a question substantially improves reporting of bank account ownership in a self-administered survey, and this is especially beneficial among those with fewer cognitive resources. Our findings suggest that high-quality data from surveys start from asking the right questions, which should be as simple and precise as possible and carefully adapted to the mode of interview. This conclusion is consistent with the questionnaire design literature (see, e.g., Fowler 1995; Schwarz, Park, Knäuper, and Sudman 1999; Tourangeau, Rips, and Rasinski 2001; Schaeffer and Presser 2003; Beatty, Collins, Kaye, Padilla, Willis, et al. 2020; Schaeffer and Dykema 2020). Our results also point to the importance of considering respondents with lower levels of cognitive ability when designing survey questions. This suggests testing question wording among those with lower cognitive ability, especially when designing self-administered surveys.
SUPPLEMENTARY MATERIALS
Supplementary materials are available online at academic.oup.com/jssam.
Contributor Information
Marco Angrisani, Economist at the Center for Economic and Social Research and a Research Assistant Professor at the Economics Department at the University of Southern California, Los Angeles, CA 90089, USA.
Mick P Couper, Research Professor in the Survey Research Center, Institute for Social Research, University of Michigan, P.O. Box 1248, Ann Arbor, MI 48106, USA.
The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The project data described in this paper relies on a survey administered by the Understanding America Study, which is maintained by the Center for Economic and Social Research (CESR) at the University of Southern California.
M.A. received financial support by the National Institute on Aging of the National Institutes of Health under Award Number U01AG054580.
REFERENCES
- Allum N., Conrad F. G., Wenz A. (2018), “Consequences of Mid-Stream Mode-Switching in a Panel Survey,” Survey Research Methods, 12, 43–58.
- American Association for Public Opinion Research (AAPOR) (2016), Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys (9th ed.), Washington, DC: AAPOR.
- Angrisani M., Finley B., Kapteyn A. (2019), “Can Internet Match High-Quality Traditional Surveys? Comparing the Health and Retirement Study and Its Online Version,” Advances in Econometrics, 39, 3–33. https://doi.org/10.1108/S0731-905320190000039001
- Bach R. L., Eckman S. (2018), “Motivated Misreporting in Web Panels,” Journal of Survey Statistics and Methodology, 6, 418–430.
- Beatty P., Collins D., Kaye L., Padilla J., Willis G., Wilmot A. (eds) (2020), Advances in Questionnaire Design, Development, Evaluation, and Testing, Hoboken, NJ: Wiley.
- Burton J., Jäckle A. (2020), “Mode Effects,” ISER: Understanding Society Working Paper Series No. 2020-05, Colchester: University of Essex.
- Eckman S., Kreuter F., Kirchner A., Jäckle A., Tourangeau R., Presser S. (2014), “Assessing the Mechanisms of Misreporting to Filter Questions in Surveys,” Public Opinion Quarterly, 78, 721–733.
- Federal Deposit Insurance Corporation (FDIC) (2018), 2017 FDIC National Survey of Unbanked and Underbanked Households, Washington, DC: Federal Deposit Insurance Corporation.
- Felderer B., Kirchner A., Kreuter F. (2019), “The Effect of Survey Mode on Data Quality: Disentangling Nonresponse and Measurement Error Bias,” Journal of Official Statistics, 35, 93–115.
- Fowler F. J. (1995), Improving Survey Questions: Design and Evaluation, Thousand Oaks, CA: Sage.
- Mather N., Jaffe L. E. (2016), Woodcock–Johnson IV: Reports, Recommendations, and Strategies, Hoboken, NJ: Jossey-Bass.
- Muñoz-Murillo M., Álvarez-Franco P. B., Restrepo-Tobón D. A. (2020), “The Role of Cognitive Abilities on Financial Literacy: New Experimental Evidence,” Journal of Behavioral and Experimental Economics, 84, 101482.
- Ofstedal M. B., Kézdi G., Couper M. P. (2021), “Data Quality and Response Distributions in a Mixed-Mode Survey,” paper under review.
- Schaeffer N. C., Dykema J. (2020), “Advances in the Science of Asking Questions,” Annual Review of Sociology, 46, 37–60.
- Schaeffer N. C., Presser S. (2003), “The Science of Asking Questions,” Annual Review of Sociology, 29, 65–88.
- Schwarz N., Park D., Knäuper B., Sudman S. (1999), Cognition, Aging and Self-Reports, Philadelphia, PA: Psychology Press.
- Tourangeau R., Rips L., Rasinski K. (2001), The Psychology of Survey Response, Cambridge, UK: Cambridge University Press.
- Understanding America Study (2017), UAS117, available at https://uasdata.usc.edu/survey/UAS+117.
- U.S. Census Bureau (2019), Current Population Survey Design and Methodology, Technical Paper 77, available at https://www2.census.gov/programs-surveys/cps/methodology/CPS-Tech-Paper-77.pdf (accessed April 2, 2021).