During the 2000s, rapid adoption of cellular phones and foregoing of landline telephones (i.e., wireless substitution) were observed (1–3). This affected behavioral surveillance by creating a staggering decrease in coverage for surveys that relied on random-digit dialing (RDD) sampling, resulting in biased health estimates (1–5). During this time, innovative sampling approaches that integrated both landlines and cellular phones were developed (6). Recently, some have suggested that the rate of wireless substitution has reached the point at which it is no longer necessary to sample landlines (7, 8). Indeed, the 2011 National Young Adult Health Survey (NYAHS), a national cellular phone–only RDD survey, demonstrated sample quality equal to, and some efficiency advantages over, the Behavioral Risk Factor Surveillance System, which samples both landlines and cellular phones (7).
Emergent communication technologies continue to be rapidly adopted. There has been a dramatic increase in smartphone use, particularly among males, younger adults, and minorities (9). These devices have communication abilities beyond those of traditional cellular phones that give users alternative ways to communicate instantly (e.g., chat functions, social media), and these functions have been adopted rapidly in recent years (10). In the context of telephone survey methodology, perhaps the most worrisome development is technology that allows users to communicate verbally via smartphone applications without using a telephone number. Young adults, men, and Latinos are the most likely to use such features (10). Cellular phones are now used less as “traditional” telephones and more as broad communication and media devices.
Although the impact of this technological advance on RDD sampling of cellular phones is not known, it may affect cellular phone surveys in a manner similar to that in which wireless substitution affected landline surveys. As such, we seek to answer 2 questions. 1) Is the use of cellular phone RDD a stable method of sampling? 2) Has the sample representativeness changed (i.e., deteriorated or improved) for population subgroups?
METHODS
In the present analysis, we used data from waves I and II of the NYAHS. Details on the overall design of the NYAHS have been published previously (7). Briefly, the NYAHS is a national cellular phone–only RDD survey of young adults (18–34 years of age) stratified by Census region. Waves I and II were conducted in 2011 and 2013, respectively. The Institutional Review Board at Rutgers Biomedical Health Sciences approved the procedures for the NYAHS.
The base-weighted distributions of subjects’ demographic characteristics were compared with those in the Census data (11). The base weight adjusts for design factors, which isolates the potential effects of coverage and nonresponse on sample quality. The mean absolute deviation across all demographic subgroups was calculated as a summary measure of sample quality. The American Association for Public Opinion Research's response rate 4, cooperation rates 2 and 4, and refusal rate 2 were calculated from the numbers of completed interviews, partially completed interviews, and subjects with either known or unknown eligibility and were compared between waves I and II. The specifics of these calculations have been described in detail previously (7, 12) and are also shown in Table 1.
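As an illustration, the mean absolute deviation can be computed directly from the base-weighted percentages and the corresponding 2010 Census benchmarks reported in Table 1. The sketch below, in Python, uses the wave I and wave II weighted values, with subgroups in Table 1 order:

```python
# Mean absolute deviation (MAD) between weighted sample percentages and
# Census benchmarks, the sample-quality summary measure used in this analysis.
# Values are the base-weighted % and 2010 Census % reported in Table 1.

def mean_absolute_deviation(sample_pct, census_pct):
    """Average of |sample % - Census %| across demographic subgroups."""
    assert len(sample_pct) == len(census_pct)
    return sum(abs(s - c) for s, c in zip(sample_pct, census_pct)) / len(sample_pct)

# Subgroup order: male, female, 18-21, 22-24, 25-29, 30-34, white, black,
# Latino, Asian, other (all non-Latino except Latino).
census = [50.6, 49.4, 25.0, 17.7, 29.4, 27.8, 57.5, 13.4, 20.3, 5.5, 3.3]
wave1  = [48.0, 52.0, 28.8, 20.5, 30.0, 20.7, 57.7, 12.7, 13.7, 6.9, 7.7]
wave2  = [48.3, 51.8, 28.5, 19.6, 28.4, 23.4, 55.0, 14.7, 15.8, 9.1, 3.2]

print(round(mean_absolute_deviation(wave1, census), 1))  # 3.0
print(round(mean_absolute_deviation(wave2, census), 1))  # 2.5
```

Reassuringly, this reproduces the base-weighted values of 3.0 (wave I) and 2.5 (wave II) shown in Table 1.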
Table 1. Demographic Characteristics Compared With 2010 Census Data, Call Dispositions, and Response Rates, National Young Adult Health Survey, Waves I and II

| Demographic Characteristic | No. (Wave I) | Unweighted % (Wave I) | Weighted % (Wave I) | No. (Wave II) | Unweighted % (Wave II) | Weighted % (Wave II) | 2010 Census % |
|---|---|---|---|---|---|---|---|
| Sex |  |  |  |  |  |  |  |
| Male | 1,372 | 47.8 | 48.0 | 1,509 | 48.8 | 48.3 | 50.6 |
| Female | 1,499 | 52.2 | 52.0 | 1,586 | 51.2 | 51.8 | 49.4 |
| Age group, years |  |  |  |  |  |  |  |
| 18–21 | 785 | 27.3 | 28.8 | 831 | 26.8 | 28.5 | 25.0 |
| 22–24 | 589 | 20.5 | 20.5 | 603 | 19.5 | 19.6 | 17.7 |
| 25–29 | 873 | 30.4 | 30.0 | 903 | 29.2 | 28.4 | 29.4 |
| 30–34 | 624 | 21.7 | 20.7 | 758 | 24.5 | 23.4 | 27.8 |
| Race/ethnicity^a |  |  |  |  |  |  |  |
| White, non-Latino | 1,691 | 58.9 | 57.7 | 1,741 | 56.3 | 55.0 | 57.5 |
| Black, non-Latino | 334 | 11.6 | 12.7 | 417 | 13.5 | 14.7 | 13.4 |
| Latino | 391 | 13.6 | 13.7 | 480 | 15.5 | 15.8 | 20.3 |
| Asian, non-Latino | 198 | 6.9 | 6.9 | 280 | 9.1 | 9.1 | 5.5 |
| Other, non-Latino | 218 | 7.6 | 7.7 | 109 | 3.5 | 3.2 | 3.3 |
| Mean absolute deviation |  | 3.0 | 3.0 |  | 1.9 | 2.5 |  |
| Dispositions |  |  |  |  |  |  |  |
| Complete | 2,874 | 1.6 |  | 3,095 | 1.5 |  |  |
| Eligible and contacted but not interviewed | 2,683 | 1.5 |  | 3,259 | 1.6 |  |  |
| Eligibility undetermined | 95,040 | 53.6 |  | 117,765 | 57.2 |  |  |
| Not eligible | 76,779 | 43.3 |  | 81,613 | 39.7 |  |  |
| Total | 177,376 |  |  | 205,732 |  |  |  |
| Response rates^b |  |  |  |  |  |  |  |
| Response rate 4^c |  | 24.0 |  |  | 20.8 |  |  |
| Cooperation rate 2^d |  | 51.7 |  |  | 48.1 |  |  |
| Cooperation rate 4^e |  | 64.2 |  |  | 61.7 |  |  |
| Refusal rate 2^f |  | 13.4 |  |  | 12.9 |  |  |
a The total sums to less than 100 because of nonresponse to 1 or more of the race or ethnicity questions.
b Response rates were based on categories from the American Association for Public Opinion Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys (12).
c Response rate 4 was calculated as the number of completed interviews plus the number of partially completed interviews divided by the total number of known eligible subjects plus an estimate of the number of eligible subjects among those whose eligibility was undetermined.
d Cooperation rate 2 was calculated as the number of completed interviews plus the number of partially completed interviews divided by the total number of contacts known to be eligible.
e Cooperation rate 4 was calculated the same way as cooperation rate 2, but it excludes persons who were eligible but unable to participate because of poor health, communication problems, and other barriers.
f Refusal rate 2 was calculated as the number of people who refused to participate plus breakoffs divided by the number of both interviewed and noninterviewed contacts known to be eligible plus an estimate of the number of eligible subjects among those whose eligibility was undetermined.
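The footnote definitions above can be summarized in code. The Python sketch below is illustrative only: the disposition counts and the eligibility estimate `e` are hypothetical, not the NYAHS values, and the functions are simplified renderings of the AAPOR definitions (12):

```python
# Simplified AAPOR-style outcome rate formulas, following the table footnotes.
# I = completed interviews, P = partially completed interviews,
# R = refusals and breakoffs, NC = non-contacts, O = other eligible
# non-interviews, U = cases with undetermined eligibility,
# e = estimated share of eligible cases among U.
# All counts below are hypothetical, for illustration only.

def response_rate_4(I, P, R, NC, O, U, e):
    """RR4: interviews (full + partial) over estimated eligible cases."""
    return (I + P) / ((I + P) + (R + NC + O) + e * U)

def cooperation_rate_2(I, P, R, O):
    """COOP2: interviews over all contacted, known-eligible cases."""
    return (I + P) / ((I + P) + R + O)

def cooperation_rate_4(I, P, R):
    """COOP4: like COOP2, but excluding cases unable to participate."""
    return (I + P) / ((I + P) + R)

def refusal_rate_2(I, P, R, NC, O, U, e):
    """REF2: refusals over estimated eligible cases (same denominator as RR4)."""
    return R / ((I + P) + (R + NC + O) + e * U)

# Hypothetical dispositions; note how heavily RR4 depends on e * U.
rr4 = response_rate_4(I=2800, P=100, R=400, NC=150, O=80, U=95000, e=0.03)
print(round(rr4, 3))
```

Because cases with undetermined eligibility vastly outnumber known-eligible cases in cellular phone RDD, the eligibility estimate `e` dominates the response rate 4 denominator, which is why the rise in undetermined-eligibility calls between waves matters for interpretation.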
RESULTS
Table 1 presents the distribution of demographic characteristics and response rates for waves I and II. The base-weighted mean absolute deviation was 3.0 for wave I and 2.5 for wave II. In wave I, the most notable deviations from the 2010 Census data were among 18–21-year-old subjects, 30–34-year-old subjects, Latinos, and persons of other non-Latino races/ethnicities; in wave II, the most notable deviations were among 18–21-year-old subjects, 30–34-year-old subjects, Latinos, and non-Latino Asians.
The proportions of calls that reached eligible subjects who completed interviews and those that reached eligible subjects who did not were comparable between the 2 study waves. The proportion of calls with undetermined eligibility was slightly higher, and the proportion of ineligible calls slightly lower, in wave II relative to wave I. Values for response rate 4, cooperation rates 2 and 4, and refusal rate 2 were comparable between waves.
DISCUSSION
In the present analysis, we found that RDD of only cellular phones produced an unbiased sample across a number of demographic characteristics, as demonstrated by the base-weighted comparisons with the Census data. This demonstrates that rapidly changing cellular phone technology and behaviors, in particular the 3-fold increase in video calling over just a 2-year period coupled with increased use of texting and e-mailing through phones (10), have not negatively impacted sample quality. Had the rapid adoption of these technologies impacted sample quality, we would expect the effect to be especially marked among young adults, who are most likely to use these functions (10).
The base-weighted demographic distributions showed particularly close coverage among white and black non-Latinos. Although the survey underrepresented Latinos and 30–34-year-olds in both waves, coverage improved from wave I to wave II. Moreover, these deviations are modest considering the vast underrepresentation of these groups among respondents in landline RDD surveys (6), and poststratification weighting that incorporates race/ethnicity and age, as is typically done in RDD health surveys, can adjust for nonresponse and coverage. This corrects for under- and overrepresentation and helps ensure unbiased health estimates among subgroups.
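To make the weighting step concrete, here is a minimal, hypothetical sketch of cell-based poststratification in Python. The two-cell breakdown and the base weight of 1.0 are illustrative; the Latino shares are taken from Table 1 (wave I weighted % vs. 2010 Census %):

```python
# Hedged sketch of simple cell-based poststratification: each respondent's
# base weight is multiplied by the ratio of the Census share to the sample's
# base-weighted share for that respondent's demographic cell.
# The two-cell partition is a deliberate oversimplification for illustration.

census_share = {"latino": 0.203, "non_latino": 0.797}  # population targets
sample_share = {"latino": 0.137, "non_latino": 0.863}  # base-weighted sample

adjustment = {cell: census_share[cell] / sample_share[cell]
              for cell in census_share}

# A Latino respondent's weight is inflated to offset underrepresentation,
# while overrepresented cells are deflated (adjustment factor below 1).
base_weight = 1.0
post_weight = base_weight * adjustment["latino"]
print(round(adjustment["latino"], 2))  # about 1.48
```

In practice, RDD health surveys typically use finer cells (or raking across several margins, including age), but the correction logic is the same.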
The comparison of call outcomes showed equal proportions of calls that identified eligible respondents and that produced completed interviews across waves. The largest difference was the higher proportion of calls with undetermined eligibility in wave II compared with wave I. We believe this is an artifact of an embedded call-efficiency experiment in wave II, which tested 2 emerging “list-assisted” sampling approaches for cellular phones and found that call efficiencies and response rates varied across conditions (13). Thus, the increased use of alternative communication technologies does not appear to have manifested in poorer call outcomes.
A limitation of the present analysis is that the timespan over which we evaluated the stability of sample quality was limited to 2 years. However, as we noted above, this was a period of impressively rapid adoption of smartphones and their alternative communication features. In addition, although we compared distributions across a number of demographic characteristics of interest to public health and survey methodology, changes may have occurred for other characteristics that we did not measure.
In summary, we found that RDD of only cellular phones remains a feasible methodology for collecting health data from young adults through the age of 34 years, despite the rapidly changing mobile phone environment that may pose threats to sample quality. However, continued monitoring of the quality of RDD surveys as mobile technology and cellular phone use behaviors continue to evolve is warranted.
Acknowledgments
This work was supported by the National Institutes of Health (grant R01CA149705).
The views expressed in the article do not necessarily represent the views of the National Institutes of Health, Rutgers, The State University of New Jersey, or ICF International, Inc.
Conflict of interest: none declared.
References
- 1. Blumberg SJ, Luke JV. Reevaluating the need for concern regarding noncoverage bias in landline surveys. Am J Public Health. 2009;99(10):1806–1810. doi: 10.2105/AJPH.2008.152835.
- 2. Blumberg SJ, Luke JV, Cynamon ML. Telephone coverage and health survey estimates: evaluating the need for concern about wireless substitution. Am J Public Health. 2006;96(5):926–931. doi: 10.2105/AJPH.2004.057885.
- 3. Blumberg SJ, Luke JV. Wireless Substitution: Early Release of Estimates From the National Health Interview Survey, July–December 2013. Hyattsville, MD: National Center for Health Statistics; 2014. http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201407.pdf. Published July 2014. Accessed September 4, 2014.
- 4. Blumberg SJ, Luke JV, Ganesh N, et al. Wireless substitution: state-level estimates from the National Health Interview Survey, January 2007–June 2010. Natl Health Stat Report. 2011;(39):1–26, 28.
- 5. Delnevo CD, Gundersen DA, Hagman BT. Declining estimated prevalence of alcohol drinking and smoking among young adults nationally: artifacts of sample undercoverage? Am J Epidemiol. 2008;167(1):15–19. doi: 10.1093/aje/kwm313.
- 6. American Association for Public Opinion Research Cell Phone Task Force. New Considerations for Survey Researchers When Planning and Conducting RDD Telephone Surveys in the U.S. With Respondents Reached via Cell Phone Numbers. Deerfield, IL: American Association for Public Opinion Research; 2010. http://www.aapor.org/AM/Template.cfm?Section=Cell_Phone_Task_Force_Report&Template=/CM/ContentDisplay.cfm&ContentID=3189. Published October 28, 2010. Accessed September 21, 2012.
- 7. Gundersen DA, ZuWallack RS, Dayton J, et al. Assessing the feasibility and sample quality of a national random-digit dialing cellular phone survey of young adults. Am J Epidemiol. 2014;179(1):39–47. doi: 10.1093/aje/kwt226.
- 8. Peytchev A, Neely B. RDD telephone surveys: toward a single-frame cell-phone design. Public Opin Q. 2013;77(1):283–304.
- 9. Pew Research Center Internet and American Life Project. Pew research internet project: device ownership over time. http://www.pewinternet.org/data-trend/mobile/device-ownership/. Accessed July 2, 2014.
- 10. Duggan M. Cell Phone Activities 2013. Washington, DC: Pew Research Center Internet and American Life Project; 2013. http://www.pewinternet.org/files/old-media//Files/Reports/2013/PIP_Cell%20Phone%20Activities%20May%202013.pdf. Published September 16, 2013. Accessed July 2, 2014.
- 11. Bureau of the Census, US Department of Commerce. 2010 Census Summary File 1. Washington, DC: Bureau of the Census; 2010. http://www2.census.gov/census_2010/04-Summary_File_1/. Updated August 25, 2011. Accessed January 7, 2013.
- 12. American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 7th ed. Deerfield, IL: American Association for Public Opinion Research; 2011. http://www.aapor.org/AM/Template.cfm?Section=Standard_Definitions2&Template=/CM/ContentDisplay.cfm&ContentID=3156. Accessed September 21, 2012.
- 13. Robb WH, Brown PK, Mark A, et al. Strategies for increasing efficiency of cellular telephone samples [abstract]. Presented at the 69th American Association for Public Opinion Research Annual Conference, Anaheim, CA, May 15–18, 2014.