Author manuscript; available in PMC: 2014 Dec 11.
Published in final edited form as: Ann Am Acad Pol Soc Sci. 2013 Jan;645(1):6–22. doi: 10.1177/0002716212463314

New Challenges to Social Measurement

Douglas S Massey 1, Roger Tourangeau 2
PMCID: PMC4263208  NIHMSID: NIHMS578338  PMID: 25506081

Americans frequently encounter the term “survey” in the mass media: whenever an author, reporter, commentator, or blogger refers to a poll or survey of some group of people, such as registered voters, likely consumers, or television viewers. Newspaper stories, magazine articles, radio programs, television broadcasts, and internet blogs are filled with data derived from surveys of one sort or another. Ultimately a survey is nothing more than a set of questions asked of some set of respondents drawn from a larger population. If the subset is chosen via standard sampling procedures from a reasonably complete list of the individuals who make up the population, it constitutes a probability sample, and averages and other statistics derived from it can be assumed to represent conditions in the population accurately, subject only to random variation.

A simple random sample, the sample design described most often in elementary statistics books, is one example of a larger family of sampling designs---probability samples---in which the probability of selection for each sample member can be determined. In the case of simple random sampling, the selection probabilities are equal for all individuals in the sample. More complicated designs may under- or over-sample some segment of the population to create unequal selection probabilities; but as long as the likelihood of selection for each sample member is known, it constitutes a probability sample and can be used, with appropriate weighting, to represent conditions in the population as a whole.
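The logic of design weighting can be sketched in a few lines of code. This is a hypothetical illustration (the population, group means, and selection probabilities are invented, not taken from any survey discussed here): one group is deliberately oversampled, and weighting each case by the inverse of its known selection probability recovers the population mean.

```python
import random

random.seed(42)

# Hypothetical population: group A (80% of people, mean ~10) and
# group B (20% of people, mean ~20).
population = [("A", random.gauss(10, 2)) for _ in range(8000)] + \
             [("B", random.gauss(20, 2)) for _ in range(2000)]

# Oversample group B: selection probabilities differ by group but are KNOWN,
# which is what makes this a probability sample.
p_select = {"A": 0.05, "B": 0.25}
sample = [(g, y) for g, y in population if random.random() < p_select[g]]

# The unweighted mean is pulled toward the oversampled group;
# weighting each case by 1 / p_select corrects for unequal selection.
unweighted = sum(y for _, y in sample) / len(sample)
weighted = (sum(y / p_select[g] for g, y in sample) /
            sum(1 / p_select[g] for g, _ in sample))
true_mean = sum(y for _, y in population) / len(population)

print(f"true mean       = {true_mean:.2f}")
print(f"unweighted mean = {unweighted:.2f}")  # biased upward by the oversample
print(f"weighted mean   = {weighted:.2f}")    # close to the true mean
```

The unweighted mean overstates the population value because group B is over-represented in the sample; the weighted estimator corrects this precisely because the selection probabilities are known, the defining feature of a probability sample.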

Probability surveys are thus considered to be “representative,” which means that over a large number of replications sample estimates would average out to the corresponding population values. Nonrandom, non-probability samples are not considered to be representative, and they carry a serious risk of bias, which means that they will systematically over- or underestimate population values no matter how many sample replications are done. The direction and degree of the bias depend on the specific way in which sample members are selected into the survey. Surveys of magazine readers who voluntarily respond to a query from the publisher, for example, cannot be assumed to be representative, even of the magazine’s readers, because people tend to self-select into the pool of willing respondents based on personal and situational circumstances that are often related to the topic under study (such as interest in the subject of the survey). More formally, when the probabilities of responding are related to the survey values, bias results.
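The closing point, that bias results when response probabilities are related to the survey values, can be shown with a small simulation (all numbers here are hypothetical): people with more interest in a topic are more likely to volunteer a response, so the respondent mean overstates the population mean regardless of how many people reply.

```python
import random

random.seed(1)

# Hypothetical population: each person's "interest" score in some topic.
population = [random.gauss(50, 10) for _ in range(100_000)]

def responds(interest):
    # Response propensity rises with interest in the topic (an invented rule):
    # the probability of replying is tied to the very value being measured.
    return random.random() < min(0.9, max(0.02, (interest - 30) / 50))

respondents = [y for y in population if responds(y)]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)
print(f"population mean : {pop_mean:.1f}")
print(f"respondent mean : {resp_mean:.1f}")  # biased upward by self-selection
```

No amount of additional volunteers fixes this: the gap reflects who chooses to answer, not how many do.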

Reputable media outlets generally indicate whether or not a survey is representative; but much of the data routinely bandied about in the media and on the internet are not based on representative samples and are of dubious use in making accurate statements about the populations they purport to represent. Consumers of survey data should always be attuned to the underlying sample design and how individuals were selected into the survey. Departures from probability selection methods always threaten the representativeness of findings.

Departures from probability sampling may occur even when researchers are trying hard to achieve it. The most common departure arises from non-response, which occurs whenever people chosen for a sample cannot or will not provide information to a survey. Respondents may be difficult to locate, in which case the factors that make them hard to find constitute potential sources of bias in drawing conclusions, particularly if these factors are strongly related to the outcome of interest. Even if potential sample members are easy to contact, they may select themselves out of the survey by refusing to participate, in which case the factors that prompted the refusal constitute potential sources of bias, with the degree again depending on their relation to the topic of interest. An outright refusal to be surveyed is known as unit non-response. When a respondent refuses to answer specific questions, it is known as item non-response.

If non-response to a survey were a random process, of course, there would be no bias; but we generally cannot assume this to be the case. On the contrary, people who cannot be located or who refuse to participate tend to be systematically different from those who can be located and are willing to participate. It is safer to assume that people end up being non-respondents for decidedly non-random reasons, which are very often related to the topic under study. If the relative number of non-respondents is small, not much bias will be introduced into sample estimates even if the non-respondents are quite different from respondents---because of their small numbers they would carry little weight in determining the estimate even if they were included. If the share of non-respondents is large, however, considerable potential for bias exists, assuming they are indeed different from respondents in ways related to the outcome of interest. In general, the higher the rate of non-response, the greater the potential for bias in estimating population values from a survey.
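The intuition in this paragraph matches a standard textbook decomposition: in the deterministic case, the bias of the respondent mean equals the non-response rate times the gap between respondents and non-respondents, so bias shrinks with either a smaller non-response rate or a smaller gap. A sketch with made-up numbers:

```python
# Hypothetical numbers: 70% of a population of 1,000 respond (outcome 12.0)
# and 30% do not (outcome 20.0). The bias of the respondent-only mean equals
# the non-response rate times the respondent-nonrespondent gap.
respondents = [12.0] * 700
nonrespondents = [20.0] * 300

everyone = respondents + nonrespondents
full_mean = sum(everyone) / len(everyone)            # mean over the whole population
resp_mean = sum(respondents) / len(respondents)      # what the survey would report

nonresp_rate = len(nonrespondents) / len(everyone)   # 0.3
gap = resp_mean - sum(nonrespondents) / len(nonrespondents)

bias = resp_mean - full_mean
print(f"{bias:.2f} {nonresp_rate * gap:.2f}")  # both -2.40: bias = rate x gap
```

Halving either factor, recruiting more non-respondents or studying an outcome on which the two groups differ less, halves the bias, which is why high non-response is worrisome mainly when it is related to the outcome of interest.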

Researchers have always worried about potential biases stemming from non-response, of course, but lately these worries have grown more acute as rates of refusal and non-contact have risen despite heightened efforts at recruitment and stronger inducements to participate. The decline in response rates has been particularly marked in the private sector, which commissions a plethora of polls and surveys covering everything from political preferences to perceptions of toothpaste. But response rates have been falling in the public sector as well, which sponsors a wide variety of surveys to generate key federal and state statistics. Although most people are aware that opinion polls, attitude surveys, and the like are based on samples, they are less aware that most publicly reported statistics themselves come from samples rather than tabulations of administrative data or actual population counts.

For example, almost all of what people commonly refer to as “census data” does not come from an actual enumeration of the U.S. population, but from a sample survey that is administered along with the census. The 2010 U.S. Census itself contained just four questions about the household and seven about each household member. These eleven questions together made up the census “short form.” Prior to 2010, all other “census data” came from a “long form” that was administered to only around a fifth of all U.S. households (Anderson 1988). The 2010 Census did not include a long form; the data formerly gathered on the long form are now compiled from the American Community Survey, which is based on a sample of around two million households. Moving forward, the ACS will replace the census long form and will be administered annually rather than decennially (Groves 2010).

In addition to the census, many other federal statistics also emanate from sample surveys rather than actual counts. Examples include the Current Population Survey (which produces monthly labor force statistics), the National Survey of Family Growth (birth and childbearing statistics), the National Health Interview Survey (health statistics), the National Assessment of Educational Progress (data on educational achievement), the Survey of Income and Program Participation (welfare and food stamp usage), and the American Housing Survey (housing statistics), just to name a few of the more prominent sources of federal data. Not only does the federal government administer its own surveys for purposes of statistical tabulation and publication; it also supports other surveys indirectly through grants, including the New Immigrant Survey, the Panel Study of Income Dynamics, the American National Election Study, the General Social Survey, the Fragile Families Study, the National Longitudinal Study of Adolescent Health, and the Health and Retirement Study.

Obviously, then, a decline in survey response rates is not simply a threat to the validity of political and consumer polling or the integrity of social science research, but a fundamental threat to the federal statistical system itself (Prewitt 2010). In order to explore the depth and nature of the nonresponse challenge to social measurement, understand its causes, discern the likely size and direction of the resulting biases, and consider potential solutions to the problem, we invited leading survey researchers to Washington, D.C. for a one-day workshop. With support from the Committee on National Statistics of the National Research Council and the Russell Sage Foundation, we compiled and edited the papers presented at this workshop and publish them here as a resource for social scientists, statisticians, policy makers, and the public. In this introduction, we set the stage by briefly reviewing the history of survey research and outlining some of the reasons that have been offered to account for the decline in response rates.

THE RISE OF SOCIAL SURVEYS

The idea of gathering information from a subset of people to make statements about a larger population has a long history. The roots of contemporary survey research lie in reform movements that coalesced in Britain and the United States in response to widespread urban poverty created by the industrial revolution (Converse 2009). The National Association for the Promotion of Social Science was founded in London in 1857 to encourage the compilation and dissemination of data about British social conditions through “social surveys”; but its approach was less scientific than moralistic, and the organization had little success beyond organizing a few conferences and lobbying legislators for poor relief before it expired in 1886 (Bulmer, Bales, and Sklar 1991).

Nonetheless, social entrepreneurs on both sides of the Atlantic continued to survey households living in slum neighborhoods to generalize about the status of the urban poor. An important early example was Charles Booth’s systematic survey of poor families in London’s East End, which he published in 17 volumes between 1889 and 1903 (Bales 1991). The United States counterpart was the social survey of poor Chicago neighborhoods done by Florence Kelley in concert with Jane Addams’ settlement house movement, published in 1895 as the Hull House Maps and Papers: A Presentation of Nationalities and Wages in a Congested District of Chicago (see Sklar 1991). The greatest social survey of the epoch was done in Philadelphia by W.E.B. DuBois, perhaps the first trained social scientist to carry out a social survey. His monumental study The Philadelphia Negro was published in 1899 (see Bulmer 1991a) and remains in print (DuBois 2010).

These landmark surveys served as a model and inspired a host of subsequent surveys on diverse topics in different cities conducted over the first two decades of the 20th century. In Britain the number of social surveys reached a plateau of around ten per year on the eve of World War One, but surged to 45 in the late 1920s (Bulmer, Bales, and Sklar 1991). In the United States, the Russell Sage Foundation, which was established in 1907 to promote “the improvement of social and living conditions,” created a Department of Social Surveys in 1912 to develop survey research methods and spread the use of surveys in social research (Bulmer 1991b; Yeo 1991). Beginning with its Pittsburgh Survey of 1908, the number of social surveys grew rapidly in the United States to peak at 200 in 1922 before declining as the Great Depression approached (Bulmer, Bales, and Sklar 1991).

During the 1920s, an increasingly sharp line emerged between surveys done for social reform and surveys done to promote scientific research, and, in the ensuing decade, social research shifted from a local enterprise done by public-spirited citizens to a national initiative run by scientific professionals (Bulmer 1991b). In 1924, a Statistical Laboratory was founded at Iowa State College and investigators began to develop the theoretical foundations and practical methods of probability sampling (Converse 2009). In 1933, the Roosevelt Administration established a joint Committee on Government Statistics and Information Services that pulled together not only statisticians but also social demographers such as Samuel Stouffer, Philip Hauser, and William Ogburn to conduct research on statistical sampling.

By 1934, the committee had field tested a Trial Census of Unemployment to establish the practical feasibility of household probability sampling. In independent work in the same year, Neyman (1934) mathematically proved that random selection, stratified sampling, cluster sampling, and multistage sampling yielded unbiased estimates whose precision could itself be estimated from the sample. Finally, in 1938, Philip Hauser moved to the Census Bureau, where in 1940 he spearheaded the addition of the long form as a regular component of the decennial U.S. Census. In 1942 the Census Bureau began administering the Current Population Survey to a representative sample of the non-institutionalized U.S. population to generate monthly statistics on employment and earnings as well as annual population estimates (Bregger 1984).

Around the same time, surveys were becoming common in the private sector. In 1941, Harry Field founded the National Opinion Research Center (NORC) to promote the use of surveys in social science research (National Opinion Research Center 2010). A few years later, in 1946, Rensis Likert and colleagues founded the Survey Research Center at the University of Michigan (Converse 2009). George Gallup founded the first non-academic polling organization in 1958, followed by Lou Harris in 1963 (Igo 2007).

Since the 1960s, polls and surveys have become a regular part of the national scene, routinely used not only by researchers but also by politicians, pundits, reporters, marketers, planners, government officials, and members of the public to comment on social trends and changing patterns in American society. In addition to the Gallup and Harris firms, prominent survey research organizations now include Abt Associates, Research Triangle International (RTI), Zogby Associates, Westat, the Nielsen Company, and the Pew Research Center. Despite the proliferation of private pollsters and survey firms, however, the largest single sponsor of surveys is still the federal government, either directly through organizations such as the Census Bureau and the National Center for Health Statistics or indirectly through funding agencies such as the National Institutes of Health and the National Science Foundation.

As noted earlier, federal surveys---questionnaires fielded by government agencies at regular intervals to representative samples of respondents---provide the basis for many official statistics in the United States (Abraham and Nixon 2010). Surveys that interview each sample only once, whether fielded at a single point in time or at regular intervals, are known as cross-sectional surveys. Although such surveys continue to be the workhorse for public and private statistics, in recent decades perhaps the greatest expansion in survey data has come from panel surveys, in which representative samples of households or individuals are followed over time and re-interviewed at regular intervals to create a longitudinal data file. The first nationally representative longitudinal survey was the National Longitudinal Survey of Young and Older Men, which was fielded between 1966 and 1981. It was quickly followed by the Panel Study of Income Dynamics, which was launched in 1968 and continues today, and the National Longitudinal Survey of Young and Mature Women, which ran from 1968 to 2003.

With improvements in data processing capacity and the development of new statistical methods for longitudinal and multilevel analysis in the 1980s and 1990s (see Box-Steffensmeier and Jones 2004), panel studies grew in number to include the National Longitudinal Study of the High School Class of 1972; the 1979 and 1997 panels of the National Longitudinal Survey of Youth (NLSY); the NLSY’s Children and Young Adult Survey (which surveyed the biological children of women in the 1979 NLSY); the High School and Beyond Survey (1980); the National Education Longitudinal Study (1988); the Health and Retirement Study (1992); the Adolescent Health Survey (1994); the Medical Expenditure Panel Study (1996); the 1998 and 2010 cohorts of the Early Childhood Longitudinal Study; the 2002 and 2009 panels of the Education Longitudinal Study; the New Immigrant Survey (2003); and the High School Longitudinal Study of 2009, just to name a few of the more prominent sources of longitudinal data (Abraham and Nixon 2010).

As the foregoing examples indicate, since the 1960s, both cross-sectional and longitudinal surveys have risen in frequency and availability to become the most important source of data about social and economic conditions in the United States and the dynamics of change within American society, not to mention the bedrock of social science research and federal statistics. Thus, any threat to the viability of social surveys constitutes a serious threat not only to social science research but also to many federal statistical series. It is for these reasons that the rise in survey nonresponse is viewed with such alarm in professional quarters.

SURVEYS AND SOCIAL CHANGE

Surveys are social interactions, and, like all interactions between people, they are embedded within social structures and guided by shared cultural understandings. On any given survey, the interaction ultimately is between the researcher (the person or persons who designed the questionnaire) and the respondent (the individual who answers it); but in most cases the interaction is indirect, being mediated by an interviewer or undertaken by means of a printed document. Interviewer-mediated surveys may be done in person or over the phone. In the past, printed surveys were usually mailed out, filled in, and then mailed back; but in the present day written questionnaires are increasingly administered online. Online surveys eliminate the need for data entry, of course. Yet even interviewer-administered surveys now typically arrange for the interviewer to enter responses directly onto a laptop computer rather than writing them down, yielding what is known as a Computer Assisted Personal Interview (CAPI) or a Computer Assisted Telephone Interview (CATI).

Given that survey interactions are always embedded within social structures and guided by shared understandings, they are necessarily influenced by broader changes in social structure and values. As the examples of CAPI and CATI indicate, surveys are also affected by changes in technology. In recent years, both society and technology have tended to evolve in ways that are thought to make the administration of surveys more difficult (Tourangeau 2004). Although some social changes have improved the climate for survey research (greater individual openness, fewer personal inhibitions, a declining number of taboo subjects), other social changes discussed below have made surveys more difficult. Likewise, although some technological changes have made the job of the survey researcher easier (online questionnaires, rapid data processing, cheap data storage, CAPI and CATI), other technological trends also discussed below have made the researcher’s job more difficult. In general, the net effect of recent social and technological changes in developed societies has been to undermine the ability of researchers to conduct surveys, as evidenced by declining response rates.

The heyday of social surveys probably came in the 1970s and 1980s. By then, sampling theory and design had been fully worked out; experiments in question wording and advances in cognitive science were informing the design of questionnaires; and the advent of mainframe computers made the processing of large-scale databases cheap and easy. More importantly, under the societal conditions that prevailed during that period, when interviewers called a household by phone or appeared at the front door to solicit respondents, they were likely to find a cooperative household member at home, usually someone who was not only willing but eager to answer an interviewer’s questions. In the decades since then, however, American society has changed in ways that make cooperative and willing respondents increasingly difficult to encounter.

Changing Family Structure

Household-based surveys are obviously facilitated to the extent that eligible respondents are found at home when an interviewer calls or stops by. The family pattern that prevailed in the United States during the years after the Second World War essentially maximized this likelihood. The return to full employment after the war and steadily rising earnings through the mid-1970s meant that most men had the economic means to support a family, leading to rising rates of marriage, falling ages at marriage and childbearing, and a consequent surge in fertility that has become known as the Baby Boom (Croker and Dychtwald 2007). Older women who had postponed marriage and childbearing during the War and Depression sought to make up for lost time at the same time that younger women went right into early marriage and childbearing, leading to a “piling up” of births from 1946 to 1966 that produced the largest series of birth cohorts in American history (Rindfuss and Sweet 1977).

During the Baby Boom, women generally left the labor force at the birth of the first child and remained at home until this child and the ones that followed had grown up, or at least had all entered school. As recently as 1970, two thirds of all households were composed of a married couple and 60% of these had children (Massey 2005). Those married couples without children were generally on their way to having them or had just finished raising them. Only 5% of households were headed by a single mother, and non-family households---those containing single individuals or unrelated persons---comprised just 18% of the total. If an interviewer stopped by at almost any hour of the day or evening, he or she would be quite likely to find someone at home. Women were present most of the day and most men came straight home to their families at the end of the workday.

After 1970 this stable pattern of social organization began to give way to a new diversity of household structures (Bianchi, Robinson, and Milkie 2006). Fertility rates fell, marriage was increasingly postponed, and ages of childbearing increased. More and more people lived outside of nuclear family households as single-person households, multi-person households consisting of unrelated individuals, and cohabiting couples proliferated, while family households more and more consisted of single parents. By 2000, less than half of all households consisted of a married couple and less than half of these contained children, while one third of households were nonfamily households (Massey 2005). At any given time of the day, households without children, single-person households, and those with two or more unmarried adults are much more likely to be empty than a household occupied by a married family with children, making contacting potential respondents and securing their cooperation that much more difficult.

Changing Work Habits

These shifts in American household structure were accompanied by equally significant changes in U.S. work habits. After 1970, male employment became more precarious as unemployment rates rose and wages stagnated, while women poured into the labor force to bolster sagging household incomes, pursue independent careers, and ensure support for themselves during an era when the institution of marriage was seen as crumbling (McLanahan 2004). Among the employed, hours of work increased for both men and women and average wages lagged, especially for men, meaning most people were working harder just to maintain their economic position. After 1996, poor single mothers were increasingly pushed into the labor force by welfare reform legislation (Grogger and Karoly 2005), and at the same time progressive suburbanization increased commuting times throughout the economy (Duany et al. 2000). As Americans spent more and more time commuting to and engaging in paid labor, and less time at home with children or on their own, the odds that an interviewer might find a respondent at home steadily fell. In addition, the increased burden of working and commuting may have sapped Americans’ willingness to be surveyed.

Mass Immigration and Rising Inequality

Accompanying the transformation of family and work in the United States was a return to mass immigration after a long hiatus that began in the 1920s. Mass immigration was a core feature of American life from the birth of the republic through the first decades of the 20th century, and the inflow crested at around 1.2 million persons per year just prior to the First World War. Thereafter, it was curtailed first by the outbreak of hostilities, then the imposition of restrictive quotas, and finally by the Great Depression and the Second World War. Whereas annual immigration averaged nearly 900,000 persons per year between 1900 and 1914, it averaged just 140,000 per year between 1930 and 1960 (Massey 1995). As a result, the United States of the 1960s and 1970s was whiter and more native-born than at any other point in its history. In the 1970s, the percentage foreign born fell below 5% for the first and only time in American history, and African Americans constituted just 11% of the population, with Hispanics making up less than 5% and Asians a fraction of 1% of all U.S. residents (Massey 2007). If an interviewer phoned or knocked on a door, not only would he or she be likely to find someone home, that person would almost certainly be a native English speaker and 83% of the time would be a white person of European origin.

Owing to changes in U.S. immigration policy and the globalization of the economy after 1970, mass immigration experienced a remarkable revival to transform quite radically the racial and ethnic composition of the United States (Massey and Taylor 2004). By the year 2000, 11% of all U.S. residents were foreign born, 14% were Hispanic, 13% were African American, and 5% were Asian (Massey 2007). Increasingly, interviewers calling on households in the United States are likely to encounter a non-English speaker. As of 2000, about one in five Americans spoke a language other than English at home, but in key gateway states the figure was much higher: around 40% in California, 31% in Texas, 28% in New York, 26% in New Jersey, 23% in Florida, and 19% in Illinois (U.S. Bureau of the Census 2010). Although immigrants remain concentrated in these historical destination areas, during the 1990s immigration expanded geographically to become a nationwide phenomenon, and the fastest growing foreign populations were in places such as Georgia, Iowa, North Carolina, and Minnesota (Massey 2008).

The racial and ethnic diversification of the United States through immigration and the fragmentation of the American family were also accompanied by a remarkable rise in income inequality (Piketty and Saez 2003). After 1970, incomes at the lower end of the distribution fell, those in the middle sectors stagnated, and only those in the top 20% rose. Moreover, the higher up the income distribution one looked, the greater the increase in income. Thus, the income of the top 10% rose less than that of the top 5%, which rose less than that of the top 1%, which rose less than that of the top 0.5%, and so on. Between 1970 and 2008, total inequality rose by 19% to make the United States the most unequal country in the developed world (Massey 2007).

As the nation was becoming more segmented on the basis of race, ethnicity, and family status, in other words, it was also becoming vastly more unequal with respect to wealth and income, thus increasing the social distance between the typical interviewer, usually a middle class woman, and the average respondent. In the present day, not only is a respondent much more likely to speak a language other than English, but he or she is also likely to occupy a markedly different social status, while working more and spending less time at home.

Decline of Social Capital

The fragmentation of American society along the lines of race, class, and income has been associated with a growing ideological polarization in politics and an apparent decline in the willingness of people to contribute to the public good, a trend that Robert Putnam (2001) in his well-known book Bowling Alone identified as a decline in social capital. Social capital refers to benefits and resources that originate in ties between people. Owing to many of the trends described above, distrust of government, public officials, and authority figures generally has risen in recent years, and the willingness of people to invest in public goods other than defense and indexed entitlements has generally eroded. Even if survey researchers do find eligible respondents at home, therefore, they are much less likely to find someone who is willing and cooperative, given the high level of distrust, cynicism, and polarization that now prevails in American society (McCarty, Poole, and Rosenthal 2006).

One oft-cited reason for the decline in public trust is the rise in violent crime that occurred during the 1960s and 1970s (Simon 2009). However, crime reached a plateau in the late 1980s and has since declined steadily, reaching record low levels in the first decade of the new century. According to the FBI’s Uniform Crime Reports, the violent crime rate dropped from a peak of 52.3 per thousand persons in 1981 to just 16.9 in 2009 (Federal Bureau of Investigation 2010). In other words, response rates and crime rates declined together, so the rise in crime cannot account for the secular decline in response rates. Indeed, as shown in this volume, the association between crime and response rates is rather strongly positive. It may be that fear of crime keeps household residents huddled at home and therefore accessible to survey researchers, and that its decline undid this “hunkering effect.” Perceptions about crime, however, are quite different from actual trends in crime, as media coverage of violent crime has increased dramatically since the 1970s, leading people to believe that the United States is more dangerous than it actually is (Robinson 2011).

SURVEYS AND TECHNOLOGY

As already noted, surveys are not only affected by changes in society but also by changes in technology, which often serve to exacerbate, facilitate, and encourage broader social changes that originate in society more generally. Social fragmentation and class segmentation, for example, have been reinforced by the demographic targeting of media made possible by the proliferation of cable channels and the use of information technology to divide consumers on the basis of race, class, and place of residence (Weiss 1988). Increasingly Americans no longer consume a common popular culture, but a fragmented, variegated culture that is segmented by advertisers and programmers and specifically targeted to different “lifestyle clusters,” thereby accentuating difference rather than emphasizing commonality (Weiss 2000).

Private Security Systems

The decline of social capital and public trust has gone hand-in-hand with technological changes in the security industry that enable people to separate themselves and their families from the rest of society and thus raise barriers to contact and communication from outsiders (Davis 2006). Alarm systems and surveillance cameras are increasingly placed around homes and in apartment buildings to monitor strangers, exclude outsiders, and generally block out unwanted intruders—including survey research interviewers. Well-to-do apartment dwellers have always had these functions performed by doormen and concierges, but closed-circuit camera and buzzer systems have made these services increasingly available to the denizens of less costly apartments. Moreover, it is not just buildings that are cordoned off from society, but entire communities that are now protected by walls and gates overseen by private security firms (Blakely and Snyder 1999). Making personal contact with a potential respondent now involves much more than simply walking up to a door and knocking.

Voice Mail and Caller ID

Just as new technologies have raised spatial barriers to survey researchers seeking to arrange an in-person interview, they have also raised social barriers to those seeking to undertake a telephone interview. Voice mail has been available for several decades, enabling people to screen incoming calls---picking up or answering only those calls deemed worthy of their time and attention while letting the rest go unanswered. More recently, the spread of caller ID has allowed phone users to see at a glance who is attempting to contact them and to avoid unwanted, unfamiliar, or suspicious telephone numbers, again providing a convenient screen that prevents interviewers from even having the opportunity to present a personal recruitment script.

Cell Phones

Access to filtering devices such as voice mail and caller ID has been greatly facilitated by the spread of cell phones and handheld internet devices. In 1990, there were only 21.1 cell phone users per 1,000 persons; by 2005 that number had grown to 683, and in 2009 it exceeded 900 (Mona 2010). Cell phone usage varies sharply by income, education, and especially age. According to a recent survey by the Pew Research Center, 80% of persons aged 18-29 had a cell phone or equivalent handheld device, compared with 66% of those aged 30-49, 42% of those aged 50-64, and 16% of those aged 65 or older. Usage by education ranged from 41% among high school dropouts to 69% among college graduates; by income, from 46% among those earning less than $30,000 per year to 76% among those earning $75,000 or more (Rainie 2010). As cell phones penetrated the population, moreover, use of land lines fell, slowly at first but dramatically after 2000. According to a 2009 survey by the National Center for Health Statistics, 25% of U.S. households had no landline, and another 15% had a landline but received most of their calls via wireless (Blumberg and Luke 2010).

A major innovation in survey research has been Random Digit Dialing (RDD), a method developed in the 1970s in which phone numbers are randomly generated within particular combinations of area codes and prefixes (Waksberg 1978). Many such numbers turn out not to be working numbers, but when a connection is made the call can be immediately dispatched to an interviewer standing by on the line. Since area codes and prefixes (the first three digits of a local number) historically have been geographically allocated, RDD was easily adapted to construct samples stratified by areas.
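The mechanics of generating candidate numbers within area-code/prefix combinations are simple to sketch. The Python fragment below is an illustrative simplification, not Waksberg's two-stage design itself: it assumes a supplied list of valid area-code/prefix pairs and simply appends random four-digit suffixes, so most of the resulting numbers would be nonworking---precisely the inefficiency that Waksberg's clustered procedure was designed to reduce.

```python
import random

def rdd_sample(area_prefix_pairs, n, seed=None):
    """Generate n candidate phone numbers by appending random
    four-digit suffixes to known area-code/prefix pairs.
    Illustrative sketch only; a production RDD design (e.g.,
    Waksberg 1978) adds a clustering stage to raise the share
    of working numbers."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        area, prefix = rng.choice(area_prefix_pairs)
        suffix = rng.randrange(10000)  # random suffix 0000-9999
        numbers.append(f"({area}) {prefix}-{suffix:04d}")
    return numbers

# Hypothetical area-code/prefix pairs chosen purely for illustration
sample = rdd_sample([("609", "258"), ("301", "294")], n=5, seed=42)
```

Because area codes and prefixes were geographically allocated, stratifying the input list of pairs by region is all it takes to produce the area-stratified samples described above.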

RDD dramatically reduced the cost of surveys and became the standard methodology in political polling, market research, and many academic studies. Not only was the method inexpensive, but it yielded highly representative samples at a time when land lines were the primary means of phone communication and nearly all American households had a phone. Given the spread of cell phones, these conditions no longer hold, and the density of land lines can only be expected to decline in future years. In theory, RDD can be applied to cell phones; but whereas one can assume that a landline respondent is comfortably situated at home, a cell phone may be answered at work, while driving, while walking, while interacting with friends, or in almost any other situation. The researcher has little control over the context of recruitment or interview, and all cell phones include caller ID to enable call screening (whereas only some land lines have this feature). As a result, although it is technically possible to include cell phones in RDD surveys, converting cell contacts into interviews has proven quite costly, and response rates remain quite low (Brick et al. 2006, 2007). In addition, because cell phone area codes are no longer necessarily tied to a specific geographic area (people often move and retain their original cell number), the suitability of RDD for samples of specific areas is also compromised.

Technology and Survey Fatigue

As RDD made polling for a variety of purposes cheaper and easier, the number of surveys expanded. In addition to the proliferation of demanding academic and government surveys, which increasingly require a sustained longitudinal commitment, and a rising number of surveys by polling organizations, the use of surveys by television networks, newspapers, magazines, marketing firms, corporations, consumer consultants, and politicians has become increasingly common. Surveys are also increasingly used as teasers for fund-raising, both by robocallers and by direct marketing specialists hawking candidates, ideas, and products.

With more people working longer hours and enduring the stress of a long commute, the only time interviewers can reasonably expect people to be home is the early evening, and phone solicitations have increasingly concentrated in the hours from 6 to 8 p.m. and on weekends. Various studies have demonstrated that these hours and days are the most productive for surveys (e.g., Weeks, Kulka, and Pierson 1987). Although commercial solicitations were restricted by the Telephone Consumer Protection Act of 1991, charitable requests, political petitions, and other non-commercial solicitations continue to be permitted and continue to concentrate in the evening hours, right after people arrive home from work and precisely when they are least disposed to engage in an extended survey interaction. Although difficult to measure precisely, the proliferation of survey demands and other solicitations enabled by RDD and related technologies, and their concentration in the few remaining hours of in-home leisure time, have been hypothesized to produce a kind of "survey fatigue" in which people simply lose interest in responding to any survey under any conditions (Porter, Whitcomb, and Weitzer 2004).

THE FUTURE OF SOCIAL SURVEYS

It is never easy to predict what the future will bring but, in the case of social surveys, a few predictions seem relatively safe. Much of what we say here echoes Kreuter’s observations in this volume. First, the trends that we have described here and that our colleagues have documented in greater detail throughout this volume are unlikely to go away any time soon. Response rates are likely to continue to fall and survey costs are likely to continue to rise. The societal and technological trends that brought us to the current situation are not likely to reverse themselves quickly. Survey researchers will have to continue to try to adjust to these forces.

Second, we are likely to see serious efforts to overhaul one or more of the data series that are currently the mainstays of the federal statistical system. Many of the important surveys carried out regularly by the federal statistical agencies---the National Crime Victimization Survey, the American Community Survey, the Consumer Expenditure Survey, the Current Population Survey, the National Health Interview Survey, to name just a few---have not undergone major redesigns in decades. The rising costs of these surveys and the likelihood of budget cuts for their sponsoring agencies make it likely that one or more of them will undergo thoroughgoing redesigns within the next ten years. There may, in addition, be renewed efforts to consolidate federal data collection into fewer surveys to save money and reduce the burden on the public. The current model, featuring a large number of high-quality but costly recurring face-to-face surveys, may soon become unsustainable.

A third trend that is likely to continue or accelerate is the reliance on non-survey data to assist in survey estimation and, in some cases, to supplant self-report data. Surveys may attempt to utilize paradata more effectively for adjustment or imputation purposes (for an extensive discussion, see Olson, this volume). Similarly, they may use interviewer observation data or administrative records in place of data obtained from sample members. At least one major national survey---the National Immunization Survey, sponsored by the Centers for Disease Control and Prevention and carried out by the National Opinion Research Center---already uses a combination of data obtained directly from respondents and data obtained from their medical providers; other surveys also combine survey reports with medical or administrative records. Czajka (this volume) and Smith and Kim (this volume) both discuss the use of external data for nonresponse adjustment and other survey purposes. More extensive use of paradata, interviewer observations, and administrative records may help reduce budget pressures on survey organizations and may also reduce survey fatigue among members of the general public. We are likely to see increasing use of non-survey data for survey purposes in the future.

Although it seems unlikely that surveys will ever again be seen in quite the idealistic light in which Harris, Gallup, and Roper saw them in the 1940s and 1950s, the apparent success of advertising for the 2010 census suggests that, when surveys are presented in the right way, the public is still willing to provide the information that the government and other researchers need. Perhaps it is time for an industry-wide effort to improve the image of survey research and to differentiate legitimate social scientific surveys from the onslaught of unwanted solicitations that most Americans are trying so hard to fend off.

Contributor Information

Douglas S. Massey, Office of Population Research, Princeton University, Wallace Hall, Princeton, NJ 08544, 609 258 4949, dmassey@princeton.edu.

Roger Tourangeau, Westat Corporation, 1600 Research Boulevard, Rockville, MD 20850, 301 294 2828, RogerTourangeau@westat.com.

References

  1. Abraham Katharine G, Nixon J Alice. Large national surveys as platforms for social science research; Background document prepared for the Workshop on the Future of Observatories in the Social Sciences; Alexandria, VA, December 17, 2010. National Science Foundation; 2010.
  2. Anderson Margo. The American Census: A Social History. New Haven, CT: Yale University Press; 1988.
  3. Bales Kevin. Charles Booth's survey of Life and Labour of the People in London 1889-1903. In: Bulmer Martin, Bales Kevin, Sklar Kathryn Kish, editors. The Social Survey in Historical Perspective 1880-1940. Cambridge, UK: Cambridge University Press; 1991. pp. 66–110.
  4. Bianchi Suzanne M, Robinson John P, Milkie Melissa A. Changing Rhythms of American Family Life. New York: Russell Sage Foundation; 2006.
  5. Blakely Edward J, Snyder Mary G. Fortress America: Gated Communities in the United States. Washington, DC: Brookings Institution; 1999.
  6. Blumberg Stephen J, Luke Julian V. Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December 2009. Bethesda, MD: National Center for Health Statistics; 2010. [Accessed December 27, 2010]. http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201005.htm.
  7. Box-Steffensmeier Janet M, Jones Bradford S. Event History Modeling: A Guide for Social Scientists. New York: Cambridge University Press; 2004.
  8. Bregger John E. The Current Population Survey: A historical perspective and BLS' role. Monthly Labor Review. 1984;107(6):8–14.
  9. Brick J Michael, Brick Pat D, Dipko Sarah, Presser Stanley, Tucker Clyde, Yuan Yangyang. Cell phone survey feasibility in the U.S.: Sampling and calling cell numbers versus landline numbers. Public Opinion Quarterly. 2007;71(1):23–39.
  10. Brick J Michael, Dipko Sarah, Presser Stanley, Tucker Clyde, Yuan Yangyang. Nonresponse bias in a dual frame sample of cell and landline numbers. Public Opinion Quarterly. 2006;70(5):780–793.
  11. Bulmer Martin. W.E.B. DuBois as a social investigator: The Philadelphia Negro 1899. In: Bulmer Martin, Bales Kevin, Sklar Kathryn Kish, editors. The Social Survey in Historical Perspective 1880-1940. Cambridge, UK: Cambridge University Press; 1991a. pp. 170–188.
  12. Bulmer Martin. The decline of the social survey movement and the rise of American empirical sociology. In: Bulmer Martin, Bales Kevin, Sklar Kathryn Kish, editors. The Social Survey in Historical Perspective 1880-1940. Cambridge, UK: Cambridge University Press; 1991b. pp. 291–315.
  13. Bulmer Martin, Bales Kevin, Sklar Kathryn Kish. The social survey in historical perspective. In: Bulmer Martin, Bales Kevin, Sklar Kathryn Kish, editors. The Social Survey in Historical Perspective 1880-1940. Cambridge, UK: Cambridge University Press; 1991. pp. 1–48.
  14. Converse Jean M. Survey Research in the United States: Roots and Emergence 1890-1960. New Brunswick, NJ: Transaction Publishers; 2009.
  15. Croker Richard, Dychtwald Ken. The Boomer Century, 1946-2046: How America's Most Influential Generation Changed Everything. New York: Springboard Press; 2007.
  16. Davis Mike. City of Quartz: Excavating the Future in Los Angeles. New York: Verso; 2006.
  17. Duany Andres, Plater-Zyberk Elizabeth, Speck Jeff. Suburban Nation: The Rise of Sprawl and the Decline of the American Dream. New York: North Point Press; 2000.
  18. DuBois WEB. The Philadelphia Negro. New York: Cosimo Classics; 2010.
  19. Federal Bureau of Investigation. Uniform Crime Reports. Washington, DC: U.S. Department of Justice; 2010. [Accessed December 28, 2010]. http://www.fbi.gov/about-us/cjis/ucr/ucr.
  20. Grogger Jeffrey, Karoly Lynn A. Welfare Reform: Effects of a Decade of Change. Cambridge, MA: Harvard University Press; 2005.
  21. Groves Robert M. The structure and activities of the U.S. statistical system: History and recurrent challenges. Annals of the American Academy of Political and Social Science. 2010;631:163–179.
  22. Igo Sarah E. The Averaged American: Surveys, Citizens, and the Making of a Mass Public. Cambridge, MA: Harvard University Press; 2007.
  23. Massey Douglas S. The new immigration and the meaning of ethnicity in the United States. Population and Development Review. 1995;21:631–52.
  24. Massey Douglas S. Strangers in a Strange Land: Humans in an Urbanizing World. New York: W.W. Norton; 2005.
  25. Massey Douglas S. Categorically Unequal: The American Stratification System. New York: Russell Sage Foundation; 2007.
  26. Massey Douglas S. New Faces in New Places: The New Geography of American Immigration. New York: Russell Sage Foundation; 2008.
  27. Massey Douglas S, Taylor J Edward. International Migration: Prospects and Policies in a Global Market. Oxford: Oxford University Press; 2004.
  28. McCarty Nolan, Poole Keith T, Rosenthal Howard. Polarized America: The Dance of Ideology and Unequal Riches. Cambridge, MA: MIT Press; 2006.
  29. McLanahan Sara. Diverging destinies: How children are faring under the second demographic transition. Demography. 2004;41:607–27. doi: 10.1353/dem.2004.0033.
  30. Mona Susan. A report on cell phone usage trends. Ezine @rticles. 2010. [Accessed December 27, 2010]. http://ezinearticles.com/?A-Report-on-Cell-Phone-Usage-Trends&id=1336333.
  31. Neyman Jerzy. On the two different aspects of the representative method: The method of stratified sampling and the method of purposive selection. Journal of the Royal Statistical Society. 1934;97(4):558–625.
  32. Piketty Thomas, Saez Emmanuel. Income inequality in the United States, 1913-1998. Quarterly Journal of Economics. 2003;158:1–16.
  33. Porter Stephen R, Whitcomb Michael, Weitzer William. Multiple surveys of students and survey fatigue. New Directions for Institutional Research. 2004;121:63–73.
  34. Prewitt Kenneth. Science starts not after measurement but with measurement. Annals of the American Academy of Political and Social Science. 2010;631:7–17.
  35. Putnam Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon and Schuster; 2001.
  36. Rainie Lee. Internet, broadband, and cell phone statistics. Washington, DC: Pew Research Center; 2010. [Accessed December 27, 2010]. http://www.pewinternet.org/Reports/2010/Internet-broadband-and-cell-phone-statistics.aspx.
  37. Rindfuss Ronald A, Sweet James A. Postwar Fertility Trends and Differentials in the United States. New York: Academic Press; 1977.
  38. Robinson Matthew B. Media Coverage of Crime and Criminal Justice. Durham, NC: Carolina Academic Press; 2011.
  39. Simon Jonathan. Governing Through Crime: How the War on Crime Transformed American Democracy and Created a Culture of Fear. New York: Oxford University Press; 2009.
  40. Sklar Kathryn Kish. Hull House Maps and Papers: social science as women's work in the 1890s. In: Bulmer Martin, Bales Kevin, Sklar Kathryn Kish, editors. The Social Survey in Historical Perspective 1880-1940. Cambridge, UK: Cambridge University Press; 1991. pp. 111–147.
  41. Tourangeau Roger. Survey research and societal change. Annual Review of Psychology. 2004;55:775–801. doi: 10.1146/annurev.psych.55.090902.142040.
  42. U.S. Bureau of the Census. American Factfinder. Washington, DC: U.S. Bureau of the Census; 2010. [Accessed December 29, 2010]. http://factfinder.census.gov/home/saff/main.html?_lang=en.
  43. Waksberg Joseph. Sampling methods for random digit dialing. Journal of the American Statistical Association. 1978;73(1):40–46.
  44. Weeks Michael F, Kulka Richard A, Pierson Stephanie A. Optimal call scheduling for a telephone survey. Public Opinion Quarterly. 1987;51(4):540–549.
  45. Weiss Michael J. The Clustering of America. New York: Harper & Row; 1988.
  46. Weiss Michael J. The Clustered World: How We Live, What We Buy, and What It All Means About Who We Are. New York: Little, Brown & Co; 2000.
  47. Yeo Eileen Janes. The social survey in social perspective 1830-1930. In: Bulmer Martin, Bales Kevin, Sklar Kathryn Kish, editors. The Social Survey in Historical Perspective 1880-1940. Cambridge, UK: Cambridge University Press; 1991. pp. 49–65.