Abstract
Population-based estimates of substance use patterns have now been reported regularly for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified.
1. Introduction
For the past 50 years, social and epidemiologic surveys have been employed to estimate and track the substance use patterns of representative samples of both adolescents and adults in the United States and other countries. Although many of these surveys are of exceptional quality and rigor (e.g., the Monitoring the Future survey, the National Survey on Drug Use and Health, and the Youth Risk Behavior Survey), for almost as long there have been methodological criticisms and skepticism regarding their ability to accurately portray the behaviors they seek to measure [1–7]. Addressing these questions is important given the lack of alternative methodologies for efficiently monitoring substance use behavior within large national and subnational populations. The goal of this paper is to review and summarize the available empirical evidence addressing these questions, to identify gaps in our knowledge base regarding this issue, and to make some recommendations for future research to address those knowledge gaps.
2. The Total Survey Error Model
A useful framework for conceptualizing error in substance use surveys is the total survey error (TSE) model. The TSE model first delineated by Groves [8] focused on sampling, coverage, nonresponse, and measurement errors in surveys. This model successfully organized decades of empirical research within a single unifying theoretical framework. More recently, Lavrakas [9] has presented an expanded elaboration of the TSE model, in which he identifies two general classes of errors, measurement and representation, and then explores multiple subclasses of errors within each. Table 1 lists the various elements of the Lavrakas TSE model.
Table 1. Elements of the Lavrakas total survey error (TSE) model [9].

| Errors of representation | Errors of measurement |
| --- | --- |
| Coverage errors | Specification errors |
| Sampling errors | Measurement errors |
| Nonresponse errors | Processing errors |
| Adjustment errors | Inferential errors |
Briefly, errors of representation are those concerned with technical problems that may impede a survey's ability to accurately mirror the population that the survey seeks to represent. These include failure to use sample frames that provide adequate coverage of the population being studied (coverage errors), imprecision in the sample(s) drawn from a sample frame (sampling error), errors associated with failure to contact or complete interviews with all sampled respondents, and failure to obtain answers to all questions included in a survey instrument (nonresponse errors), as well as failure to make adequate adjustments for complex sample designs and survey nonresponse (adjustment errors).
In contrast, errors of measurement involve failures to adequately assess the variables of interest in a survey. These include specification errors, which involve failures to correctly conceptualize survey constructs, and measurement errors, which include factors external to the construct being measured that nonetheless influence measurement quality. Processing errors are defects in the construction of survey data sets and/or final analytic variables, while inferential errors involve difficulties or failures in making adequate sense of the final survey data. The following two sections organize the empirical literature concerned with errors in substance use and misuse surveys within this TSE framework. Each of these error sources, of course, is broadly relevant to health survey research in general. Our goal here is to review their relevance specifically to substance use surveys.
3. Errors of Representation
3.1. Coverage Errors
Errors in coverage are generally a consequence of employing a survey sampling frame that does not include all individuals in the population being studied, or, alternatively, by employing methods that do not provide all members of the population of interest some probability of being sampled. As with all other elements of the TSE framework, this type of error is not unique to substance use surveys. Nonetheless, because likelihood of falling into a potential sample frame may in some cases be associated with substance use behaviors, substance use research may be uniquely vulnerable to coverage error.
In most community epidemiological surveys, there are many social groups that may be systematically excluded from commonly available sample frames. Some of these groups include homeless persons, individuals currently hospitalized, college students living in dormitories, persons incarcerated in the criminal justice system, and members of the military living on military bases. Substance use may be particularly high within some of these nonhousehold populations [10, 11]. Weisner and colleagues [7] investigated this problem by comparing prevalence estimates from a general population community survey with data obtained from interviews with nonhousehold populations found in several inpatient and outpatient settings, such as alcohol, drug, or mental health treatment, criminal justice, and/or welfare services. Not surprisingly, substance use was much more common among persons in these settings. For example, 11.3% of the household sample was defined as problem drinkers, compared to 43.1% of those found in nonhousehold agency settings. The disparities were even greater for indicators of weekly drug use (5.5% in the household sample versus 36.5% in the agency sample) and both problem drinking and weekly drug use combined (2.2% in the household sample versus 18.7% in the agency sample). Other research provides similar evidence of increased substance use and misuse among persons less likely to be sampled within single family households as part of community-based epidemiologic surveys [12].
There is also some evidence in the US that failure to incorporate cell-phone-only households into random digit dialed (RDD) telephone samples can lead to underrepresentation of young adults who are at higher risk for substance use behaviors. Delnevo et al. [13] found significantly decreased measures of binge drinking and heavy alcohol consumption between 2001–2003 and 2003–2005 in the national Behavioral Risk Factor Surveillance System (BRFSS) telephone surveys. Other research, employing the US National Health Interview Survey, which relies on face-to-face interviews, has demonstrated that adults in cell-phone-only households are more likely to report past year binge drinking (37.6%), compared to those residing in households with landlines (18.0%) and those in households with no telephone service (23.0%) [14]. The effects of excluding cell-phone-only households from survey estimates of binge drinking are particularly serious for young adults (aged 18–29 years) and low income persons [15]. Similar relationships between type of phone subscribership and substance use reports have been identified in Australia [16] and in other US studies [17]. As rates of cell-phone-only residences continue to grow [18], the coverage error associated with excluding them from telephone samples will only increase, and it will become increasingly difficult to produce credible prevalence estimates using traditional landline-only sample frames.
School-based surveys are also subject to coverage errors, as substance use rates have been shown to be higher among adolescents who drop out of school [19, 20]. Hence, surveys of adolescents that are school based often underestimate substance use within this population, although it is important to acknowledge that many school-based surveys do not claim to generalize beyond school-attending populations. A recent analysis by Gfroerer and colleagues [21] using pooled data from the 2002–2008 NSDUH (National Survey on Drug Use and Health, previously known as the National Household Survey on Drug Abuse or NHSDA) surveys reported that substance use estimates were higher for most substances among school dropouts, compared to same-aged students. The effects of dropouts on overall estimates increased from the 8th to the 12th grades, as the numbers of dropouts increased. At the 12th grade level, they found that failure to account for dropouts would miss more than half of past year cocaine users, more than half of all lifetime Ecstasy users, 30% of current binge alcohol users, and 25% of current alcohol users.
Because school absenteeism is also known to be associated with increased substance use [22–25], Gfroerer and colleagues [21] additionally investigated the effects of school absenteeism on substance use prevalence estimates in the NSDUH. They reported that those students who missed more days of school were also more likely to be current alcohol users, binge drinkers, and marijuana users. In recognition of this problem, some surveys, such as the YRBS, conduct “make-up” sessions to maximize student opportunities to participate and minimize coverage errors.
3.2. Sampling Errors
Both probability and nonprobability sampling methods are commonly applied in substance use surveys. When probability sampling strategies are employed, all elements within the sample frame have a known, albeit not necessarily equal, probability of selection. The precision of survey statistics derived from such samples can be calculated with a good degree of confidence and used to estimate the sampling error associated with those statistics. All other things being equal, the size of a random survey sample is inversely associated with the degree of potential sampling error associated with it. The precision of survey estimates also decreases as probability samples deviate from simple random sampling designs, a commonplace occurrence designed to reduce survey costs. Of all the sources of total survey error, the sampling errors related to probability-based sample designs are probably the most well understood, and definable, in practice.
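To make this inverse relationship concrete, recall the textbook standard error of a prevalence estimate drawn from a simple random sample (a general survey-sampling result, not a formula specific to any survey reviewed here):

$$SE(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$

For a past-month use estimate of $\hat{p} = 0.10$, the standard error is roughly 0.013 with $n = 500$ but only 0.004 with $n = 5{,}000$; halving the sampling error requires quadrupling the sample size. Deviations from simple random sampling, such as clustering, inflate this quantity (see Section 3.4).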
Nonprobability samples are commonly used when research questions focus on special populations believed to be at increased risk for substance use and misuse. There are a variety of well-known nonprobability, or convenience, sample designs commonly used in practice. One of the more popular approaches currently is known as respondent driven sampling (RDS), which was developed by Heckathorn [26, 27] and which has been used in numerous substance use studies [28–30]. Other popular nonprobability strategies in substance use research include venue and facility-based sampling [31–34], snowball sampling [35, 36], time-space sampling [37–39], and advertising for volunteers [40–42]. An important advantage of these designs is their cost effectiveness when researching rare or hidden populations, such as illicit drug users. Because probabilities of selection are unknown, however, there are no definable sampling errors associated with these designs; rather, nonprobability sample designs typically suffer from large coverage errors, and their sampling errors simply cannot be quantified.
3.3. Nonresponse Errors
It is common knowledge that unit response rates in general population surveys have been declining for some time [43–45]. Survey response rates have been historically employed as a proxy indicator of survey quality in general and nonresponse error in particular [46]. Recent research, though, has demonstrated that response rates per se are not necessarily associated with nonresponse bias [47–49]. Rather, it is the degree to which survey respondents and nonrespondents differ from one another in terms of variables of interest to the survey, combined with the survey's response rate that defines nonresponse bias. A British study reported by Plant et al. [50], for example, compared two sets of survey data, with 25% and 79% response rates, respectively. No important differences in self-reports of alcohol consumption were found between the two.
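The logic of this point can be summarized with the standard deterministic expression for the nonresponse bias of a sample mean (again, a general survey-methodology result rather than a finding specific to substance use):

$$\operatorname{bias}(\bar{y}_r) = \frac{n_{nr}}{n}\left(\bar{y}_r - \bar{y}_{nr}\right)$$

where $\bar{y}_r$ and $\bar{y}_{nr}$ are the means for respondents and nonrespondents and $n_{nr}/n$ is the nonresponse rate. The Plant et al. finding is thus unsurprising if drinkers and abstainers responded at similar rates ($\bar{y}_r \approx \bar{y}_{nr}$), while even a modest nonresponse rate can produce meaningful bias when, as some of the evidence below suggests, heavy users are systematically harder to contact and interview.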
When considering substance use behaviors, there are reasons to be concerned about differences between survey respondents and nonrespondents. Pernanen [4] suggested many years ago that persons who drank heavily might be more difficult to contact as part of survey efforts and would be less likely to cooperate when contacted. In a Canadian survey, De Lint [11] reported that more in-person contact attempts were required to interview those respondents who reported greater numbers of purchases of alcoholic beverages. Cottler et al. [51] additionally reported that respondents diagnosed with alcohol abuse and dependence required greater numbers of contact attempts to complete interviews. Crawford [10] also reported more alcohol consumption among those respondents most difficult to contact. Using a population register in Sweden, Tibblin [52] found higher rates of survey nonparticipation among middle-aged men who were known to have experienced alcohol related problems. There is also some general evidence that survey nonresponse is greater among persons in poor health [53, 54]. A Swedish study reported that survey respondents were less likely than nonrespondents to have been hospitalized with alcohol diagnoses [55]. These findings are generally interpreted as evidence that heavy drinking may be a barrier to participation in social surveys, both because heavy drinkers are more difficult to contact and because they are more difficult to convince to participate once contacted [56]. Other investigations, though, have reported no differences in alcohol use between those who do and do not participate in epidemiologic surveys [57–60], and alcohol abstainers have also been found to be underrepresented [56].
It should also be noted that standard field procedures in many surveys actually exclude active substance users from participation. Many research protocols explicitly require interviewers not to conduct interviews with individuals who are visibly intoxicated or appear to be high on other substances. Kish [61] commented on this problem nearly 50 years ago, referencing a case in which a respondent was drunk by the time they came home from work every day throughout a survey's field period. While such protocols are necessary for orderly data collection and are invoked only infrequently in practice, their potential effects on nonresponse bias must nonetheless be considered. In addition, despite some claims to the contrary [62], knowledge that a survey is concerned with substance use appears to have no effect on respondent willingness to participate [55, 63].
Other relevant information comes from studies of attrition in panel surveys, in which the same respondents are interviewed at multiple time points. A number of such investigations have documented higher levels of attrition among heavier alcohol and drug users [64–73]. In contrast, some other research has found higher attrition among nonusers [74], and Thygesen and colleagues [75] found both high alcohol intake and abstinence to be associated with increased likelihood of panel attrition. In their study, attrition was also found to be predictive of increased mortality from alcoholic liver cirrhosis and other alcoholic liver diseases. Still other research has found no differences between those who do and do not drop out of panel studies [76].
Evidence from research specifically designed to assess nonresponse bias is also informative. Several types of nonresponse bias studies are routinely conducted. One type, the nonresponse follow-up survey, typically involves attempting to obtain survey data from nonrespondents to the primary survey [77]. Caspar [57], for example, conducted follow-up face-to-face interviews with a sample of nonrespondents to the 1990 NHSDA, concluding that initial nonrespondents were more likely to report lifetime drug use. Lahaut et al. [56] provide an example of a nonresponse follow-up survey in which individuals who initially did not respond to a mail survey were subsequently visited by interviewers to complete a face-to-face interview. These analyses suggested that abstainers were underrepresented in the initial survey. Hill et al. [78] report a telephone follow-up survey of nonrespondents to a primary mail survey. They also found lower reporting of unsafe alcohol consumption among initial nonresponders. Lemmens et al. [60] conducted a telephone follow-up survey of nonrespondents to a face-to-face survey, concluding that nonresponse had only small effects on self-reporting of alcohol consumption. An important potential limitation when interpreting findings from such follow-up surveys is the use of different modes of data collection in the primary survey and the follow-up effort. Given what is known about mode differences in reporting of substance use behaviors (see Section 4.2), it would not be surprising for a telephone follow-up to a self-administered survey to suggest that the initial survey overestimated substance use, whereas a self-administered nonresponse follow-up to an initial interviewer-assisted effort might suggest that it had underestimated substance use. In each case, the effects being attributed to nonresponse bias may actually be a consequence of mode differences rather than systematic nonresponse. Indeed, there are several examples in the literature of surveys that relied on interviewer-assisted follow-up interviews (cf. Hill et al. [78]; Lahaut et al. [56]) and produced data suggesting that primary survey respondents overreport substance use behaviors.
Examples of other types of nonresponse bias analyses that focus on respondent substance use patterns include studies that compare early versus late responders [56, 79–81]. An example is a study reported by Zhao et al. [62], who compared the answers of persons responding early and late to the Canadian Addictions Survey. Late responders were more likely to be male, young adults, and substance users and to have higher incomes and education. Such studies employ a continuum of resistance framework, which assumes that respondents who require greater effort to contact and interview are more similar to nonrespondents than are those who initially agree to survey requests [82]. Other strategies compare estimates from multiple surveys [62], compare frame data for respondents, nonrespondents, and the full sample [60], or compare estimates from surveys that have high versus low response rates [50].
Another useful strategy for assessing nonresponse bias is to supplement survey data with information obtained from other sources, such as administrative records. For example, Gfroerer et al. [83] examined response patterns in the 1990 National Household Survey on Drug Abuse by merging survey findings with records from the 1990 Decennial Census, an approach that required special authorization from the government, given the strict data protections associated with the census. They found that persons with some characteristics known to be associated with substance use (i.e., living in urban areas, being male) had lower response rates, but that persons with other characteristics believed to be associated with nonuse (older age and higher income levels) also had lower response rates. They concluded that these various nonresponse correlates would likely cancel out much of the bias that either set might have introduced into the survey estimates.
Finally, it is also important to recognize that high nonresponse rates to individual survey questions (a.k.a. item nonresponse) may also be an indicator of data quality problems in substance use surveys. Some research suggests demographic variability in nonresponse rates to substance use questions. Owens et al. [84] found that African Americans and persons who were separated or divorced were less likely, and females and persons aged 55 and older were more likely, to answer questions concerned with their use of illicit drugs. Increased item nonresponse rates to substance use questions among minority groups have also been reported by Witt et al. [85], although Aquilino [86] reported no differences. An item nonresponse study of adolescents additionally found higher nonresponse rates to questions concerned with alcohol and marijuana use among male compared to female respondents [87].
3.4. Adjustment Errors
Errors of adjustment involve failures to account for the potential effects that a survey's sample design and execution may have upon empirical findings. These may include instances in which sample weights fail to incorporate all sample design and/or nonresponse factors, in which variances are unadjusted for the clustering of respondents within sampled geographic areas, or in which the available sample weights are not correctly used. An unfortunate example of the failure to properly employ sample weights occurred about a decade ago, when a report concerned with illegal sales of alcohol to minors in the US seriously overestimated the proportion of all alcohol sales made to underage youth. The researchers were conducting a secondary analysis of a public release version of the 1998 NHSDA and failed to weight their data for the survey's stratified sample design, in which young persons aged 12–20 were significantly oversampled. Because only persons under the age of 21 purchase alcohol illegally in the US, their overrepresentation in the unweighted NHSDA data file led to an overrepresentation of illegal sales in those data. This was an error that could easily have been avoided through the use of the preexisting sample weights. The erroneous findings, which were reported nationally, were quickly exposed as flawed [88].
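The arithmetic of this kind of error is easy to reproduce. The sketch below uses entirely hypothetical numbers (not the actual NHSDA design or estimates) to show how an unweighted analysis of a file that oversamples youth inflates any youth-linked statistic, and how applying the design weights corrects it:

```python
import numpy as np
import pandas as pd

# Hypothetical public-release microdata in which youth (12-20) are oversampled;
# the design weight restores each group's share of the population.
rng = np.random.default_rng(42)
n_youth, n_adult = 4000, 2000                      # youth deliberately oversampled
df = pd.DataFrame({
    "age_group": ["12-20"] * n_youth + ["21+"] * n_adult,
    # assumed true past-month drinking prevalences: 15% of youth, 55% of adults
    "drinker": np.concatenate([rng.random(n_youth) < 0.15,
                               rng.random(n_adult) < 0.55]),
    # weights imply a 1:5 youth-to-adult population ratio
    "weight": [1.0] * n_youth + [10.0] * n_adult,
})

unweighted = df["drinker"].mean()                  # distorted by the oversample
weighted = np.average(df["drinker"], weights=df["weight"])
print(f"unweighted prevalence: {unweighted:.3f}")  # ~0.28
print(f"weighted prevalence:   {weighted:.3f}")    # ~0.48
```

In this toy file, youth make up two-thirds of the unweighted records but only one-sixth of the weighted population, so any statistic concentrated among youth, such as illegal purchases, is inflated roughly fourfold in an unweighted analysis.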
Failure to employ nonresponse weights when survey response rates vary considerably across demographic subgroups, and when those same subgroup characteristics are correlated with substance use patterns, can also result in biased substance use estimates. In addition, adjustment errors associated with clustered sample designs (when clustering is not taken into account) can lead to survey estimates with artificially small standard errors that can be misinterpreted as being overly precise [89]. In general, avoidance of adjustment errors would seem to require analysts who possess both substantive knowledge of the addiction processes being examined and methodological knowledge and expertise regarding complex sample design and analysis procedures.
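The variance inflation produced by ignoring clustering has a simple and widely used approximation, offered here only as a general illustration:

$$\text{deff} = 1 + (m - 1)\rho$$

where $m$ is the average number of respondents per cluster (e.g., per sampled school or neighborhood) and $\rho$ is the intraclass correlation of the substance use measure. Because respondents within the same school or neighborhood tend to resemble one another, even a modest $\rho$ of 0.05 with 30 respondents per cluster inflates variances by a factor of $1 + 29(0.05) \approx 2.45$, so analyses that ignore clustering will report standard errors that are roughly $\sqrt{2.45} \approx 1.6$ times too small.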
4. Errors of Measurement
4.1. Specification Errors
When survey questionnaires do not correctly conceptualize and/or operationalize constructs of interest, they are understood to have specification errors. These can take several forms. For example, the street terminology used by drug users is often unique, constantly changing, and varies across locations. Not surprisingly, research demonstrates that the drug names employed in survey questionnaires are not always consistent with the names employed by users in the community [90, 91]. The continued introduction of new substances of course also contributes to specification errors.
In order to adequately assess substance use, it is necessary to ask respondents about all the forms of alcohol and/or drugs they may have consumed. Hence, survey questions intended to measure any alcohol or drug use must be able to capture experience with each form of these substances. Global questions that ask about use of a substance in general can be expected to miss some experiences with less common varieties of it. Although these points may seem obvious, they lead to specification errors more often than most researchers would prefer to admit. Avoiding specification errors requires careful attention during the instrument design process to the specific goals the survey is intended to serve.
4.2. Measurement Errors
Measurement error occurs when survey questions fail to measure what they were designed to measure. There are several potential sources of measurement error which must be considered when constructing a survey instrument or analyzing survey data. Broadly speaking, these include design effects, respondent effects, interviewer effects, and context effects.
4.2.1. Design Effects
Virtually every element of a survey that is exposed to respondents is likely to provide them with cues regarding the information being sought [92]. Although many if not most of these cues are unintentional from the researcher's perspective, they can nonetheless be expected to influence self-reports in ways that cannot always be anticipated or controlled. We refer to these as design related errors. Some important design issues discussed below include methods for asking about substance use, mode effects, use of skip patterns, and reference periods. Other design factors that may influence measurement quality include how clearly a survey is introduced as being concerned with substance use, the survey's sponsor, the procedures employed to obtain respondent informed consent, the use of incentives, and the survey's focus as either primarily concerned with substance use or with a broader set of topics [21]. Regarding this latter point, it has been suggested that survey respondents are more willing to discuss negative personal behaviors when they are also asked to report about positive personal behaviors and characteristics [93].
Methods for Asking about Substance Use. Of course, the wording and structure of survey questions can be expected to have a strong influence on the answers obtained, and experimental comparisons have revealed differences in the magnitude of substance use reports obtained using various question measurement strategies. Kroutil et al. [94], for example, have documented that open-ended questions seriously underestimate drug use prevalence rates. Other research has compared methods for measuring alcohol consumption. Rehm et al. [95] have reported findings from a within-subjects experiment documenting consistently higher prevalence rates for several indicators of harmful drinking when graduated-frequency measures [96] are used, in comparison to the more commonly employed quantity-frequency question format [97, 98] and weekly drinking recall questions [95]. Other studies have also found graduated-frequency measures to produce higher estimates of alcohol use in comparison to quantity-frequency measures [99, 100]. The superior performance of the graduated-frequency format appears to be based on its ability to more precisely measure irregularly high levels of consumption, although there is some evidence suggesting that the graduated-frequency approach may actually overestimate consumption [100, 101]. Other less commonly used measurement strategies, such as the yesterday (or recent recall) method, in which respondents are asked to report on their alcohol use during the previous day only, have been found to produce higher estimates than either the quantity-frequency or graduated-frequency measures [102]. The use of a daily diary protocol for collecting alcohol consumption data is frequently considered a "gold standard" measurement approach [100, 103], but it is not very practical for most survey applications.
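Because the distinction between these two formats recurs throughout this section, a small sketch may help. The response options and answers below are hypothetical, chosen only to illustrate why a graduated-frequency instrument can capture irregularly heavy drinking days that a quantity-frequency average smooths over:

```python
# Quantity-frequency (QF): one "usual quantity" x one "usual frequency" item.
usual_drinks, drinking_days_per_week = 2, 4
qf_weekly_volume = usual_drinks * drinking_days_per_week        # 8 drinks/week

# Graduated frequency (GF): how often each quantity level was consumed,
# keyed here by the midpoint of each (hypothetical) quantity band.
gf_days_per_week = {9.5: 0.0, 6.0: 0.5, 3.5: 1.0, 1.5: 2.5}     # band midpoint -> days
gf_weekly_volume = sum(mid * days for mid, days in gf_days_per_week.items())

print(qf_weekly_volume)             # 8 -- the occasional heavy day is averaged away
print(round(gf_weekly_volume, 2))   # 10.25 -- heavy days counted at their own level
```

Consistent with the studies cited above, the GF total is higher because this respondent's occasional five-to-seven-drink days enter the estimate at their actual level rather than being folded into a "usual" quantity of two.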
The design of response categories for use in quantity and frequency questions can also influence respondent self-reports. For example, Schwarz [104] has shown how simple changes in the sets of response options presented to respondents, such as emphasizing low versus high frequency events or behaviors, can have effects on overall response patterns. Indeed, Poikolainen and Kärkkäinen [105] have reported obtaining higher alcohol consumption reports when employing quantity and frequency questions that include a greater number of heavy-intake response options.
It is somewhat ironic that quantity-frequency measures remain commonly utilized in practice, despite the conventional wisdom among most substance use researchers that alcohol and drug consumption behaviors are far more variable across even brief time intervals than these questions assume [92, 106]. By their very nature, quantity-frequency items ask for average amounts of use, essentially ensuring that they will not capture episodes of heavy or binge drinking. Hasin and Carpenter [107] have documented in a community sample that as many as 30 percent of respondents report difficulty answering typical survey questions about usual drinking patterns because of changes in their drinking behavior during the time period in question, a problem that was particularly acute for persons with symptoms of alcohol dependence. The advantages that keep quantity-frequency measures popular are their simplicity, their ease of answering, and the relatively small amount of space they require in survey instruments. L. C. Sobell and M. B. Sobell [98] and Bloomfield et al. [101] provide comprehensive overviews of the strengths and limitations of various approaches to measuring alcohol consumption in survey questionnaires.
Substance Use Reference Periods. Various reference periods are used to restrict and specify the time intervals for which respondents are asked to retrospectively report their substance use activities. Most often used in practice are 30-day and 12-month reference periods, although there are many variations, and each has its own advantages and disadvantages. Recall accuracy is known to decay as these time intervals lengthen [108], and research suggests that higher alcohol prevalence estimates are obtained when shorter reference periods are employed in survey questions [109, 110]. Although more susceptible to recall error, a 12-month reference period has the advantage of being less affected by seasonal variations in substance use [92, 111]. A 30-day reference period, in contrast, may be less likely to capture binge drinking episodes. Hence, some surveys ask questions about multiple reference periods in order to address the limitations of each.
Also problematic are questions concerned with age of initiation of alcohol and other drug use. Age of first substance use, of course, is considered an important risk factor for subsequent substance abuse, and its accurate measurement is hence important [112]. Unfortunately, the length of recall necessary to correctly answer such questions can be problematic for many respondents. Forward telescoping in particular, in which respondents underestimate the length of time since an event took place, is an important threat to the quality of self-reports of age of first use [113]. Numerous studies have documented problems with accurate recall of this information [64, 114–121].
Questionnaire Skip Patterns. A common issue when designing substance use questionnaires is whether it is best to employ skip patterns, which allow respondents to avoid answering follow-up questions that are clearly not applicable to them, or to instead require all respondents to provide answers to all items. The rationale for requiring responses to all items is twofold. First, there may be privacy concerns associated with the use of skip patterns: those who report substance use will require more time to complete all follow-up questions, presumably allowing interviewers and/or other observers to infer that they are in fact substance users. Second, although requiring answers to all items is somewhat burdensome for respondents, the presence of skip patterns is likely to be quickly detected by many respondents and may motivate some to provide negative answers to filter questions in order to "skip out" of longer blocks of questions that request details regarding substance use experiences. As an example of a skip pattern, a question asking respondents if they have ever used marijuana might be employed as a filter item. Those respondents indicating that they had used marijuana would then be eligible to answer a series of follow-up questions about frequency of use, age of initiation, and so forth (see the sketch below). In contrast, avoiding skip patterns requires respondents to answer all follow-up questions, typically by selecting a "have never used marijuana" response option available with each follow-up question. Such an approach can considerably increase the burden and the amount of time necessary for nonusers of the substances being examined to complete a questionnaire. The NSDUH has historically not employed skip patterns. An experiment reported by Gfroerer et al. [83] investigated the effects of using skip patterns as part of the NHSDA. In their randomized experiment, they found significantly lower prevalence rates for the five illicit drugs examined when skip patterns were employed. Because no differences were found in alcohol use estimates, they concluded that privacy concerns associated with answering the most sensitive questions were the more likely explanation for the findings.
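A minimal sketch of the two designs, with invented item names, makes the routing difference explicit:

```python
FOLLOW_UPS = ["frequency_past_year", "age_first_use"]  # detail items about marijuana use

def skip_pattern_route(ever_used_marijuana: bool) -> list[str]:
    """Skip-pattern design: the filter item gates the follow-ups, so nonusers
    skip them entirely (a shorter, and visibly shorter, interview)."""
    return FOLLOW_UPS if ever_used_marijuana else []

def no_skip_route(ever_used_marijuana: bool) -> list[str]:
    """No-skip design: every respondent answers every item; nonusers select a
    'have never used marijuana' option on each follow-up (greater burden)."""
    return FOLLOW_UPS

print(skip_pattern_route(False))  # [] -> a 'no' to the filter ends the module
print(no_skip_route(False))       # all items asked regardless of the filter answer
```

Under the first design, a respondent motivated to shorten the interview, or to avoid being identifiable as a user, need only answer "no" to the filter item, which is the mechanism the Gfroerer et al. experiment appears to have detected.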
Mode Effects. Survey data can be collected using a variety of modalities, including self-administered paper-and-pencil or electronic questionnaires, and telephone or in-person interviews. The presence of mode effects in surveys is well recognized, and there is now a considerable body of evidence documenting the effects of mode on the quality of self-reports of substance use behaviors. In general, survey modes that rely on respondent self-administration are found to obtain greater reports of alcohol and drug use than do those modes that require interviewers to directly ask about use of these substances [55, 58, 122–130]. There is additionally some evidence that these mode effects are greater for more sensitive illicit substances, such as cocaine and marijuana, compared to alcohol use [131].
Among self-administered modes, audio-computer-assisted-self-interviews (ACASI) appear to generate higher reporting of substance use behaviors than do paper-and-pencil (PAPI) self-administered answer sheets [132, 133]. Computer-assisted questionnaires produce data that is more internally consistent and more complete, helping to reduce the need for editing, imputation, and other processing activities that may lead to processing errors (see Section 4.3) [133]. Research has also begun to explore the reliability and validity of substance use surveys conducted via the internet. Eaton and colleagues [134] randomly assigned classes of high school students to respond to PAPI or web questionnaires, concluding that there were few differences in prevalence estimates obtained across the two modes. Ramo and colleagues [135] examined the quality of self-reported marijuana use in a convenience sample of young adults who completed a web-based questionnaire, concluding that such data can be reliably collected. Bauermeister et al. [28] have reported on the use of respondent driven sampling to more systematically sample young adults to participate in a substance use survey. Other investigators have compared internet reporting of alcohol use with reports obtained from self-administered mail questionnaires and both face-to-face and telephone interviews, concluding that online reports have similar levels of measurement quality [136–138].
Among interviewer assisted modes, some evidence suggests that face-to-face interviews produce greater reports than do telephone interviews [86, 123, 139, 140], other evidence suggests no differences in substance use estimates between these two interviewer assisted modes [141, 142], and one study suggests that higher rates of some alcohol-related measures can be obtained by telephone [143]. Some research has also investigated the use of interactive voice response (IVR) systems (a.k.a. "T-ACASI": telephone audio computer-assisted self-interviewing) to improve the quality of substance use data collected by phone [144, 145].
4.2.2. Respondent Effects
Survey respondents vary considerably in their abilities and willingness to provide accurate answers to questions regarding substance use behaviors. Respondent behaviors can be understood within the framework of the generally accepted cognitive model of survey response [146], which recognizes four basic tasks required of respondents when they answer each survey question: (a) question interpretation, (b) memory retrieval, (c) judgment formation, and (d) response editing. This is a useful model for understanding how variability across respondents may influence the quality of self-reported substance use information. Evidence regarding how three of these information processing tasks may influence the quality of substance use behavior reporting is reviewed below.
Question Interpretation. Because respondents sometimes employ substance use terminology that differs from that employed in research questionnaires [91, 147], the risk of miscommunication may be greater in substance use surveys than in surveys on other topics. The complexity of some substance use terminology may also lead to respondent confusion. This may be of particular concern in surveys of adolescents, who may not always have sufficient knowledge to correctly respond to questions regarding the use of various drugs [147–149]. Johnston and O'Malley [150] have presented evidence that respondents sometimes deny, or recant, ever having used certain substances that they had previously reported using (see also the additional discussion of recanting in the Response Editing section below). Of particular relevance here is their finding that recanting varies by the type of drug being asked about: recanting of tranquilizer and barbiturate use was greater than that of marijuana and cocaine use, a difference they suggest is related to the greater complexity of the definitions of the former two substances relative to those of marijuana and cocaine (which, of course, also have some complexity). In alcohol research, recent reviews have found that respondents commonly misinterpret standard drink sizes, suggesting that alcohol intake may be systematically underestimated in survey research [151, 152].
A related concern is the degree to which respondent cultural background may influence the interpretation and/or comprehension of survey questions. Substance use patterns and practices are known to vary cross-culturally [153–155], and those varied experiences and beliefs regarding substance use can also be expected to influence respondent knowledge and familiarity with the topic in general and related terminology in particular. Experienced researchers, of course, recognize the importance of investigating and addressing these potential problems by employing focus groups, cognitive interviews, and ethnographic methods during survey development (cf. Gardner and Tang [156]; Midanik and Hines [157]; Ridolfo [158]; and Thrasher et al. [159]).
Memory Retrieval. The accuracy of respondent recall has been the focus of much attention among methodologists [160, 161] and has been historically considered one of the more common explanations for inaccurate reporting of substance use behaviors [4, 120, 121]. Indeed, when answering survey questions concerned with substance use, the retrieval of the memories necessary to report accurately can be particularly difficult for several reasons. Poorly worded survey questions may present respondents with difficult cognitive challenges in terms of the effort necessary to retrospectively retrieve specific and/or detailed information that may not be readily accessible in memory [81]. There is also evidence that heavy drinking [4, 162], cocaine [163, 164], and MDMA use [165–167] may be associated with impaired memory. Mensch and Kandel [168] have found inconsistent reporting of marijuana use to be associated with degree of drug use frequency, with the more involved users providing less consistent survey responses, a finding they associate with faulty memory. Although considerable research has been invested in experimenting with strategies for aiding respondents with memory retrieval in general [169, 170], few efforts have focused on aiding recall of substance use information. Hubbard [171], however, has reported a series of experiments that used anchoring manipulations to improve respondent recall, although these were not found to be very effective.
Response Editing. Once respondents have successfully interpreted a survey question and retrieved the relevant information necessary to form an answer, they must decide whether that answer is to be accurately shared with the researcher. Given the illicit and sometimes stigmatizing nature of substance use behaviors, conventional wisdom suggests that some respondents will make conscious decisions to underreport, or deny altogether, any such behavior [4]. That survey respondents will sometimes attempt to present themselves in a favorable, albeit not completely accurate, light during survey interviews is well understood and is commonly referred to as social desirability bias. Concerns about the potential effects of social desirability bias have been the subject of considerable research in the survey methodology literature [172–175]. In general, respondents are known to overreport socially desirable behaviors, such as voting [176] and exercise [177], and to underreport socially undesirable behaviors, including drug and alcohol use [178]. Bradburn and Sudman [172] explored and documented the sensitive nature of substance use questions by asking a national sample of respondents in the US how uneasy discussing various potentially sensitive topics would make most people feel. They found that 42.0 percent believed most respondents would be "very uneasy" discussing their use of marijuana, that 31.3 percent believed the same regarding stimulant and depressant use, and that 29.0 percent believed the same regarding intoxication. Only 10.3 percent indicated that they believed most people would be uneasy discussing drinking in general. This survey, though, was conducted more than 30 years ago, and it is unclear to what degree these topics would elicit similar feelings of discomfort today.
Respondents may be uneasy discussing their substance use for several reasons, including the need to avoid the social threat and feelings of shame and embarrassment associated with violating social norms [179, 180]. Reporting illicit substance use may also be viewed by some respondents as a sign of weakness and, hence, something not to disclose [181]. These points are consistent with research findings that indicate that substance use underreporting increases with the perceived stigma of the substance being discussed [182–184]. Respondents may also elect not to admit to substance use behaviors in order to avoid potential legal sanctions, out of fear that a breach of confidentiality might risk their employment or reputation, and/or because they believe that such information is highly personal and not to be shared. Some research suggests that questions about current use of illicit substances are more likely to produce underestimates when confidentiality is less certain, compared to questions concerned with past use [185]. Experimental studies that have compared substance use reporting patterns when provided with assurance of anonymity versus confidentiality have generally found few differences across conditions [186–188].
Some measures of the propensity to provide socially desirable answers have been found to be associated with substance use reporting: greater likelihood of providing socially desirable responses in general is associated with lower likelihood of reporting alcohol and/or drug use behavior [172, 189–191]. These findings have been interpreted alternatively as (a) evidence that underreporting of substance use is a consequence of respondent attempts to conceal illicit behavior or as (b) evidence that persons who engage in socially desirable behaviors in general also report, accurately, that they do not engage in substance use behaviors. Although this question remains unresolved, we note that other research has demonstrated the absence of an association between one measure of social desirability, the Crowne-Marlowe scale [173], and a measure of cocaine use underreporting that was based on comparisons of self-reports with biological assays [192].
The accuracy of self-reports of substance use behaviors may also vary by the race/ethnicity of the respondent. A literature review of 36 published studies conducted in the US found consistent evidence of lower reliability and validity of substance use reporting among racial and ethnic minority populations [193]. More recent studies have reported similar findings [194, 195]. The specific source of these differences, however, is not clearly understood. Models that have been proposed suggest that greater reporting errors among minority groups may be a consequence of differential group educational achievement and question comprehension, greater minority concerns with privacy, discrimination, and risk of prosecution, and/or stronger effects of social desirability pressures on minority groups to report behaviors that conform to majority cultural values. Internationally, cultural differences in normative patterns of alcohol consumption and other substance use may also influence the degree of response editing. In nations where wine is considered part of a meal, rather than a mood-altering substance, underreporting might be expected to be much less of a concern.
One limitation in much of the research reviewed here is the assumption that greater self-reports of substance use behaviors are more valid [196, 197]. Indeed, overreporting is another measurement concern [197, 198]. There have been cases of respondents providing daily alcohol use reports that are physically impossible [4]. In surveys of adolescents, there is also a widespread belief that some respondents overreport their alcohol and other drug use, possibly to impress peers and improve their social status or as part of a general desire for attention [3, 149, 199–202]. Gfroerer and colleagues [21] have speculated that such overreporting of substance use might be more likely to happen during school-based surveys, usually conducted in classroom settings, where peers may be more likely to be aware of respondent answers. It has also been suggested that respondents may in some situations elect to present themselves in a highly negative manner, perhaps for personal amusement or to obtain treatment services [11, 148, 203, 204]. In an effort to identify such overreporters, several investigators have asked respondents about their use of substances that do not exist [205]. It is notable that these studies have found very low self-reported rates of use of these fictitious substances. Petzel et al. [206], for example, found that 4% of their sample of high school students reported use of the nonexistent drug "bindro." They also found that those who reported use of a nonexistent drug also reported more use of all other drugs included in the survey, compared to those who indicated, correctly, that they did not use "bindro." Others have reported similar findings when asking survey respondents about the use of nonexistent substances [202, 207–209]. Of course, it may be that heavy drug users simply assume, incorrectly, that they have used all available substances at one time or another in their past.
Others have questioned whether or not it is correct to assume that all substance users will be hesitant to accurately report on their patterns of use. Wish et al. [210], for example, have suggested that heavy substance users may be less concerned about social and other consequences of reporting such information. Interviews with persons receiving treatment, though, have found little interest in publicly discussing their patterns of use [211].
Concern with the accuracy of substance use reporting has led to a variety of attempts to validate or corroborate survey responses. For example, several panel surveys have demonstrated considerable stability in respondent reporting of substance use over time [22, 212, 213]. Research, however, has also investigated the recanting of drug and alcohol use, which is the tendency of some panel survey respondents to claim no lifetime experience with a given substance when they have previously reported having used it [200]. Recanting has been identified in responses to both alcohol [214] and drug use questions [119–121, 150, 201, 215–220]. Depending on the age group being surveyed (adults versus adolescents), recanting may represent deliberate efforts to deny previously reported activity, exaggerations regarding behaviors that never actually took place, poor comprehension of survey questions during at least one wave of interviews, poor recall of information, or simple carelessness when answering [200, 217]. Research by Martino et al. [221] suggests that recanting is a consequence of both deliberate misreporting and errors in understanding of survey questions. In surveys of adolescents, one possible explanation for recanting is that younger and less mature respondents may be more likely to exaggerate substance use during surveys conducted in classroom settings in which peers might be aware of one another's answers, and that they may then provide more accurate answers during later survey waves as they mature [215]. Interestingly, longitudinal follow-ups with Monitoring the Future survey respondents have found that recanting is greater among adults with occupations that might be expected to strongly sanction the use of illicit substances, such as those associated with the military and law enforcement [150]. Percy et al. [201] have also documented increased recanting among adolescents who had received drug education during the study period, suggesting a potentially biasing effect of education on self-reports. Higher recanting among low level substance users has also been reported [201, 216].
Other research has sought to validate self-reported substance use behavior by comparing those reports to toxicological findings from biospecimens collected at the time interviews are conducted. One of the first such studies, conducted with a community sample in Chicago by Fendrich et al. [222], indicated that recent cocaine and heroin use estimates obtained from hair testing were considerably higher than the self-reports obtained from the same respondents. A follow-up survey found that higher rates of cocaine and heroin use were obtained from drug assays of hair, saliva, and urine samples, compared to self-reports from respondents to a community survey [178]. A higher estimate of marijuana use, though, was derived from self-reports, compared to drug test assays, a finding that was interpreted as evidence of the limitations of hair testing for the detection of marijuana use. Similar findings of underreporting of cocaine and heroin use have also been obtained from general population surveys conducted in Puerto Rico by Colón and colleagues [223, 224] and from a survey of men who have sex with men in Chicago [225]. Another study, conducted as part of the NSDUH, investigated agreement between self-reported use of marijuana and cocaine and urine tests and concluded that "most youths aged 12 to 17 and young adults aged 18 to 25 reported their recent drug use accurately" ([226] page 4). Ledgerwood et al. [195] examined the association between hair testing and self-reported illicit drug use, finding agreement between tests and self-reports to be substantial for marijuana and cocaine, moderate for opiates, and fair for methamphetamines. Other research has employed urinalysis [227] and hair assays [228] to document underreporting of drug use frequency among drug users receiving treatment. While these studies provide valuable insights, it is important to acknowledge that each of these sources of confirmatory biological information is itself an imperfect measure of substance use, suffering from a variety of limitations, including imprecise and variable detection windows, vulnerability to contamination, and individual and race/ethnic group variability in rates of chemical absorption and retention [229, 230].
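The agreement labels used by Ledgerwood et al. ("substantial," "moderate," "fair") correspond to the benchmark ranges conventionally applied to Cohen's kappa. A minimal sketch of that computation, using invented self-report and assay results, is:

```python
# Chance-corrected agreement between self-reports and assay results (hypothetical data).
from sklearn.metrics import cohen_kappa_score

# 1 = positive, 0 = negative; each index is one respondent
self_report = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
hair_assay  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]

kappa = cohen_kappa_score(self_report, hair_assay)
print(f"kappa = {kappa:.2f}")  # 0.62: "substantial" under the usual benchmarks
```

Note that both disagreements in these invented data are assay-positive/self-report-negative, the underreporting pattern described above; a symmetric agreement statistic alone does not distinguish underreporting from overreporting, which is why studies of this kind typically also examine the direction of discordant pairs.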
Another approach to validating self-reports of substance use is to compare information obtained from respondents with that provided by significant others, a strategy that has found good but far from perfect levels of corroboration [202, 208, 231–233]. Parents and children have also been asked to corroborate one another's reports of alcohol use. In a Dutch study, Engels et al. [234] found that both children and parents underestimate one another's alcohol consumption to some extent and that underestimation of adolescent alcohol consumption by parents was related to lack of knowledge and control of their children's activities. An important caveat when employing this approach is that proxy and self-reports generally suffer from the same sources of error [235]. Interestingly, perceptions of untrustworthiness by others have also been found to be associated with drug use recanting among adolescents in a study reported by Weinfurt and Bush [236].
An aggregate level strategy for evaluating self-reports of alcohol use is to compare them with alcohol sales and tax information. A number of studies have taken this approach and have consistently found evidence suggesting that survey self-reports in some cases vastly underestimate total alcohol consumption [237–240]. State-level estimates from self-reports, though, do correlate fairly strongly with estimates from sales/tax data, suggesting sensitivity to variations in substance use behavior [238]. One recent study that compared self-reports of alcohol purchases, rather than self-reported alcohol consumption, found much closer agreement between totals developed from those self-reports and total retail alcohol sales in Sweden [241]. Interestingly, this study also found considerable variability by type of alcohol, with sales of wine far more accurately reported than beer and spirits, suggesting the possibility that social desirability concerns may be at least partially responsible, given that wine is likely viewed as a more socially desirable alcoholic beverage, at least in the Swedish context. Reporting of wine consumption was also found to be more complete in a Canadian study [242].
One strategy designed to provide respondents with greater privacy when speaking with interviewers about highly sensitive questions such as substance use behavior is the randomized response technique, first proposed by Warner [243]. Several studies have documented the usefulness of this procedure among both students and adults. Goodstadt and Grusin [244] found higher drug use reporting for five of six substances among high school students in Ontario. Weissman et al. [245] compared substance use self-reports obtained with and without the use of the randomized response technique during telephone interviews conducted as part of a general household survey in New York City and also found increased reporting for three of four substances when using the randomized response technique. An important drawback noted, though, was that only 52% of those randomly assigned to respond using this technique actually agreed to do so. In contrast, McAuliffe et al. [246] reported no differences in reports of illicit drug use among those responding via the randomized response technique, compared to those answering direct questions. Some limitations of this technique include the challenge of correctly administering it in practice and its ability to provide aggregate estimates only [199].
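Under Warner's original design [243], both the privacy protection and the recoverability of an aggregate estimate are visible in a single line of algebra. Each respondent is directed by a private randomizing device (e.g., a spinner) to answer the sensitive statement with probability $p$ and its negation with probability $1-p$, so the expected proportion of "yes" answers is $\lambda = p\pi + (1-p)(1-\pi)$, where $\pi$ is the true prevalence. Inverting gives the estimator

$$\hat{\pi} = \frac{\hat{\lambda} - (1 - p)}{2p - 1}, \qquad p \neq \tfrac{1}{2}.$$

With $p = 0.7$ and 40 percent observed "yes" answers, for example, $\hat{\pi} = (0.40 - 0.30)/0.40 = 0.25$. Because no individual "yes" can be attributed to the sensitive statement, respondents are protected, but for the same reason only the aggregate estimate is recoverable, which is the limitation noted above.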
The bogus pipeline is another approach that has been employed in attempts to induce more accurate reporting of substance use behavior. This involves the ethically questionable practice of leading respondents to believe that their questionnaire responses will be validated using some alternative means, when in fact the investigator has no intention of doing so. Rather, the implied threat of validating respondent answers is used to exert pressure on respondents to answer more truthfully. In general, however, the use of the bogus pipeline procedure has failed to obtain higher estimates of substance use behavior, at least among adolescents [247–249]. A meta-analysis has confirmed the nonefficacy of the bogus pipeline procedure for improved reporting of alcohol consumption and marijuana use [250]. One subsequent study, by Tourangeau et al. [251], did however demonstrate the effectiveness of the bogus pipeline technique for increasing respondent reporting of sensitive behaviors, including alcohol and illicit drug use. In addition, a special population study has suggested that the bogus pipeline procedure may be successful in improving self-reports under certain conditions. Lowe et al. [252] found that, among pregnant women, those randomly assigned to a bogus pipeline condition were nearly twice as likely to report alcohol consumption when completing a self-administered questionnaire.
Finally, it is highly likely that multiple sources of respondent-related reporting error operate simultaneously. For example, Johnson and Fendrich [253] demonstrated, using latent measures of cognitive processing difficulties constructed from debriefing probes, that social desirability concerns were predictive of both discordant drug use reporting and drug use underreporting, while memory difficulties were predictive of drug use overreporting.
4.2.3. Interviewer Effects
Interviewers can introduce errors by misreading questions, failing to probe answers correctly, not following other elements of standardized survey protocols, and deliberately falsifying survey interviews [254, 255]. Interviewer affiliation with governmental agencies may also influence respondent willingness to report substance use behaviors [256]. Interestingly, and somewhat counterintuitively, interviewers with no prior project-related experience have been found to elicit higher levels of marijuana and cocaine reporting in a national substance use survey [130, 257]. Research by Chromy et al. [258] also finds that more experienced interviewers achieve higher response rates while eliciting fewer reports of substance use, suggesting that they may be more successful in gaining cooperation from nonusers, who might find a survey on this topic less personally salient or interesting; the authors caution, however, that this mechanism does not fully explain the observed differences, which remain unexplained.
Another possible mechanism that may account for interviewer effects involves social distance. It is possible that the social distance between respondents and interviewers may influence respondent willingness to report sensitive behaviors such as substance use. Johnson and colleagues [259] found that adult respondents in a telephone survey regarding substance use treatment needs in Illinois were more likely to report recent and lifetime drug use when respondent-interviewer dyads were characterized as having relatively little social distance. In that study, social distance was measured using a simple count of the number of shared demographic identities (i.e., same gender, same race/ethnicity, similar age, and similar educational attainment). Johnson and colleagues [260] also explored the effects of social distance between race/ethnic groups in a study in which they probed respondents regarding how comfortable or uncomfortable they would feel when interviewed about their alcohol consumption patterns by interviewers from the same and from other cultural groups. When asked how they would feel if interviewed by an interviewer with the same background, large majorities of African American (88.8%), Mexican American (74.7%), Puerto Rican (85.9%), and non-Hispanic white (92.9%) respondents indicated they would feel comfortable. However, when asked how they would feel if the interviewer asking about their alcohol use was from another cultural group, the proportions indicating they would continue to feel comfortable decreased to 60.0% of African Americans and Mexican Americans and 69.4% of Puerto Ricans. Among non-Hispanic whites, though, the proportion indicating they would continue to be comfortable remained very high (89.3%), suggesting group differences in reactions to interviewers of similar versus different race/ethnic backgrounds.
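As a concrete illustration, the shared-identity count used in that study can be sketched in a few lines. The field names, the ten-year age band, and the exact matching rules below are illustrative assumptions rather than the study's actual coding.

```python
# Sketch of a social distance score for a respondent-interviewer dyad,
# counting shared demographic identities. Details are assumptions.

def shared_identities(respondent: dict, interviewer: dict,
                      age_band: int = 10) -> int:
    """Count matches on gender, race/ethnicity, age band, and education
    (0 = maximal social distance, 4 = minimal social distance)."""
    score = 0
    score += respondent["gender"] == interviewer["gender"]
    score += respondent["race_ethnicity"] == interviewer["race_ethnicity"]
    score += abs(respondent["age"] - interviewer["age"]) <= age_band
    score += respondent["education"] == interviewer["education"]
    return score

r = {"gender": "F", "race_ethnicity": "Black", "age": 34, "education": "BA"}
i = {"gender": "F", "race_ethnicity": "White", "age": 39, "education": "BA"}
print(shared_identities(r, i))  # -> 3 (all but race/ethnicity match)
```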
Other research has also examined how similarities and differences in demographic characteristics between interviewers and respondents affect substance use reporting. In studies conducted in Iowa many years ago, female respondents were more likely to report alcohol consumption to male interviewers and, conversely, male respondents were more likely to report alcohol use to female interviewers [261]. Johnson and Parsons [262] found that homeless respondents were more likely to report drug use to male interviewers, a finding they linked to a "likely user" hypothesis: male interviewers may elicit more positive substance use reports because men are perceived as more likely to be substance users themselves and as more tolerant of substance use by others. In contrast, a study conducted by Darrow and colleagues [263] reported that gay males were more likely to report drug use to female interviewers, who were viewed as having greater empathy and sympathy for deviant behavior than male interviewers. In a survey conducted in the Netherlands, Turkish and Moroccan respondents reported higher rates of alcohol use to Dutch interviewers than to ethnically matched interviewers [124]. These researchers hypothesized that minority respondents may have either (a) exaggerated their alcohol consumption to comply with the perceived norms of the person interviewing them or (b) underreported, or denied altogether, their use of alcohol when interviewed by interviewers from an Islamic background, who would have been perceived as holding far less permissive views of alcohol use. This limited evidence does not suggest a clear pattern of effects for any single interviewer characteristic, although it does seem likely that interviewer characteristics matter in many situations.
Interviewer-respondent familiarity may also influence the quality of self-reported substance use behaviors. For example, Mensch and Kandel [168] found that, in the National Longitudinal Survey of Youth panel, marijuana use reporting was lower among respondents who had been interviewed more times previously by the same interviewer, suggesting that familiarity with the interviewer cued respondents to social desirability expectations and thereby depressed their drug use reporting. Ironically, then, the use of experienced survey interviewers, typically considered an important strength of any study, would appear in some circumstances to contribute to lower quality data, at least when interviewers are repeatedly assigned to the same subsets of respondents.
4.2.4. Context Effects
Various aspects of the social and physical environment within which survey data are collected may also influence the quality of the information obtained. One aspect of the social environment that has received attention is the presence or absence of other individuals during the interview, as this is believed to influence the social desirability pressures that respondents perceive. In general, the presence of others during survey interviews is associated with lower reporting of sensitive behaviors, including substance use. In an early study, Wilson [81] noted that average weekly alcohol consumption reported in interviews conducted in the presence of another person was lower than in interviews conducted in private. Similar findings were reported by Edwards et al. [264], but only among males. Several studies of adolescent reporting of alcohol and drug use also found that the presence of a parent during a household interview reduces respondent willingness to report such behaviors [127, 265–268]. In contrast, Hoyt and Chaloupka [127] also reported that the presence of friends during an interview increased substance use reporting, and Aquilino et al. [266] reported that the presence of a spouse or significant other had no effect on reports of alcohol and drug use. It is important to recognize some potential confounding here, though: the respondents most likely to have another person present during an interview are those who are married or have children, characteristics that are themselves associated with less substance use; a sketch of one way to adjust for this follows below.
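The sketch below illustrates this on simulated data, using a conventional adjustment: a logistic regression of reported use on bystander presence, with marital and parental status as covariates. The data-generating assumptions (no true bystander effect; marriage and children predicting both bystander presence and lower use) are hypothetical and chosen only to show the crude association shrinking toward zero once the confounders are controlled.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate hypothetical data in which bystander presence has NO true effect
# on reported use, but marriage and children predict both bystander
# presence and lower substance use, inducing a spurious crude association.
rng = np.random.default_rng(0)
n = 5000
married = rng.binomial(1, 0.5, n)
kids = rng.binomial(1, 0.3 + 0.3 * married, n)
bystander = rng.binomial(1, 0.2 + 0.3 * married + 0.2 * kids, n)
reported_use = rng.binomial(1, 0.35 - 0.15 * married - 0.10 * kids, n)
df = pd.DataFrame({"reported_use": reported_use, "bystander": bystander,
                   "married": married, "kids": kids})

crude = smf.logit("reported_use ~ bystander", data=df).fit(disp=0)
adjusted = smf.logit("reported_use ~ bystander + married + kids",
                     data=df).fit(disp=0)
print(f"Crude bystander coefficient:    {crude.params['bystander']:.2f}")
print(f"Adjusted bystander coefficient: {adjusted.params['bystander']:.2f}")
```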
The physical context within which interviews take place may also influence social desirability pressures and self-report quality. Much of this evidence comes from comparisons of adolescent survey responses when the surveys are completed at home versus in a school setting. In school settings, parental monitoring is likely to be perceived as less of a concern and confidentiality assurances are likely to be more credible. Findings support this supposition, as Brener et al. [132] and others [21, 269–271] have reported that adolescents underreport substance use during household surveys relative to school-based surveys. Needle and colleagues [272] and Zanes and Matsoukas [273], though, did not find differences between the reports obtained from students in school- versus home-based settings.
4.3. Processing Errors
Once data collection is complete, the construction of a final survey data set requires the implementation of numerous coding and editing rules. The integrity of these rules is particularly critical in substance use surveys, as they typically involve assumptions about the reporting intentions and substance use behaviors of respondents. Fendrich and Johnson [274] have documented important differences in the editing assumptions made across national surveys of substance use in the US that can substantially influence the prevalence estimates generated by each.
Investigators also use a variety of techniques to screen completed substance use questionnaires for inclusion in final data files. Farrell and colleagues [207] examined the effects of excluding respondents (1) who provided a large number of inconsistent answers and (2) who reported use of a fictitious substance. The effects of excluding these responses on prevalence estimates were considered to be minimal, although they cautioned that exclusionary criteria should be used carefully in order to avoid producing nonrepresentative results.
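As an illustration, exclusion rules of the kind examined by Farrell and colleagues can be written as a short screening routine. The column names, the inconsistency threshold, and the toy records below are hypothetical.

```python
import pandas as pd

def screen(responses: pd.DataFrame, max_inconsistencies: int = 3) -> pd.DataFrame:
    """Drop respondents who endorse a fictitious substance or who exceed
    an inconsistency threshold; return the retained analytic file."""
    keep = (~responses["used_fictitious_drug"]) & \
           (responses["n_inconsistent_answers"] <= max_inconsistencies)
    return responses[keep]

raw = pd.DataFrame({
    "used_fictitious_drug": [False, True, False],
    "n_inconsistent_answers": [1, 0, 7],
})
print(len(screen(raw)))  # -> 1; the second and third records are excluded
```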
A past report by the US General Accounting Office [6] also identified imputation problems in the National Household Survey on Drug Abuse: the estimated number of past-year heroin users in the US ranged from 232,000 to 701,000, depending on whether missing data imputation procedures were used. The same report indicated that sample weights used to construct subgroup estimates of the total number of illicit drug users were in some instances based on extremely small numbers of individuals in particular weighting cells who reported current drug use. In one case from the 1991 NHSDA, a single 79-year-old woman was projected to represent approximately 142,000 persons believed to have used heroin during the previous year. In such instances, a single erroneous data entry can have dramatic effects on overall survey estimates.
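The fragility of such estimates is easy to demonstrate: a weighted total is simply the sum of the design weights of the respondents reporting use, so a single extreme weight can dominate. The weights below are hypothetical, apart from echoing the roughly 142,000-person projection cited above.

```python
# Weighted total = sum of design weights over respondents reporting use.
weights = [950, 1_200, 142_000]   # hypothetical weights of past-year reporters
print(f"Projected past-year users: {sum(weights):,}")        # 144,150

# If the extreme case were a data-entry error, the estimate collapses:
print(f"Without the outlier:       {sum(weights[:-1]):,}")   # 2,150
```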
4.4. Inferential Errors
Inferential errors can be avoided by ensuring that the survey questions being employed and the respondents being sampled are representative of the constructs and populations to which the researcher plans to make inferences. To the degree that either the measures or the sample fail to represent their intended objects, inferential errors will result. Avoiding inferential errors also entails employing sound research designs and appropriate analytic procedures. Experimental findings provide the strongest evidence for internal validity, and representative samples provide the strongest evidence for external validity. When research designs deviate from these ideals, or measures do not adequately assess the constructs of interest, there is a risk of inferential errors that will limit the generalizability of empirical findings. In substance use research, errors of inference can be of several types. Some are a consequence of erroneously concluding that associations between constructs do not exist, due to poor measures and/or research designs. Others involve falsely concluding that associations exist between constructs when they in fact do not, likewise as a consequence of inadequate designs and/or measures. The failure to properly adjust a high quality substance use survey for its stratified sample design, discussed earlier in Section 3.4, is an example of an adjustment error that led to a serious inferential error, when investigators erroneously concluded that a large fraction of all alcohol sales in the US were being made to underage minors; the sketch below shows in miniature how such a design-weighting failure distorts an estimate.
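As a miniature, hypothetical version of that failure, suppose adolescents make up 10% of the population but, by design, 50% of the sample. Treating the unweighted sample as if it mirrored the population inflates the adolescent share of total reported drinks; all figures below are invented for illustration.

```python
# Ignoring design weights treats the sample composition as the population
# composition, inflating the oversampled group's apparent share of drinks.
mean_drinks = {"adolescents": 2.0, "adults": 6.0}     # hypothetical weekly means
sample_share = {"adolescents": 0.50, "adults": 0.50}  # oversampled by design
pop_share = {"adolescents": 0.10, "adults": 0.90}     # true population shares

def adolescent_share(shares: dict) -> float:
    """Share of total drinks attributed to adolescents under given shares."""
    total = sum(shares[g] * mean_drinks[g] for g in shares)
    return shares["adolescents"] * mean_drinks["adolescents"] / total

print(f"Unweighted (sample) share:   {adolescent_share(sample_share):.0%}")  # 25%
print(f"Weighted (population) share: {adolescent_share(pop_share):.0%}")     # 4%
```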
5. Discussion
Over several decades, considerable knowledge has been accumulated regarding sources of error in the survey assessment of substance use behaviors. Important gaps remain, however, and continued research will be necessary. Below, I highlight some of the most important questions that I see as relevant to each source of survey error considered in this paper.
Regarding coverage errors, the challenge of constructing representative sample frames for both adolescents and adults continues to grow as electronic communications platforms further diversify. This is a general problem afflicting all survey research, but one that can be particularly problematic for substance use research, given the associations between these behaviors and the likelihood of being covered by many potential sample frames. Identifying supplemental frames that might provide better coverage of heavy substance users, and that could be combined with more traditional frames using appropriate weights when conducting population surveys, should be considered; one standard way of combining frames is sketched below.
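One established device for such combinations, offered here only as a sketch of the general idea rather than a recommendation for any particular survey, is a dual-frame composite estimator of the kind introduced by Hartley. Partitioning the population into a domain $a$ covered only by frame A, a domain $b$ covered only by frame B, and an overlap domain $ab$, a population total $Y$ is estimated as

$$\hat{Y} = \hat{Y}_a + \theta\,\hat{Y}_{ab}^{A} + (1-\theta)\,\hat{Y}_{ab}^{B} + \hat{Y}_b, \qquad 0 \le \theta \le 1,$$

where $\hat{Y}_{ab}^{A}$ and $\hat{Y}_{ab}^{B}$ estimate the overlap-domain total from each frame and the mixing parameter $\theta$ is chosen to minimize variance.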
When survey estimates are reported, sampling errors, in the form of standard errors or confidence intervals, are commonly included. Although reporting these errors is essential to survey transparency, the reported figures rest on strong assumptions that are seldom met in practice. Most importantly, they assume the absence of all other sources of survey error. Because that assumption rarely holds, reporting sampling errors alone can leave survey consumers with a false sense of the precision of survey estimates, as sampling error may be completely overwhelmed by measurement and/or nonresponse errors, as the decomposition below makes explicit. Understanding how sampling errors in substance use surveys are influenced by other sources of survey error thus seems an important research question for the future.
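In total survey error terms, a reported confidence interval reflects only the variance component of an estimate's mean squared error,

$$\mathrm{MSE}(\hat{p}) = \mathrm{Var}(\hat{p}) + \mathrm{Bias}(\hat{p})^{2},$$

while the bias component, driven by coverage, nonresponse, and measurement errors, goes unreported. As a hypothetical illustration, a prevalence estimate with a standard error of 0.5 percentage points but a 3-point underreporting bias has a root mean squared error of $\sqrt{0.5^{2}+3^{2}} \approx 3.04$ points, roughly six times the reported sampling error.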
Nonresponse errors seem to be another permanent concern that substance use surveys will need to continually address. Of course, the degree to which nonresponse may bias survey findings will vary from topic to topic and question to question. Given the strong associations detected between substance use and nonresponse patterns, it appears that this error source is also particularly relevant for surveys on this topic. An important issue for additional research is the relative usefulness for substance use surveys of the various nonresponse bias analytic strategies reviewed earlier in this paper. Similarly, research into the relative efficacy of various types of adjustments for nonresponse and other forms of error in substance use surveys would seem to be an important future research topic.
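The standard deterministic decomposition makes the first point concrete. Writing $\bar{Y}_r$ and $\bar{Y}_m$ for the true means among the $R$ respondents and $M$ nonrespondents in a population of size $N = R + M$, the bias of the respondent mean $\bar{y}_r$ is

$$\mathrm{bias}(\bar{y}_r) = \frac{M}{N}\left(\bar{Y}_r - \bar{Y}_m\right),$$

so, as a hypothetical example, a survey with 40% nonresponse in which true prevalence runs 10 percentage points higher among nonrespondents than respondents will understate prevalence by 4 points, regardless of sample size.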
In general, there has been little research into specification errors in substance use surveys. This is an oversight, given general acknowledgment that researchers and potential respondents do not always have a shared understanding of the behaviors being examined. Development of strategies for identifying and investigating potential errors of specification is another research topic in need of attention.
It is my personal opinion that the multiple sources of measurement errors reviewed earlier in this paper pose the greatest threat to the accurate assessment of substance use behaviors. There are several practical questions that remain unresolved, such as the predictive power of social desirability measures, the reasons why experienced interviewers appear to obtain fewer reports of substance use behaviors, and the degree to which adolescents might actually overreport their use of alcohol and/or other drugs. Perhaps even more important, how these widely diverse sets of measurement errors interact with one another is poorly understood and remains largely unexamined. Evaluation of how various sources of measurement errors in substance use surveys interact together to influence survey estimates should be a priority for future research.
In terms of processing errors, surveys concerned with substance use would appear on the surface to be no more vulnerable than other types of survey research. Yet the complexity of most substance use questionnaires, combined with greater item nonresponse in many instances, likely poses greater risks of processing errors linked to complex editing rules and assumptions. A reasonable rule of thumb is that the likelihood of processing errors is inversely associated with the amount of documentation provided with a survey, as careful documentation is an important indicator of quality research. Continued research into the veracity of data editing decision rules, particularly for handling missing data and/or inconsistent self-reports in substance use surveys, would certainly be welcome.
As with all other sources of survey related error, inferential errors are not unique to substance use surveys. They are in general a product of poor study design and execution that can seriously limit the value of otherwise commendable efforts. A key to addressing potential inferential errors in research is replication. Study findings take on additional credibility and are accorded stronger inference to the degree that they can be replicated in subsequent investigations. Substance use researchers should seek opportunities to replicate findings from other researchers when conducting their own original studies. And journal editors can provide additional service to science by finding ways to make space available for publishing replication studies that are essential to addressing problems of inferential errors that may otherwise go undetected.
It is important to note that the review presented in this paper was not based on a systematic database search. Rather, it is based on the author's personal familiarity with and experience working with this literature over the past several decades. This should be recognized as a limitation.
Finally, it is strongly recommended that substance use researchers who plan to employ survey methods recognize and report on their efforts to address each of the potential sources of survey-related error discussed in this paper. Developing strategies to systematically and rigorously confront each source of error, and transparently sharing one's successes and failures, remains the best approach to minimizing their effects when using survey methods to investigate substance use patterns and behaviors.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
References
- 1. Hochhauser M. Bias in drug abuse survey research. International Journal of the Addictions. 1979;14(5):675–687. doi: 10.3109/10826087909041899.
- 2. Merachnik D. Why initiate a drug survey? In: Einstein S., Allen S., editors. Proceedings of the 1st International Conference on Student Drug Surveys; 1972; Farmingdale, NY, USA. Baywood.
- 3. Midanik L. The validity of self-reported alcohol consumption and alcohol problems: a literature review. British Journal of Addiction. 1982;77(4):357–382. doi: 10.1111/j.1360-0443.1982.tb02469.x.
- 4. Pernanen K. Validity of survey data on alcohol use. In: Gibbins R., Israel Y., Kalant H., Popham R., Schmidt W., Smart R., editors. Research Advances in Alcohol and Drug Problems. Vol. 1. New York, NY, USA: John Wiley & Sons; 1974. pp. 355–374.
- 5. Popham R. E., Schmidt W. Words and deeds: the validity of self-report data on alcohol consumption. Journal of Studies on Alcohol. 1981;42(3):355–358. doi: 10.15288/jsa.1981.42.355.
- 6. United States General Accounting Office. Drug use measurement: strengths, limitations, and recommendations for improvement. GAO/PEMD-93-18. Washington, DC, USA: Program Evaluation and Methodology Division; 1993.
- 7. Weisner C., Schmidt L., Tam T. Assessing bias in community-based prevalence estimates: towards an unduplicated count of problem drinkers and drug users. Addiction. 1995;90(3):391–405. doi: 10.1111/j.1360-0443.1995.tb03786.x.
- 8. Groves R. M. Survey Errors and Survey Costs. New York, NY, USA: John Wiley & Sons; 1989.
- 9. Lavrakas P. J. Applying a total error perspective for improving research quality in the social, behavioral, and marketing sciences. Public Opinion Quarterly. 2013;77:831–850.
- 10. Crawford A. Bias in a survey of drinking habits. Alcohol and Alcoholism. 1987;22(2):167–179.
- 11. De Lint J. 'Words and deeds': responses to Popham and Schmidt. Journal of Studies on Alcohol. 1981;42(3):359–361. doi: 10.15288/jsa.1981.42.355.
- 12. Reardon M. L., Burns A. B., Preist R., Sachs-Ericsson N., Lang A. R. Alcohol use and other psychiatric disorders in the formerly homeless and never homeless: prevalence, age of onset, comorbidity, temporal sequencing, and service utilization. Substance Use & Misuse. 2003;38(3-6):601–644. doi: 10.1081/JA-120017387.
- 13. Delnevo C. D., Gundersen D. A., Hagman B. T. Declining estimated prevalence of alcohol drinking and smoking among young adults nationally: artifacts of sample undercoverage? American Journal of Epidemiology. 2008;167(1):15–19. doi: 10.1093/aje/kwm313.
- 14. Blumberg S. J., Luke J. V., Cynamon M. L. Telephone coverage and health survey estimates: evaluating the need for concern about wireless substitution. American Journal of Public Health. 2006;96(5):926–931. doi: 10.2105/AJPH.2004.057885.
- 15. Blumberg S. J., Luke J. V. Reevaluating the need for concern regarding noncoverage bias in landline surveys. American Journal of Public Health. 2009;99(10):1806–1810. doi: 10.2105/AJPH.2008.152835.
- 16. Livingston M., Dietze P., Ferris J., Pennay D., Hayes L., Lenton S. Surveying alcohol and other drug use through telephone sampling: a comparison of landline and mobile phone samples. BMC Medical Research Methodology. 2013;13(1, article 41). doi: 10.1186/1471-2288-13-41.
- 17. Hu S. S., Balluz L., Battaglia M. P., Frankel M. R. Improving public health surveillance using a dual-frame survey of landline and cell phone numbers. American Journal of Epidemiology. 2011;173(6):703–711. doi: 10.1093/aje/kwq442.
- 18. Blumberg S. J., Luke J. V. Wireless Substitution: Early Release of Estimates from the National Health Interview Survey, July–December 2012. National Center for Health Statistics; 2013. http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201306.pdf.
- 19. Chavez E. L., Edwards R., Oetting E. R. Mexican American and white American school dropouts' drug use, health status, and involvement in violence. Public Health Reports. 1989;104(6):594–604.
- 20. Swaim R. C., Beauvais F., Chavez E. L., Oetting E. R. The effect of school dropout rates on estimates of adolescent substance use among three racial/ethnic groups. American Journal of Public Health. 1997;87(1):51–55. doi: 10.2105/AJPH.87.1.51.
- 21. Gfroerer J., Bose J., Kroutil L., Lopez M., Kann L. Methodological considerations in estimating adolescent substance use. Proceedings of the Joint Statistical Meetings, Section on Survey Research Methods; 2012; pp. 4127–4140.
- 22. Bachman J. G., Johnston L. D., O'Malley P. M. Smoking, drinking, and drug use among American high school students: correlates and trends, 1975–1979. American Journal of Public Health. 1981;71(1):59–69. doi: 10.2105/ajph.71.1.59.
- 23. Cowan C. D. Coverage, sample design, and weighting in three federal surveys. Journal of Drug Issues. 2001;31(3):599–614.
- 24. Guttmacher S., Weitzman B. C., Kapadia F., Weinberg S. L. Classroom-based surveys of adolescent risk-taking behaviors: reducing the bias of absenteeism. American Journal of Public Health. 2002;92(2):235–237. doi: 10.2105/AJPH.92.2.235.
- 25. Kandel D. Reaching the hard-to-reach: illicit drug use among high school absentees. Addictive Diseases. 1975;1(4):465–480.
- 26. Heckathorn D. D. Respondent-driven sampling: a new approach to the study of hidden populations. Social Problems. 1997;44(2):174–199. doi: 10.2307/3096941.
- 27. Heckathorn D. D. Respondent-driven sampling II: deriving valid population estimates from chain-referral samples of hidden populations. Social Problems. 2002;49(1):11–34. doi: 10.1525/sp.2002.49.1.11.
- 28. Bauermeister J. A., Zimmerman M. A., Johns M. M., Glowacki P., Stoddard S., Volz E. Innovative recruitment using online networks: lessons learned from an online study of alcohol and other drug use utilizing a web-based, respondent-driven sampling (webRDS) strategy. Journal of Studies on Alcohol and Drugs. 2012;73(5):834–838. doi: 10.15288/jsad.2012.73.834.
- 29. McKnight C., des Jarlais D., Bramson H., et al. Respondent-driven sampling in a study of drug users in New York City: notes from the field. Journal of Urban Health. 2006;83(supplement 1):54–59. doi: 10.1007/s11524-006-9102-1.
- 30. Ramirez-Valles J., Kuhns L. M., Campbell R. T., Diaz R. M. Social integration and health: community involvement, stigmatized identities, and sexual risk in Latino sexual minorities. Journal of Health and Social Behavior. 2010;51(1):30–47. doi: 10.1177/0022146509361176.
- 31. Clatts M. C., Goldsamt L. A., Yi H. Club drug use among young men who have sex with men in NYC: a preliminary epidemiological profile. Substance Use and Misuse. 2005;40(9-10):1317–1330. doi: 10.1081/JA-200066898.
- 32. Halkitis P. N., Fischgrund B. N., Parsons J. T. Explanations for methamphetamine use among gay and bisexual men in New York City. Substance Use & Misuse. 2005;40(9-10):1331–1345. doi: 10.1081/JA-200066900.
- 33. Kassira E., Swetz A., Bauserman R., Tomoyasu N., Caldeira E., Solomon L. HIV and AIDS surveillance among inmates in Maryland prisons. Journal of Urban Health. 2001;78(2):256–263. doi: 10.1093/jurban/78.2.256.
- 34. Safika I., Johnson T. P., Levy J. A. A venue analysis of predictors of alcohol use prior to sexual intercourse among female sex workers in Senggigi, Indonesia. International Journal of Drug Policy. 2011;22(1):49–55. doi: 10.1016/j.drugpo.2010.09.003.
- 35. Kaplan C. D., Korf D., Sterk C. Temporal and social contexts of heroin-using populations: an illustration of the snowball sampling technique. The Journal of Nervous and Mental Disease. 1987;175(9):566–574. doi: 10.1097/00005053-198709000-00009.
- 36. Sharma A. K., Aggarwal O. P., Dubey K. K. Sexual behavior of drug-users: is it different? Preventive Medicine. 2002;34(5):512–515. doi: 10.1006/pmed.2002.1010.
- 37. Fernández M. I., Bowen G. S., Varga L. M., et al. High rates of club drug use and risky sexual practices among Hispanic men who have sex with men in Miami, Florida. Substance Use and Misuse. 2005;40(9-10):1347–1581. doi: 10.1081/JA-200066904.
- 38. Muhib F. B., Lin L. S., Stueve A., et al. A venue-based method for sampling hard-to-reach populations. Public Health Reports. 2001;116(1):216–222. doi: 10.1093/phr/116.S1.216.
- 39. Valleroy L. A., Mackellar D. A., Karon J. M., et al. HIV prevalence and associated risks in young men who have sex with men. Journal of the American Medical Association. 2000;284(2):198–204. doi: 10.1001/jama.284.2.198.
- 40. Barrett S. P., Gross S. R., Garand I., Pihl R. O. Patterns of simultaneous polysubstance use in Canadian rave attendees. Substance Use & Misuse. 2005;40(9-10):1525–1537. doi: 10.1081/JA-200066866.
- 41. Levy K. B., O'Grady K. E., Wish E. D., Arria A. M. An in-depth qualitative examination of the ecstasy experience: results of a focus group with ecstasy-using college students. Substance Use and Misuse. 2005;40(9-10):1427–1441. doi: 10.1081/JA-200066810.
- 42. McElrath K. MDMA and sexual behavior: ecstasy users' perceptions about sexuality and sexual risk. Substance Use and Misuse. 2005;40(9-10):1461–1585. doi: 10.1081/JA-200066814.
- 43. Groves R. M., Couper M. P. Nonresponse in Household Surveys. New York, NY, USA: John Wiley & Sons; 1998.
- 44. Groves R. M., Dillman D. A., Eltinge J. L., Little R. J. A. Survey Nonresponse. New York, NY, USA: John Wiley & Sons; 2002.
- 45. Steeh C., Kirgis N., Cannon B., DeWitt J. Are they really as bad as they seem? Nonresponse rates at the end of the 20th century. Journal of Official Statistics. 2001;17:227–247.
- 46. Johnson T. P., Owens L. Survey response rate reporting in the professional literature. Proceedings of the Section on Survey Research Methods; 2004; Alexandria, Va, USA. American Statistical Association; pp. 127–133.
- 47. Groves R. M., Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opinion Quarterly. 2008;72(2):167–189. doi: 10.1093/poq/nfn011.
- 48. Keeter S., Miller C., Kohut A., Groves R. M., Presser S. Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly. 2000;64(2):125–148. doi: 10.1086/317759.
- 49. Merkle D., Edelman M. Nonresponse in exit polls: a comprehensive analysis. In: Groves R. M., Dillman D. A., Eltinge J. L., Little R. J. A., editors. Survey Nonresponse. New York, NY, USA: John Wiley & Sons; 2002. pp. 243–258.
- 50. Plant M. A., Chick J., Kreitman N. The effects of response rates on levels of self-reported alcohol consumption and alcohol-related problems: conclusions from a Scottish study. British Journal on Alcohol and Alcoholism. 1980;15(4):158–163.
- 51. Cottler L. B., Zipp J. F., Robins L. N., Spitznagel E. L. Difficult-to-recruit respondents and their effect on prevalence estimates in an epidemiologic survey. American Journal of Epidemiology. 1987;125(2):329–339. doi: 10.1093/oxfordjournals.aje.a114534.
- 52. Tibblin G. A population study of 50-year-old men: an analysis of the non-participation group. Acta Medica Scandinavica. 1965;178(4):453–459. doi: 10.1111/j.0954-6820.1965.tb04290.x.
- 53. Cohen G., Duffy J. C. Are nonrespondents to health surveys less healthy than respondents? Journal of Official Statistics. 2002;18(1):13–23.
- 54. Hoeymans N., Feskens E. J. M., van den Bos G. A. M., Kromhout D. Non-response bias in a study of cardiovascular diseases, functional status and self-rated health among elderly men. Age and Ageing. 1998;27(1):35–40. doi: 10.1093/ageing/27.1.35.
- 55. Romelsjö A. The relationship between alcohol consumption and social status in Stockholm: has the social pattern of alcohol consumption changed? International Journal of Epidemiology. 1989;18(4):842–851. doi: 10.1093/ije/18.4.842.
- 56. Lahaut V. M. H. C. J., Jansen H. A. M., van de Mheen D., Garretsen H. F. L. Non-response bias in a sample survey on alcohol consumption. Alcohol and Alcoholism. 2002;37(3):256–260. doi: 10.1093/alcalc/37.3.256.
- 57. Caspar R. A. Follow-up of non-respondents in 1990. In: Turner C., Lessler J., Gfroerer J., editors. Survey Measurement of Drug Use: Methodological Studies. Rockville, Md, USA: National Institute on Drug Abuse; 1992. pp. 155–173. (DHHS Pub. No. ADM 92-1929).
- 58. Gmel G. The effect of mode of data collection and of non-response on reported alcohol consumption: a split-sample study in Switzerland. Addiction. 2000;95(1):123–134. doi: 10.1046/j.1360-0443.2000.95112313.x.
- 59. Iversen L., Klausen H. Alcohol consumption among laid-off workers before and after closure of a Danish shipyard: a 2-year follow-up study. Social Science and Medicine. 1986;22(1):107–109. doi: 10.1016/0277-9536(86)90314-X.
- 60. Lemmens P. H. H. M., Tan E. S., Knibbe R. A. Bias due to non-response in a Dutch survey on alcohol consumption. British Journal of Addiction. 1988;83(9):1069–1077. doi: 10.1111/j.1360-0443.1988.tb00534.x.
- 61. Kish L. Survey Sampling. New York, NY, USA: John Wiley & Sons; 1965.
- 62. Zhao J., Stockwell T., Macdonald S. Non-response bias in alcohol and drug population surveys. Drug and Alcohol Review. 2009;28(6):648–657. doi: 10.1111/j.1465-3362.2009.00077.x.
- 63. Plant M. A., Miller T. I. Disguised and undisguised questionnaires compared: two alternative approaches to drinking behaviour surveys. Social Psychiatry. 1977;12(1):21–24. doi: 10.1007/BF00578978.
- 64. Bailey S. L., Flewelling R. L., Rachal J. V. The characterization of inconsistencies in self-reports of alcohol and marijuana use in a longitudinal study of adolescents. Journal of Studies on Alcohol. 1992;53(6):636–647. doi: 10.15288/jsa.1992.53.636.
- 65. Beard C. M., Lane A. W., O'Fallon W. M., Riggs B. L., Melton L. J., III. Comparison of respondents and nonrespondents in an osteoporosis study. Annals of Epidemiology. 1994;4(5):398–403. doi: 10.1016/1047-2797(94)90075-2.
- 66. Bucholz K. K., Shayka J. J., Marion S. L., Lewis C. E., Pribor E. F., Rubio D. M. Is a history of alcohol problems or of psychiatric disorder associated with attrition at 11-year follow-up? Annals of Epidemiology. 1996;6(3):228–234. doi: 10.1016/1047-2797(96)00002-6.
- 67. Caetano R., Ramisetty-Mikler S., McGrath C. Characteristics of non-respondents in a US national longitudinal survey on drinking and intimate partner violence. Addiction. 2003;98(6):791–797. doi: 10.1046/j.1360-0443.2003.00407.x.
- 68. Goldberg M., Chastang J. F., Zins M., Niedhammer I., Leclerc A. Health problems were the strongest predictors of attrition during follow-up of the GAZEL cohort. Journal of Clinical Epidemiology. 2006;59(11):1213–1221. doi: 10.1016/j.jclinepi.2006.02.020.
- 69. Hansen W. B., Collins L. M., Malotte C. K., Johnson C. A., Fielding J. E. Attrition in prevention research. Journal of Behavioral Medicine. 1985;8(3):261–275. doi: 10.1007/BF00870313.
- 70. McCoy T. P., Ip E. H., Blocker J. N., et al. Attrition bias in a U.S. internet survey of alcohol use among college freshmen. Journal of Studies on Alcohol and Drugs. 2009;70(4):606–614. doi: 10.15288/jsad.2009.70.606.
- 71. Paschall M. J., Freisthler B. Does heavy drinking affect academic performance in college? Findings from a prospective study of high achievers. Journal of Studies on Alcohol. 2003;64(4):515–519. doi: 10.15288/jsa.2003.64.515.
- 72. Snow D. L., Tebes J. K., Arthur M. W. Panel attrition and external validity in adolescent substance use research. Journal of Consulting and Clinical Psychology. 1992;60(5):804–807. doi: 10.1037//0022-006X.60.5.804.
- 73. Wild T. C., Cunningham J., Adlaf E. Nonresponse in a follow-up to a representative telephone survey of adult drinkers. Journal of Studies on Alcohol. 2001;62(2):257–261. doi: 10.15288/jsa.2001.62.257.
- 74. Garcia M., Fernandez E., Schiaffino A., Borrell C., Marti M., Borras J. M. Attrition in a population-based cohort eight years after baseline interview: the Cornella Health Interview Survey Follow-up (CHIS.FU) Study. Annals of Epidemiology. 2005;15(2):98–104. doi: 10.1016/j.annepidem.2004.06.002.
- 75. Thygesen L. C., Johansen C., Keiding N., Giovannucci E., Grønbæk M. Effects of sample attrition in a longitudinal study of the association between alcohol intake and all-cause mortality. Addiction. 2008;103(7):1149–1159. doi: 10.1111/j.1360-0443.2008.02241.x.
- 76. Psaty B. M., Cheadle A., Koepsell T. D., et al. Race- and ethnicity-specific characteristics of participants lost to follow-up in a telephone cohort. American Journal of Epidemiology. 1994;140(2):161–171. doi: 10.1093/oxfordjournals.aje.a117226.
- 77. Crawford A. A comparison of participants and non-participants from a British general population survey of alcohol drinking practices. Journal of the Market Research Society. 1986;28:291–297.
- 78. Hill A., Roberts J., Ewings P., Gunnell D. Non-response bias in a lifestyle survey. Journal of Public Health Medicine. 1997;19(2):203–207. doi: 10.1093/oxfordjournals.pubmed.a024610.
- 79. Lahaut V. M. H. C. J., Jansen H. A. M., van de Mheen D., Garretsen H. F. L., Verdurmen J. E. E., van Dijk A. Estimating non-response bias in a survey on alcohol consumption: comparison of response waves. Alcohol & Alcoholism. 2003;38(2):128–134. doi: 10.1093/alcalc/agg044.
- 80. Trinkoff A. M., Storr C. L. Collecting substance use data with an anonymous mailed survey. Drug and Alcohol Dependence. 1997;48(1):1–8. doi: 10.1016/S0376-8716(97)00095-1.
- 81. Wilson P. Improving the methodology of drinking surveys. The Statistician. 1981;30(3):159–167.
- 82. Lin I.-F., Schaeffer N. C. Using survey participants to estimate the impact of nonparticipation. Public Opinion Quarterly. 1995;59(2):236–258. doi: 10.1086/269471.
- 83. Gfroerer J., Lessler J., Parsley T. Studies of nonresponse and measurement error in the National Household Survey on Drug Abuse. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 273–295. (NIDA Research Monograph 167, NIH Publication No. 97-4147).
- 84. Owens L., Johnson T. P., O'Rourke D. Culture and item nonresponse in health surveys. In: Cynamon M. L., Kulka R. A., editors. Proceedings of the 7th Conference on Health Survey Research Methods; 2001; Hyattsville, Md, USA. pp. 69–74.
- 85. Witt M., Pantula J., Folsom R., Cox C. Item nonresponse in 1988. In: Turner C. F., Lessler J. T., Gfroerer J. C., editors. Survey Measurement of Drug Use: Methodological Studies. Rockville, Md, USA: National Institute on Drug Abuse; 1988. pp. 85–108.
- 86. Aquilino W. S. Telephone versus face-to-face interviewing for household drug use surveys. International Journal of the Addictions. 1992;27(1):71–91. doi: 10.3109/10826089109063463.
- 87. Stueve A., O'Donnell L. N. Item nonresponse to questions about sex, substance use, and school: results from the Reach for Health Study of African American and Hispanic young adolescents. In: Bancroft J. H. I., editor. Researching Sexual Behavior: Methodological Issues. Bloomington, Ind, USA: Indiana University Press; 1997. pp. 376–389.
- 88. Lewin T. Teenage drinking a problem but not in way study found. New York Times. 2002.
- 89. Lohr S. L. Sampling: Design and Analysis. 2nd ed. Boston, Mass, USA: Brooks/Cole; 2010.
- 90. Hubbard M., Pantula J., Lessler J. Effects of decomposition of complex concepts. In: Turner C. F., Lessler J. T., Gfroerer J. C., editors. Survey Measurement of Drug Use. Rockville, Md, USA: National Institute on Drug Abuse; 1992. pp. 245–266.
- 91. Ouelett L. J., Cagle H. H., Fisher D. G. "Crack" versus "rock" cocaine: the importance of local nomenclature in drug research and education. Contemporary Drug Problems. 1997;24:219–237.
- 92. Greenfield T. K., Kerr W. C. Alcohol measurement methodology in epidemiology: recent advances and opportunities. Addiction. 2008;103(7):1082–1099. doi: 10.1111/j.1360-0443.2008.02197.x.
- 93. Fowler F. J., Stringfellow V. L. Learning from experience: estimating teen use of alcohol, cigarettes, and marijuana from three survey protocols. Journal of Drug Issues. 2001;31(3):643–664.
- 94. Kroutil L. A., Vorburger M., Aldworth J., Colliver J. D. Estimated drug use based on direct questioning and open-ended questions: responses in the 2006 National Survey on Drug Use and Health. International Journal of Methods in Psychiatric Research. 2010;19(2):74–87. doi: 10.1002/mpr.302.
- 95. Rehm J., Greenfield T. K., Walsh G., Xie X., Robson L., Single E. Assessment methods for alcohol consumption, prevalence of high risk drinking and harm: a sensitivity analysis. International Journal of Epidemiology. 1999;28(2):219–224. doi: 10.1093/ije/28.2.219.
- 96. Hilton M. E. A comparison of a prospective diary and two summary recall techniques for recording alcohol consumption. British Journal of Addiction. 1989;84(9):1085–1092. doi: 10.1111/j.1360-0443.1989.tb00792.x.
- 97. Dawson D. A. Volume of ethanol consumption: effects of different approaches to measurement. Journal of Studies on Alcohol. 1998;59(2):191–197. doi: 10.15288/jsa.1998.59.191.
- 98. Sobell L. C., Sobell M. B. Alcohol consumption measures. In: Allen V. B., Wilson J. P., editors. Assessing Alcohol Problems: A Guide for Clinicians and Researchers. 2nd ed. Bethesda, Md, USA: National Institute on Alcohol Abuse and Alcoholism; 2003. pp. 75–99. (NIH Pub. No. 03-3745).
- 99. Midanik L. T. Comparing usual quantity/frequency and graduated frequency scales to assess yearly alcohol consumption: results from the 1990 US national alcohol survey. Addiction. 1994;89(4):407–412. doi: 10.1111/j.1360-0443.1994.tb00914.x.
- 100. Poikolainen K., Podkletnova I., Alho H. Accuracy of quantity-frequency and graduated frequency questionnaires in measuring alcohol intake: comparison with daily diary and commonly used laboratory markers. Alcohol & Alcoholism. 2002;37(6):573–576. doi: 10.1093/alcalc/37.6.573.
- 101. Bloomfield K., Hope A., Kraus L. Alcohol survey measures for Europe: a literature review. Drugs: Education, Prevention and Policy. 2013;20(5):348–360. doi: 10.3109/09687637.2011.642906.
- 102. Stockwell T., Zhao J., Chikritzhs T., Greenfield T. K. What did you drink yesterday? Public health relevance of a recent recall method used in the 2004 Australian National Drug Strategy Household Survey. Addiction. 2008;103(6):919–928. doi: 10.1111/j.1360-0443.2008.02219.x.
- 103. Poikolainen K., Kärkkäinen P. Diary gives more accurate information about alcohol consumption than questionnaire. Drug and Alcohol Dependence. 1983;11(2):209–216. doi: 10.1016/0376-8716(83)90080-7.
- 104. Schwarz N. Self-reports: how the questions shape the answers. American Psychologist. 1999;54(2):93–105. doi: 10.1037/0003-066X.54.2.93.
- 105. Poikolainen K., Kärkkäinen P. Nature of questionnaire options affects estimates of alcohol intake. Journal of Studies on Alcohol. 1985;46(3):219–222. doi: 10.15288/jsa.1985.46.219.
- 106. Sobell L. C., Cellucci T., Nirenberg T. D., Sobell M. B. Do quantity-frequency data underestimate drinking-related health risks? American Journal of Public Health. 1982;72(8):823–828. doi: 10.2105/AJPH.72.8.823.
- 107. Hasin D., Carpenter K. M. Difficulties with questions on usual drinking and the measurement of alcohol consumption. Alcoholism: Clinical and Experimental Research. 1998;22(3):580–584. doi: 10.1111/j.1530-0277.1998.tb04296.x.
- 108. Tourangeau R. Remembering what happened: memory errors and survey reports. In: Stone A. A., Turkkan J. S., Bachrach C. A., Jobe J. B., Kurtzman H. S., Cain V. S., editors. The Science of Self Report: Implications for Research and Practice. Mahwah, NJ, USA: Lawrence Erlbaum Associates; 2000.
- 109. Bachman J. G., O'Malley P. M. When four months equal a year: inconsistencies in student reports of drug use. Public Opinion Quarterly. 1981;45(4):536–548. doi: 10.1086/268686.
- 110. Simpura J., Poikolainen K. Accuracy of retrospective measurement of individual alcohol consumption in men: a reinterview after 18 years. Journal of Studies on Alcohol. 1983;44(5):911–917. doi: 10.15288/jsa.1983.44.911.
- 111. Cho Y. I., Johnson T. P., Fendrich M. Monthly variations in self-reports of alcohol consumption. Journal of Studies on Alcohol. 2001;62(2):268–272. doi: 10.15288/jsa.2001.62.268.
- 112. Grant B. F., Dawson D. A. Age of onset of drug use and its association with DSM-IV drug abuse and dependence: results from the National Longitudinal Alcohol Epidemiologic Survey. Journal of Substance Abuse. 1998;10(2):163–173. doi: 10.1016/S0899-3289(99)80131-X.
- 113. Shillington A. M., Woodruff S. I., Clapp J. D., Reed M. B., Lemus H. Self-reported age of onset and telescoping for cigarettes, alcohol, and marijuana: across eight years of the National Longitudinal Survey of Youth. Journal of Child and Adolescent Substance Abuse. 2012;21(4):333–348. doi: 10.1080/1067828X.2012.710026.
- 114. Engels R., Knibbe R. A., Drop M. J. Inconsistencies in adolescents' self-reports of initiation of alcohol and tobacco use. Addictive Behaviors. 1997;22(5):613–623. doi: 10.1016/S0306-4603(96)00067-6.
- 115. Grant B. F., Harford T. C., Dawson D. A., Chou P. S., Pickering R. P. The Alcohol Use Disorder and Associated Disabilities Interview Schedule (AUDADIS): reliability of alcohol and drug modules in a general population sample. Drug and Alcohol Dependence. 1995;39(1):37–44. doi: 10.1016/0376-8716(95)01134-K.
- 116. Humphrey J. A., Friedman J. The onset of drinking and intoxication among university students. Journal of Studies on Alcohol. 1986;47(6):455–458. doi: 10.15288/jsa.1986.47.455.
- 117. Johnson T. P., Mott J. A. The reliability of self-reported age of onset of tobacco, alcohol and illicit drug use. Addiction. 2001;96(8):1187–1198. doi: 10.1046/j.1360-0443.2001.968118711.x.
- 118. Prause J., Dooley D., Ham-Rowbottom K. A., Emptage N. Alcohol drinking onset: a reliability study. Journal of Child and Adolescent Substance Abuse. 2007;16(4):79–90. doi: 10.1300/J029v16n04_05.
- 119. Shillington A. M., Clapp J. D. Self-report stability of adolescent substance use: are there differences for gender, ethnicity and age? Drug and Alcohol Dependence. 2000;60(1):19–27. doi: 10.1016/S0376-8716(99)00137-4.
- 120. Shillington A. M., Clapp J. D., Reed M. B., Woodruff S. I. Adolescent alcohol use self-report stability: a decade of panel study data. Journal of Child and Adolescent Substance Abuse. 2011;20(1):63–81. doi: 10.1080/1067828X.2011.534366.
- 121. Shillington A. M., Reed M. B., Clapp J. D., Woodruff S. I. Testing the length of time theory of recall decay: examining substance use report stability with 10 years of National Longitudinal Survey of Youth data. Substance Use & Misuse. 2011;46(9):1105–1112. doi: 10.3109/10826084.2010.548436.
- 122. Aquilino W. S., LoSciuto L. A. Effects of interview mode on self-reported drug use. Public Opinion Quarterly. 1990;54(3):362–393. doi: 10.1086/269212.
- 123. Aquilino W. S. Interview mode effects in surveys of drug and alcohol use: a field experiment. Public Opinion Quarterly. 1994;58(2):210–240. doi: 10.1086/269419.
- 124. Dotinga A., van den Eijnden R. J. J. M., Bosveld W., Garretsen H. F. L. The effect of data collection mode and ethnicity of interviewer on response rates and self-reported alcohol use among Turks and Moroccans in the Netherlands: an experimental study. Alcohol and Alcoholism. 2005;40(3):242–248. doi: 10.1093/alcalc/agh144.
- 125. Duffy J. C., Waterton J. J. Under-reporting of alcohol consumption in sample surveys: the effect of computer interviewing in fieldwork. British Journal of Addiction. 1984;79(3):303–308. doi: 10.1111/j.1360-0443.1984.tb00278.x.
- 126. Gfroerer J., Hughes A. Collecting data on illicit drug use by phone. In: Turner C., Lessler J., Gfroerer J., editors. Survey Measurement of Drug Use: Methodological Studies. Rockville, Md, USA: National Institute on Drug Abuse; 1992. pp. 277–295. (DHHS Pub. No. ADM 92-1929).
- 127. Hoyt G. M., Chaloupka F. J. Effect of survey conditions on self-reported substance use. Contemporary Economic Policy. 1994;12:109–121.
- 128. Schober S., Caces M. F., Pergamit M., Branden L. Effects of mode of administration on reporting of drug use in the National Longitudinal Survey. In: Turner C., Lessler J., Gfroerer J., editors. Survey Measurement of Drug Use: Methodological Studies. Rockville, Md, USA: National Institute on Drug Abuse; 1992. pp. 267–276. (DHHS Pub. No. ADM 92-1929).
- 129. Tourangeau R., Smith T. W. Asking sensitive questions: the impact of data collection mode, question format, and question context. Public Opinion Quarterly. 1996;60(2):275–304. doi: 10.1086/297751.
- 130. Turner C. F., Lessler J. T., Devore J. Effects of mode of administration and wording on reporting of drug use. In: Turner C., Lessler J., Gfroerer J., editors. Survey Measurement of Drug Use: Methodological Studies. Rockville, Md, USA: National Institute on Drug Abuse; 1992. pp. 177–220. (DHHS Pub. No. ADM 92-1929).
- 131. Tourangeau R., Rips L. J., Rasinski K. The Psychology of Survey Response. Cambridge, UK: Cambridge University Press; 2000.
- 132. Brener N. D., Eaton D. K., Kann L., et al. The association of survey setting and mode with self-reported health risk behaviors among high school students. Public Opinion Quarterly. 2006;70(3):354–374. doi: 10.1093/poq/nfl003.
- 133. Chromy J., Davis T., Packer L., Gfroerer J. Mode effects on substance use measures: comparison of 1999 CAI and PAPI data. In: Gfroerer J., Eyerman J., Chromy J., editors. Redesigning an Ongoing National Household Survey: Methodological Issues. Rockville, Md, USA: Substance Abuse and Mental Health Services Administration; 2002. pp. 135–159. (DHHS Pub. No. MA 03-3768).
- 134.Eaton D. K., Brener N. D., Kann L., et al. Comparison of paper-and-pencil versus web administration of the youth risk behavior survey (YRBS): risk behavior prevalence estimates. Evaluation Review. 2010;34(2):137–153. doi: 10.1177/0193841X10362491. [DOI] [PubMed] [Google Scholar]
- 135.Ramo D. E., Liu H., Prochaska J. J. Reliability and validity of young adults' anonymous online reports of marijuana use and thoughts about use. Psychology of Addictive Behaviors. 2012;26(4):801–811. doi: 10.1037/a0026201. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 136.Hines D. A., Douglas E. M., Mahmood S. The effects of survey administration on disclosure rates to sensitive items among men: a comparison of an internet panel sample with a RDD telephone sample. Computers in Human Behavior. 2010;26(6):1327–1335. doi: 10.1016/j.chb.2010.04.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 137.McCabe S. E., Diez A., Boyd C. J., Nelson T. F., Weitzman E. R. Comparing web and mail responses in a mixed mode survey in college alcohol use research. Addictive Behaviors. 2006;31(9):1619–1627. doi: 10.1016/j.addbeh.2005.12.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 138.Khadjesari Z., Murray E., Kalaitzaki E., et al. Test-retest reliability of an online measure of past week alcohol consumption (the TOT-AL), and comparison with face-to-face interview. Addictive Behaviors. 2009;34(4):337–342. doi: 10.1016/j.addbeh.2008.11.010. [DOI] [PubMed] [Google Scholar]
- 139.Gfroerer J. C., Hughes A. L. The feasibility of collecting drug abuse data by telephone. Public Health Reports. 1991;106(4):384–393. [PMC free article] [PubMed] [Google Scholar]
- 140.Johnson T. P., Hougland J., Clayton R. Obtaining reports of sensitive behaviors: a comparison of substance use reports from telephone and face-to-face interviews. Social Science Quarterly. 1989;70:174–183. [Google Scholar]
- 141.Greenfield T. K., Midanik L. T., Rogers J. D. Effects of telephone versus face-to-face interview modes on reports of alcohol consumption. Addiction. 2000;95(2):277–284. doi: 10.1046/j.1360-0443.2000.95227714.x. [DOI] [PubMed] [Google Scholar]
- 142.Midanik L. T., Greenfield T. K. Telephone versus in-person interviews for alcohol use: results of the 2000 National Alcohol Survey. Drug and Alcohol Dependence. 2003;72(3):209–214. doi: 10.1016/S0376-8716(03)00204-7. [DOI] [PubMed] [Google Scholar]
- 143.Midanik L. T., Greenfield T. K., Rogers J. D. Reports of alcohol-related harm: telephone versus face-to-face interviews. Journal of Studies on Alcohol. 2001;62(1):74–78. doi: 10.15288/jsa.2001.62.74. [DOI] [PubMed] [Google Scholar]
- 144.Gribble J. N., Miller H. G., Cooley P. C., Catania J. A., Pollack L., Turner C. F. The impact of T-ACASI interviewing on reported drug use among men who have sex with men. Substance Use and Misuse. 2000;35(6–8):869–1101. doi: 10.3109/10826080009148425. [DOI] [PubMed] [Google Scholar]
- 145.Perrine M. W., Mundt J. C., Searles J. S., Lester L. S. Validation of daily self-reported alcohol consumption using interactive voice response (IVR) technology. Journal of Studies on Alcohol. 1995;56(5):487–490. doi: 10.15288/jsa.1995.56.487. [DOI] [PubMed] [Google Scholar]
- 146.Jabine T., Straf M., Tanur J., Tourangeau R. Cognitive Aspects of Survey Methodology: Building a Bridge between Disciplines. Washington, DC, USA: National Academy Press; 1984. [Google Scholar]
- 147.Morral A. R., McCaffrey D. F., Chien S. Measurement of adolescent drug use. Journal of Psychoactive Drugs. 2003;35(3):301–309. doi: 10.1080/02791072.2003.10400013. [DOI] [PubMed] [Google Scholar]
- 148.Harris K. M., Griffin B. A., McCaffrey D. F., Morral A. R. Inconsistencies in self-reported drug use by adolescents in substance abuse treatment: implications for outcome and performance measurements. Journal of Substance Abuse Treatment. 2008;34(3):347–355. doi: 10.1016/j.jsat.2007.05.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 149.Swadi H. Validating and improving the validity of self-reports in adolescent substance misuse surveys. Journal of Drug Issues. 1990;20(3):473–486. [Google Scholar]
- 150.Johnston L. D., O’Malley P. M. The recanting or earlier reported drug use by young adults. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 59–80. (NIDA Research Monograph 167, NIH Publication No. 97-4147). [PubMed] [Google Scholar]
- 151. Devos-Comby L., Lange J. E. “My drink is larger than yours”? A literature review of self-defined drink sizes and standard drinks. Current Drug Abuse Reviews. 2008;1(2):162–176. doi: 10.2174/1874473710801020162.
- 152. Kerr W. C., Stockwell T. Understanding standard drinks and drinking guidelines. Drug and Alcohol Review. 2012;31(2):200–205. doi: 10.1111/j.1465-3362.2011.00374.x.
- 153. de la Rosa M. R., Adrados J.-L. R. Drug Abuse Among Minority Youth: Advances in Research and Methodology. Rockville, Md, USA: National Institute on Drug Abuse; 1993. (NIDA Research Monograph 130).
- 154. Room R., Janca A., Bennett L. A., et al. WHO cross-cultural applicability research on diagnosis and assessment of substance use disorders: an overview of methods and selected results. Addiction. 1996;91(2):199–220. doi: 10.1111/j.1360-0443.1996.tb03176.x.
- 155. Room R. Taking account of cultural and societal influences on substance use diagnoses and criteria. Focus. 2007;5(2):199–207. doi: 10.1111/j.1360-0443.2006.01597.x.
- 156. Gardner B., Tang V. Reflecting on non-reflective action: an exploratory think-aloud study of self-report habit measures. British Journal of Health Psychology. 2014;19(2):258–273. doi: 10.1111/bjhp.12060.
- 157. Midanik L. T., Hines A. C. 'Unstandard' ways of answering standard questions: protocol analysis in alcohol survey research. Drug and Alcohol Dependence. 1991;27(3):245–252. doi: 10.1016/0376-8716(91)90007-L.
- 158. Ridolfo H. Testing of the National HIV Behavioral Surveillance System: Results of Interviews Conducted 1/13/2011-4/5/2011. Hyattsville, Md, USA: Questionnaire Design Research Laboratory, National Center for Health Statistics, Centers for Disease Control and Prevention; 2011. http://wwwn.cdc.gov/qbank/report/Ridolfo_NCHS_2011_NHBSS%20HIV.pdf#page=43.
- 159. Thrasher J. F., Quah A. C. K., Dominick G., et al. Using cognitive interviewing and behavioral coding to determine measurement equivalence across linguistic and cultural groups: an example from the international tobacco control policy evaluation project. Field Methods. 2011;23(4):439–460. doi: 10.1177/1525822X11418176.
- 160. Friedman W. J. Memory for the time of past events. Psychological Bulletin. 1993;113(1):44–66. doi: 10.1037/0033-2909.113.1.44.
- 161. Sudman S., Bradburn N. M., Schwarz N. Thinking about Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco, Calif, USA: Jossey-Bass; 1996.
- 162. Babor T. F., Steinberg K., Anton R., del Boca F. Talk is cheap: measuring drinking outcomes in clinical trials. Journal of Studies on Alcohol. 2000;61(1):55–63. doi: 10.15288/jsa.2000.61.55.
- 163. Van Gorp W. G., Wilkins J. N., Hinkin C. H., et al. Declarative and procedural memory functioning in abstinent cocaine abusers. Archives of General Psychiatry. 1999;56(1):85–89. doi: 10.1001/archpsyc.56.1.85.
- 164. Ardila A., Rosselli M., Strumwasser S. Neuropsychological deficits in chronic cocaine abusers. International Journal of Neuroscience. 1991;57(1-2):73–79. doi: 10.3109/00207459109150348.
- 165. Bolla K. I., McCann U. D., Ricaurte G. A. Memory impairment in abstinent MDMA (“Ecstasy”) users. Neurology. 1998;51(6):1532–1537. doi: 10.1212/WNL.51.6.1532.
- 166. Morgan M. J. Memory deficits associated with recreational use of “ecstasy” (MDMA). Psychopharmacology. 1999;141(1):30–36. doi: 10.1007/s002130050803.
- 167. Parrott A. C., Lees A., Garnham N. J., Jones M., Wesnes K. Cognitive performance in recreational users of MDMA or “ecstasy”: evidence for memory deficits. Journal of Psychopharmacology. 1998;12(1):79–83. doi: 10.1177/026988119801200110.
- 168. Mensch B. S., Kandel D. B. Underreporting of substance use in a national longitudinal youth cohort: individual and interviewer effects. Public Opinion Quarterly. 1988;52(1):100–124. doi: 10.1086/269084.
- 169. Belli R. F. Calendar and Time Diary Methods in Life Course Research. Los Angeles, Calif, USA: Sage Publications; 2008.
- 170. Stone A. A., Turkkan J. S., Bachrach C. A., Jobe J. B., Kurtzman H. S., Cain V. S. The Science of Self-Report: Implications for Research and Practice. Mahwah, NJ, USA: Lawrence Erlbaum Associates; 2000.
- 171. Hubbard M. Laboratory experiments testing new questioning strategies. In: Turner C. F., Lessler J. T., Gfroerer J. C., editors. Survey Measurement of Drug Use. Rockville, Md, USA: National Institute on Drug Abuse; 1992. pp. 53–81.
- 172. Bradburn N. M., Sudman S. Improving Interview Method and Questionnaire Design: Response Effects to Threatening Questions in Survey Research. San Francisco, Calif, USA: Jossey-Bass Publishers; 1979.
- 173. Crowne D. P., Marlowe D. The Approval Motive: Studies in Evaluative Dependence. New York, NY, USA: John Wiley & Sons; 1964.
- 174. Paulhus D. L. Measurement and control of response bias. In: Robinson J. P., Shaver P. R., editors. Measures of Personality and Social Psychological Attitudes. San Diego, Calif, USA: Academic Press; 1991. pp. 17–59.
- 175. Tourangeau R., Yan T. Sensitive questions in surveys. Psychological Bulletin. 2007;133(5):859–883. doi: 10.1037/0033-2909.133.5.859.
- 176. Traugott M. W., Katosh J. P. Response validity in surveys of voting behavior. Public Opinion Quarterly. 1979;43(3):359–377. doi: 10.1086/268527.
- 177. Adams S. A., Matthews C. E., Ebbeling C. B., et al. The effect of social desirability and social approval on self-reports of physical activity. American Journal of Epidemiology. 2005;161(4):389–398. doi: 10.1093/aje/kwi054.
- 178. Fendrich M., Johnson T. P., Wislar J. S., Hubbell A., Spiehler V. The utility of drug testing in epidemiological research: results from a general population survey. Addiction. 2004;99(2):197–208. doi: 10.1111/j.1360-0443.2003.00632.x.
- 179. Harrell A. V. The validity of self-reported drug use data: the accuracy of responses on confidential self-administered answer sheets. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 37–58. (NIDA Research Monograph 167).
- 180. Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Quality and Quantity. 2013;47(4):2025–2047. doi: 10.1007/s11135-011-9640-9.
- 181. Robbins C., Clayton R. R. Gender-related differences in psychoactive drug use among older adults. Journal of Drug Issues. 1989;19(2):207–219.
- 182. O'Malley P. M., Bachman J. G., Johnston L. D. Reliability and consistency in self-reports of drug use. The International Journal of the Addictions. 1983;18(6):805–824. doi: 10.3109/10826088309033049.
- 183. Mieczkowski T. The accuracy of self-reported drug use: an evaluation and analysis of new data. In: Weiner N. A., Wolfgang M. E., editors. Pathways to Criminal Violence. Newbury Park, Calif, USA: Sage Publications; 1989. pp. 275–302.
- 184. Hser Y. Self-reported drug use: results of selected empirical investigations of validity. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 320–343. (NIDA Research Monograph 167, NIH Pub No. 97-4147).
- 185. Luetgert M. J., Armstrong A. H. Methodological issues in drug usage surveys: anonymity, recency, and frequency. International Journal of the Addictions. 1973;8(4):683–689. doi: 10.3109/10826087309057494.
- 186. Malvin J. H., Moskowitz J. M. Anonymous versus identifiable self-reports of adolescent drug attitudes, intentions, and use. Public Opinion Quarterly. 1983;47(4):557–566. doi: 10.1086/268812.
- 187. Moore R. S., Ames G. M. Survey confidentiality vs. anonymity: young men's self-reported substance use. Journal of Alcohol and Drug Education. 2002;47(2):32–41.
- 188. O'Malley P. M., Johnston L. D., Bachman J. G., Schulenberg J. A comparison of confidential versus anonymous survey procedures: effects on reporting of drug use and related attitudes and beliefs in a national study of students. Journal of Drug Issues. 2000;30(1):35–54.
- 189. Pleck J. H., Sonenstein F. L., Ku L. Black-white differences in adolescent males' substance use: are they explained by underreporting by Blacks? Journal of Gender, Culture, and Health. 1996;1:247–265.
- 190. Watten R. G. Coping styles in abstainers from alcohol. Psychopathology. 1996;29(6):340–346. doi: 10.1159/000285016.
- 191. Welte J. W., Russell M. Influence of socially desirable responding in a study of stress and substance abuse. Alcoholism: Clinical and Experimental Research. 1993;17(4):758–761. doi: 10.1111/j.1530-0277.1993.tb00836.x.
- 192. Johnson T. P., Fendrich M., Mackesy-Amiti M. E. An evaluation of the validity of the Crowne-Marlowe need for approval scale. Quality and Quantity. 2012;46(6):1883–1896. doi: 10.1007/s11135-011-9563-5.
- 193. Johnson T. P., Bowman P. J. Cross-cultural sources of measurement error in substance use surveys. Substance Use and Misuse. 2003;38(10):1447–1490. doi: 10.1081/JA-120023394.
- 194. Fendrich M., Johnson T. P. Race/ethnicity differences in the validity of self-reported drug use: results from a household survey. Journal of Urban Health. 2005;82(3):iii67–iii81. doi: 10.1093/jurban/jti065.
- 195. Ledgerwood D. M., Goldberger B. A., Risk N. K., Lewis C. E., Kato Price R. Comparison between self-report and hair analysis of illicit drug use in a community sample of middle-aged men. Addictive Behaviors. 2008;33(9):1131–1139. doi: 10.1016/j.addbeh.2008.04.009.
- 196. Del Boca F. K., Darkes J. The validity of self-reports of alcohol consumption: state of the science and challenges for research. Addiction. 2003;98(supplement 2):1–12. doi: 10.1046/j.1359-6357.2003.00586.x.
- 197. Miller P. V. Review: is “up” right? The National Household Survey on Drug Abuse. Public Opinion Quarterly. 1997;61(4):627–641. doi: 10.1086/297821.
- 198. Harrison L. D. Understanding the differences in youth drug prevalence rates produced by the MTF, NHSDA, and YRBS studies. Journal of Drug Issues. 2001;31(3):665–694.
- 199. Brener N. D., Billy J. O. G., Grady W. R. Assessment of factors affecting the validity of self-reported health-risk behavior among adolescents: evidence from the scientific literature. Journal of Adolescent Health. 2003;33(6):436–457. doi: 10.1016/S1054-139X(03)00052-1.
- 200. Fendrich M. The undeniable problem of recanting. Addiction. 2005;100(2):143–144. doi: 10.1111/j.1360-0443.2005.00993.x.
- 201. Percy A., McAlister S., Higgins K., McCrystal P., Thornton M. Response consistency in young adolescents' drug use self-reports: a recanting rate analysis. Addiction. 2005;100(2):189–196. doi: 10.1111/j.1360-0443.2004.00943.x.
- 202. Barnea Z., Rahav G., Teichman M. The reliability and consistency of self-reports on substance use in a longitudinal study. British Journal of Addiction. 1987;82(8):891–898. doi: 10.1111/j.1360-0443.1987.tb03909.x.
- 203. Richter L., Johnson P. B. Current methods of assessing substance use: a review of strengths, problems, and developments. Journal of Drug Issues. 2001;31(4):809–832. doi: 10.1177/002204260103100401.
- 204. Winters K. C., Stinchfield R. D., Henly G. A., Schwartz R. H. Validity of adolescent self-report of alcohol and other drug involvement. International Journal of the Addictions. 1990;25(11A):1379–1395. doi: 10.3109/10826089009068469.
- 205. Poulin C., MacNeil P., Mitic W. The validity of a province-wide student drug use survey: lessons in design. Canadian Journal of Public Health. 1993;84(4):259–264.
- 206. Petzel T. P., Johnson J. E., McKillip J. Response bias in drug surveys. Journal of Consulting and Clinical Psychology. 1973;40(3):437–439. doi: 10.1037/h0034439.
- 207. Farrell A. D., Danish S. J., Howard C. W. Evaluation of data screening methods in surveys of adolescents' drug use. Psychological Assessment. 1991;3(2):295–298. doi: 10.1037/1040-3590.3.2.295.
- 208. Single E., Kandel D., Johnson B. The reliability and validity of drug use responses in a large scale longitudinal survey. Journal of Drug Issues. 1975;5:426–443.
- 209. Whitehead P., Smart R. Validity and reliability of self-reported drug use. Canadian Journal of Criminology and Corrections. 1972;14:1–8.
- 210. Wish E. D., Hoffman J. A., Nemes S. The validity of self-reports of drug use at treatment admission and at follow-up: comparisons with urinalysis and hair assays. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 200–226. (NIDA Research Monograph 167, NIH Pub No. 97-4147).
- 211. Willis G. The use of the psychological laboratory to study sensitive survey topics. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 416–438. (NIDA Research Monograph 167, NIH Pub No. 97-4147).
- 212. Merline A., Jager J., Schulenberg J. E. Adolescent risk factors for adult alcohol use and abuse: stability and change of predictive value across early and middle adulthood. Addiction. 2008;103(1):84–99. doi: 10.1111/j.1360-0443.2008.02178.x.
- 213. Osgood D. W., Johnston L. D., O'Malley P. M., Bachman J. G. The generality of deviance in late adolescence and early adulthood. American Sociological Review. 1988;53(1):81–93. doi: 10.2307/2095734.
- 214. Caldwell T. M., Rodgers B., Power C., Clark C., Stansfeld S. A. Drinking histories of self-identified lifetime abstainers and occasional drinkers: findings from the 1958 British Birth Cohort Study. Alcohol and Alcoholism. 2006;41(6):650–654. doi: 10.1093/alcalc/agl088.
- 215. Fendrich M., Rosenbaum D. P. Recanting of substance use reports in a longitudinal prevention study. Drug and Alcohol Dependence. 2003;70(3):241–253. doi: 10.1016/S0376-8716(03)00010-3.
- 216. Fendrich M., Vaughn C. M. Diminished lifetime substance use over time: an inquiry into differential underreporting. Public Opinion Quarterly. 1994;58(1):96–123. doi: 10.1086/269410.
- 217. Fendrich M., Mackesy-Amiti M. E. Decreased drug reporting in a cross-sectional student drug use survey. Journal of Substance Abuse. 2000;11(2):161–172. doi: 10.1016/S0899-3289(00)00018-3.
- 218. Shillington A. M., Clapp J. D., Reed M. B. The stability of self-reported marijuana use across eight years of the National Longitudinal Survey of Youth. Journal of Child and Adolescent Substance Abuse. 2011;20(5):407–420. doi: 10.1080/1067828X.2011.614873.
- 219. Shillington A. M., Roesch S. C., Reed M. B., Clapp J. D., Woodruff S. I. Typologies of recanting of lifetime cigarette, alcohol and marijuana use during a six-year longitudinal panel study. Drug and Alcohol Dependence. 2011;118(2-3):134–140. doi: 10.1016/j.drugalcdep.2011.03.009.
- 220. Siddiqui O., Mott J. A., Anderson T. L., Flay B. R. Characteristics of inconsistent respondents who have “ever used” drugs in a school-based sample. Substance Use and Misuse. 1999;34(2):269–295. doi: 10.3109/10826089909035646.
- 221. Martino S. C., McCaffrey D. F., Klein D. J., Ellickson P. L. Recanting of life-time inhalant use: how big a problem and what to make of it. Addiction. 2009;104(8):1373–1381. doi: 10.1111/j.1360-0443.2009.02598.x.
- 222. Fendrich M., Johnson T. P., Sudman S., Wislar J. S., Spiehler V. Validity of drug use reporting in a high-risk community sample: a comparison of cocaine and heroin survey reports with hair tests. American Journal of Epidemiology. 1999;149(10):955–962. doi: 10.1093/oxfordjournals.aje.a009740.
- 223. Colón H. M., Robles R. R., Sahai H. The validity of drug use responses in a household survey in Puerto Rico: comparison of survey responses of cocaine and heroin use with hair tests. International Journal of Epidemiology. 2001;30(5):1042–1049. doi: 10.1093/ije/30.5.1042.
- 224. Colón H. M., Robles R. R., Sahai H. The validity of drug use self-reports among hard core drug users in a household survey in Puerto Rico: comparison of survey responses of cocaine and heroin use with hair tests. Drug and Alcohol Dependence. 2002;67(3):269–279. doi: 10.1016/S0376-8716(02)00081-9.
- 225. Fendrich M., Mackesy-Amiti M. E., Johnson T. P. Validity of self-reported substance use in men who have sex with men: comparisons with a general population sample. Annals of Epidemiology. 2008;18(10):752–759. doi: 10.1016/j.annepidem.2008.06.001.
- 226. Harrison L. D., Martin S. S., Enev T., Harrington D. Comparing Drug Testing and Self-Report of Drug Use among Youths and Young Adults in the General Population. Rockville, Md, USA: Substance Abuse and Mental Health Services Administration; 2007. (DHHS Publication No. SMA 07-4249, Methodology Series M-7).
- 227. Morral A. R., McCaffrey D., Iguchi M. Y. Hardcore drug users claim to be occasional users: drug use frequency underreporting. Drug and Alcohol Dependence. 2000;57(3):193–202. doi: 10.1016/S0376-8716(99)00048-4.
- 228. Tassiopoulos K., Bernstein J., Heeren T., Levenson S., Hingson R., Bernstein E. Predictors of disclosure of continued cocaine use. Addictive Behaviors. 2006;31(1):80–89. doi: 10.1016/j.addbeh.2005.04.005.
- 229. Wolff K., Farrell M., Marsden J., et al. A review of biological indicators of illicit drug use, practical considerations and clinical usefulness. Addiction. 1999;94(9):1279–1298. doi: 10.1046/j.1360-0443.1999.94912792.x.
- 230. DeLauder S. F. Considering issues of racial bias in drug testing where hair is the matrix. Transforming Anthropology. 2004;11(2):54–59.
- 231. Leonard K., Dunn N. J., Jacob T. Drinking problems of alcoholics: correspondence between self and spouse reports. Addictive Behaviors. 1983;8(4):369–373. doi: 10.1016/0306-4603(83)90037-0.
- 232. Satyanarayana V. A., Vaddiparti K., Chandra P. S., O'Leary C. C., Benegal V., Cottler L. B. Problem drinking among married men in India: comparison between husband's and wife's reports. Drug and Alcohol Review. 2010;29(5):557–562. doi: 10.1111/j.1465-3362.2010.00177.x.
- 233. Sobell L. C., Agrawal S., Sobell M. B. Factors affecting agreement between alcohol abusers' and their collaterals' reports. Journal of Studies on Alcohol. 1997;58(4):405–413. doi: 10.15288/jsa.1997.58.405.
- 234. Engels R. C. M. E., Van Der Vorst H., Deković M., Meeus W. Correspondence in collateral and self-reports on alcohol consumption: a within family analysis. Addictive Behaviors. 2007;32(5):1016–1030. doi: 10.1016/j.addbeh.2006.07.006.
- 235. del Boca F. K., Noll J. A. Truth or consequences: the validity of self-report data in health services research on addictions. Addiction. 2000;95(supplement 3):S347–S360. doi: 10.1046/j.1360-0443.95.11s3.5.x.
- 236. Weinfurt K. P., Bush P. J. Contradictory subject response in longitudinal research. Journal of Studies on Alcohol. 1996;57(3):273–282. doi: 10.15288/jsa.1996.57.273.
- 237. Kerr W. C., Greenfield T. K. Distribution of alcohol consumption and expenditures and the impact of improved measurement on coverage of alcohol sales in the 2000 National Alcohol Survey. Alcoholism: Clinical and Experimental Research. 2007;31(10):1714–1722. doi: 10.1111/j.1530-0277.2007.00467.x.
- 238. Nelson D. E., Naimi T. S., Brewer R. D., Roeber J. US state alcohol sales compared to survey data, 1993–2006. Addiction. 2010;105(9):1589–1596. doi: 10.1111/j.1360-0443.2010.03007.x.
- 239. Rehm J. Measuring quantity, frequency, and volume of drinking. Alcoholism: Clinical and Experimental Research. 1998;22(2):4s–14s. doi: 10.1097/00000374-199802001-00002.
- 240. Smith P. F., Remington P. L., Williamson D. F., Anda R. F. A comparison of alcohol sales data with survey data on self-reported alcohol use in 21 states. American Journal of Public Health. 1990;80(3):309–312. doi: 10.2105/AJPH.80.3.309.
- 241. Ramstedt M. How much alcohol do you buy? A comparison of self-reported alcohol purchases with actual sales. Addiction. 2010;105(4):649–654. doi: 10.1111/j.1360-0443.2009.02839.x.
- 242. Sims M. Comparison of Sales Figures of Alcoholic Beverages with Types and Amounts Reported by Canadian Facts Company, Limited, in a Market Survey. Toronto, Canada: Addiction Research Foundation; 1969. (1-29-69).
- 243. Warner S. L. Randomized response: a survey technique for eliminating evasive answer bias. Journal of the American Statistical Association. 1965;60(309):63–66. doi: 10.1080/01621459.1965.10480775.
- 244. Goodstadt M. S., Gruson V. The randomized response technique: a test on drug use. Journal of the American Statistical Association. 1975;70(352):814–818.
- 245. Weissman A. N., Steer R. A., Lipton D. S. Estimating illicit drug use through telephone interviews and the randomized response technique. Drug and Alcohol Dependence. 1986;18(3):225–233. doi: 10.1016/0376-8716(86)90054-2.
- 246. McAuliffe W. E., Breer P., Ahmadifar N. W., Spino C. Assessment of drug abuser treatment needs in Rhode Island. American Journal of Public Health. 1991;81(3):365–371. doi: 10.2105/AJPH.81.3.365.
- 247. Campanelli P. C., Dielman T. E., Shope J. T. Validity of adolescents' self-reports of alcohol use and misuse using a bogus pipeline procedure. Adolescence. 1987;22(85):7–22.
- 248. Murray D. M., Perry C. L. The measurement of substance use among adolescents: when is the “bogus pipeline” method needed? Addictive Behaviors. 1987;12(3):225–233. doi: 10.1016/0306-4603(87)90032-3.
- 249. Werch C. E., Gorman D. R., Marty P. J., Forbess J., Brown B. Effects of the bogus-pipeline on enhancing validity of self-reported adolescent drug use measures. The Journal of School Health. 1987;57(6):232–236. doi: 10.1111/j.1746-1561.1987.tb07839.x.
- 250. Aguinis H., Pierce C. A., Quigley B. M. Enhancing the validity of self-reported alcohol and marijuana consumption using a bogus pipeline procedure: a meta-analytic review. Basic and Applied Social Psychology. 1995;16(4):515–527.
- 251. Tourangeau R., Smith T. W., Rasinski K. Motivation to report sensitive behaviors in surveys: evidence from a bogus pipeline experiment. Journal of Applied Social Psychology. 1997;27:209–222.
- 252. Lowe J. B., Windsor R. A., Adams B., Morris J., Reese Y. Use of a bogus pipeline method to increase accuracy of self-reported alcohol consumption among pregnant women. Journal of Studies on Alcohol. 1986;47(2):173–175. doi: 10.15288/jsa.1986.47.173.
- 253. Johnson T., Fendrich M. Modeling sources of self-report bias in a survey of drug use epidemiology. Annals of Epidemiology. 2005;15(5):381–389. doi: 10.1016/j.annepidem.2004.09.004.
- 254. Johnson T. P., Parker V., Clements C. Detection and prevention of data falsification in survey research. Survey Research. 2001;32:1–2.
- 255. Turner C. F., Gribble J. N., Al-Tayyib A. A., Chromy J. R. Falsification in Epidemiologic Surveys: Detection and Remediation [Prepublication Draft]. Washington, DC, USA: Research Triangle Institute; 2002. (Technical Papers on Health and Behavior Measurement, No. 53). http://qcpages.qc.cuny.edu/~cturner/TechPDFs/53_Falsify.pdf.
- 256. Grucza R. A., Abbacchi A. M., Przybeck T. R., Gfroerer J. C. Discrepancies in estimates of prevalence and correlates of substance use and disorders between two national surveys. Addiction. 2007;102(4):623–629. doi: 10.1111/j.1360-0443.2007.01745.x.
- 257. Hughes A., Chromy J., Giacoletti K., Odom D. Impact of interviewer experience on respondent reports of substance use. In: Gfroerer J., Eyerman J., Chromy J., editors. Redesigning an Ongoing National Household Survey: Methodological Issues. Rockville, Md, USA: Substance Abuse and Mental Health Services Administration, Office of Applied Studies; 2002. pp. 161–184. (DHHS Publication No. SMA 03-3768).
- 258. Chromy J. R., Eyerman J., Odom D., McNeeley A. E. Association between interviewer experience and substance use prevalence rates in NSDUH. In: Kennet J., Gfroerer J., editors. Evaluating and Improving Methods Used in the National Survey of Drug Use and Health. Rockville, Md, USA: Substance Abuse and Mental Health Services Administration, Office of Applied Studies; 2005. pp. 59–87. (DHHS Publication No. SMA 05-4044, Methodology Series M-5).
- 259. Johnson T. P., Fendrich M., Shaligram C., Garcy A., Gillespie S. An evaluation of the effects of interviewer characteristics in an RDD telephone survey of drug use. Journal of Drug Issues. 2000;30(1):77–102.
- 260. Johnson T., O'Rourke D., Chavez N., et al. Social cognition and responses to survey questions among culturally diverse populations. In: Lyberg L., Biemer P., Collins M., et al., editors. Survey Measurement and Process Quality. New York, NY, USA: John Wiley & Sons; 1997. pp. 87–113.
- 261. Mulford H. A., Miller D. E. Drinking in Iowa, I. Sociocultural distribution of drinkers. Quarterly Journal of Studies on Alcohol. 1959;20:704–726.
- 262. Johnson T. P., Parsons J. A. Interviewer effects on self-reported substance use among homeless persons. Addictive Behaviors. 1994;19(1):83–93. doi: 10.1016/0306-4603(94)90054-X.
- 263. Darrow W. W., Jaffe H. W., Thomas P. A., et al. Sex of interviewer, place of interview, and responses of homosexual men to sensitive questions. Archives of Sexual Behavior. 1986;15(1):79–88. doi: 10.1007/BF01542306.
- 264. Edwards S. L., Slattery M. L., Ma K. N. Measurement errors stemming from nonrespondents present at in-person interviews. Annals of Epidemiology. 1998;8(4):272–277. doi: 10.1016/S1047-2797(97)00230-5.
- 265. Aquilino W. S. Privacy effects on self-reported drug use: interactions with survey mode and respondent characteristics. In: Harrison L., Hughes A., editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, Md, USA: National Institute on Drug Abuse; 1997. pp. 383–415. (NIDA Research Monograph 167).
- 266. Aquilino W. S., Wright D. L., Supple A. J. Response effects due to bystander presence in CASI and paper-and-pencil surveys of drug use and alcohol use. Substance Use and Misuse. 2000;35(6–8):845–867. doi: 10.3109/10826080009148424.
- 267. Gfroerer J. Underreporting of drug use by youths resulting from lack of privacy in household interviews. In: Rouse B., Kozel N., Richards L., editors. Self-Report Methods of Estimating Drug Use: Meeting Current Challenges to Validity. Washington, DC, USA: National Institute on Drug Abuse; 1985. pp. 22–30.
- 268. Schutz C. G., Chilcoat H. D. Breach of privacy in surveys on adolescent drug use: a methodological inquiry. International Journal of Methods in Psychiatric Research. 1994;4:183–188.
- 269. Gfroerer J., Wright D., Kopstein A. Prevalence of youth substance use: the impact of methodological differences between two national surveys. Drug and Alcohol Dependence. 1997;47(1):19–30. doi: 10.1016/S0376-8716(97)00063-X.
- 270. Kann L., Brener N. D., Warren C. W., Collins J. L., Giovino G. A. An assessment of the effect of data collection setting on the prevalence of health risk behaviors among adolescents. Journal of Adolescent Health. 2002;31(4):327–335. doi: 10.1016/S1054-139X(02)00343-9.
- 271. Rootman I., Smart R. G. A comparison of alcohol, tobacco and drug use as determined from household and school surveys. Drug and Alcohol Dependence. 1985;16(2):89–94. doi: 10.1016/0376-8716(85)90108-5.
- 272. Needle R., McCubbin H., Lorence J., Hochhauser M. Reliability and validity of adolescent self-reported drug use in a family-based study: a methodological report. International Journal of the Addictions. 1983;18(7):901–912. doi: 10.3109/10826088309033058.
- 273. Zanes A., Matsoukas E. Different settings, different results? A comparison of school and home responses. Public Opinion Quarterly. 1979;43(4):550–557. doi: 10.1086/268553.
- 274. Fendrich M., Johnson T. P. Examining prevalence differences in three national surveys of youth: impact of consent procedures, mode, and editing rules. Journal of Drug Issues. 2001;31(3):615–642.