Author manuscript; available in PMC: 2021 Jan 1.
Published in final edited form as: Addict Res Theory. 2019 Aug 20;28(4):321–327. doi: 10.1080/16066359.2019.1653860

Underreporting of drug use on a survey of electronic dance music party attendees

Joseph J Palamar a, Austin Le a,b
PMCID: PMC7643632  NIHMSID: NIHMS1047111  PMID: 33162873

Abstract

Objectives:

Skip-logic is commonly used on electronic surveys: the survey program presents follow-up questions after affirmative responses and skips to the next topic after non-affirmative responses. While skip-logic helps produce data without contradictory responses, erroneous non-affirmative reports can lead to loss of accurate information. We examined the extent to which type-in drug use responses contradict unreported use in a survey of a high-risk population—electronic dance music (EDM) party attendees.

Design:

We surveyed 1029 EDM party-attending adults (ages 18–40) using time-space sampling in 2018. We examined the extent to which reporting of recent drug use via type-in responses occurred after past-year use of the same drugs was unreported earlier on the same survey. Changes in prevalence of use and predictors of providing discordant responses were examined.

Results:

3.6% of participants typed in names of drugs they had used that they did not report using earlier on the survey. Changes in prevalence were not significant when correcting contradictory responses, but prevalence of past-year cocaine use increased from 23.3% to 24.3%. Those with a college degree were at lower odds of providing a discordant response (aOR = 0.13, p = .019). Females (aOR = 2.82, p = .022), those earning ≥$1000 per week (aOR = 11.03, p = .011), and those identifying as gay/lesbian (aOR = 5.20, p = .032) or bisexual or other sexuality (aOR = 15.12, p < .001) were at higher odds of providing a discordant response.

Conclusions:

Electronic surveys that query drug use can benefit from follow-up (e.g. open-ended) questions not dependent on previous responses, as they may elicit affirmative responses underreported earlier in the survey.

Keywords: Epidemiology, survey methods, data collection, drug use, targeted population surveys

Introduction

Self-report via electronic surveys is now the most common means of assessing the extent and nature of drug use in epidemiological research. Compared to other methods such as testing for biologic markers, electronic surveys tend to be more practical, cost-efficient, and allow for the collation of more varied and detailed information (Rosay et al. 2007; Safdar et al. 2016; Palamar, Le, et al. 2019). A crucial aspect of producing accurate estimates of drug use is reliability of the data, particularly as it pertains to consistency of self-reported responses. For example, if a respondent initially does not report lifetime use of a drug (veracity assumed), then that respondent should again respond non-affirmatively when subsequently queried about use of the same drug at the end of the survey or on another survey. When inconsistent responses occur, reliability is diminished because of increased variability, and statistical estimates can be biased (Napper et al. 2010; King et al. 2018). Additional research that allows us to better understand the nature of inconsistent reporting is important in improving the reliability of survey-based designs.

In addition to their practicality, electronic surveys permit the use of skip-logic. In skip-logic methodology, an individual’s response to a question determines whether relevant follow-up questions on that topic will be asked. Aside from reducing participant burden stemming from being asked potentially irrelevant questions, this method is also advantageous in allowing researchers to conveniently query a wide variety of phenomena (Swanson et al. 2014), and making data management and analysis easier since skip-logic prevents inconsistent answers (e.g. reporting no lifetime ecstasy use, but then later reporting past-year ecstasy use).
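The routing behavior described above can be sketched in a few lines. This is a minimal illustration with hypothetical question names, not the instrument used in this study:

```python
def route(responses):
    """Return follow-up questions to ask, given gate-question answers.

    Skip-logic: an affirmative answer to a gate question (e.g. lifetime
    ecstasy use) unlocks its follow-ups (e.g. past-year use); a
    non-affirmative answer skips them entirely.
    """
    followups = {
        "lifetime_ecstasy": ["past_year_ecstasy", "past_month_ecstasy"],
        "lifetime_cocaine": ["past_year_cocaine"],
    }
    to_ask = []
    for gate, children in followups.items():
        if responses.get(gate) == "yes":
            to_ask.extend(children)
    return to_ask

# A respondent denying lifetime ecstasy use is never asked about
# past-year ecstasy use, so an erroneous "no" at the gate cannot be
# corrected later in the survey.
print(route({"lifetime_ecstasy": "no", "lifetime_cocaine": "yes"}))
# → ['past_year_cocaine']
```

The sketch makes the limitation discussed below concrete: once the gate is answered non-affirmatively, the follow-ups are simply never presented.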

However, skip-logic has limitations. Several studies examining skip-logic methodology have noted that measurement error can occur. For example, on one survey, nearly half (44.3%) of self-reported aborted suicide attempts would have been erroneously missed because respondents answered negatively to gate questions about psychiatric disorders such as depression (Barber et al. 2001). Another study found that prevalence of self-reported past-year cocaine use based on data from the National Household Survey on Drug Abuse (NHSDA) increased when participants were provided multiple chances to report use (Lessler et al. 2000). These types of measurement errors often lead to underreporting whereby use of a drug is (unintentionally) unreported and there is not a subsequent opportunity to report use (Karam et al. 2014).

Other studies have recently investigated conflicting responses within the same surveys, though with a focus on discordant responses between use of a drug class and specific drugs within a drug class. For example, a recent study on the impact of gate questions found that nearly one in ten (9.3%) participants reporting no “bath salt” use via gate question later reported use of a “bath salt” such as mephedrone, methedrone, or methylone, despite these drugs having been listed as examples of “bath salts” in the gate question (Palamar, Acosta, Calderon, et al. 2017). Other studies have investigated contradictory responses on Monitoring the Future (MTF), an annual survey of a nationally representative sample of high school seniors in the US that, until recently, was conducted using pencil and paper. One study found that among those not reporting past-year nonmedical opioid use (with Vicodin and OxyContin listed as examples of opioids), 37.1% and 28.2% later reported past-year nonmedical Vicodin use and/or nonmedical OxyContin use, respectively, on the same survey (Palamar et al. 2016). A similar analysis of contradictory responses in MTF found that among students not reporting past-year nonmedical amphetamine use (with Adderall having been provided as an example of an amphetamine), 28.7% later reported past-year nonmedical use of Adderall on the same survey (Palamar and Le 2017).

In an effort to further examine the extent to which not reporting known drug use may contribute to measurement error on electronic surveys, this paper seeks to investigate the extent to which type-in drug use responses from a high-risk population—electronic dance music (EDM) party attendees (Kurtz et al. 2013; Palamar, Griffin-Tomas, et al. 2015; Hughes et al. 2017; Palamar, Acosta, Ompad, et al. 2017)—contradict unreported use within the same survey.

Methods

Procedure and participants

Participants were surveyed throughout the summer of 2018 using time-space sampling (MacKellar et al. 2007). Each week, parties (primarily at nightclubs) were randomly selected to survey attendees. A list of upcoming EDM parties in NYC (located primarily in Brooklyn and Manhattan) was created each week. The list was based on websites that sell EDM party tickets, EDM party listings on social media (e.g. Facebook), and recommendations from key informants. We considered parties from ticket websites eligible for random selection if ≥15 tickets were purchased for the party by mid-week. Parties were randomly selected each week using R software (R Core Team 2013). Recruitment typically occurred on one to two nights per week, Thursday through Sunday. Time slots could not be randomly selected as most EDM parties in NYC end at 4am (with a few ending at 5am or 6am). Recruitment was typically conducted between 11:30pm and 1:30am to reach attendees about to enter parties. While most participants (70.4%, n = 724) were surveyed outside of nightclubs, participants were also surveyed outside of two large daytime festivals (29.6%, n = 305), which were not randomly selected. All parties and festivals were limited to those that played EDM exclusively (e.g. festivals that played other genres in addition to EDM were excluded).

Passersby were eligible if they were between 18 and 40 years old and about to enter the randomly selected party or festival. Recruiters approached passersby (alone or in groups), and eligible individuals were asked whether they would be willing to take a survey about drug use. Since inebriation was a concern (Aldridge and Charles 2008), recruiters tried to ensure that individuals were not visibly intoxicated. Specifically, they ensured that those approached before entering parties did not exhibit slurred speech or display impaired attention or gait. After informed consent was provided, surveys were taken on tablets. Participants were compensated $10 USD for completing the survey. Surveys were administered outside of 22 randomly selected parties plus the two festivals, and the overall survey response rate was 74%.

Measures

Participants were asked about their age, sex, race/ethnicity, education, weekly income, and sexual orientation. They were also asked about their frequency of past-year nightclub, festival and/or other EDM party attendance. Participants were then asked about past-year use of various drugs. This investigation focused on use of 11 drugs (i.e. marijuana, ecstasy/MDMA/Molly, powder cocaine, LSD, shrooms, amphetamine [e.g. Adderall], ketamine, synthetic cannabinoids, GHB, methamphetamine, NBOMe) and 6 drug classes. For each drug class, lists of drugs were provided together on the same survey page and affirmative use of any drug in the class was coded as use of the drug class. The classes were prescription opioids (e.g. Vicodin), benzodiazepines (e.g. Xanax), synthetic cathinones (“bath salts”; e.g. methylone), tryptamines (e.g. DMT), 2C series (e.g. 2C-B), and other new psychedelics (e.g. AL-LAD). With regard to use of amphetamine, opioids, and benzodiazepines, we only queried nonmedical use, which was defined as using without a prescription or in a manner in which it was not prescribed to the user; for example, to get high. Thus, we only queried illegal use of each drug.

The last page asked participants if they had used any drugs in the past day. Those checking ‘yes’ were asked to type in the name(s) of the drug(s) they had used. The survey also asked those reporting past-year use of each drug whether they had experienced an adverse outcome related to use of that drug in the past year (Palamar, Acosta, et al. 2019). Those answering affirmatively were asked to type in names of any other drugs they co-used before the adverse outcome in the past year. Of the drugs that were typed in, we determined those which were not reported elsewhere on the survey. We created indicator variables for each drug and further coded variables adding in type-in responses (for past-day use and for drugs co-used with other drugs before the experience of an adverse effect) for drugs not previously reported. We also coded a variable indicating whether any discordant response occurred—with discord being defined as typing in the name of a recently used drug after answering non-affirmatively when queried about use of that drug earlier in the survey.
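The coding step described above can be sketched as follows. Function and field names are hypothetical, and type-in strings are assumed to have already been matched to the survey's drug labels:

```python
def code_discordance(closed_ended, type_ins):
    """Merge type-in reports into past-year indicators and flag discord.

    closed_ended: dict mapping drug name -> bool (past-year use reported
                  via the closed-ended items)
    type_ins:     set of drug names typed into the past-day-use box or
                  the adverse-effect co-use box
    Returns (adjusted, discordant): adjusted past-year use indicators
    with type-ins added in, and the set of drugs whose type-in report
    contradicts an earlier non-affirmative response.
    """
    discordant = {d for d in type_ins if not closed_ended.get(d, False)}
    adjusted = {d: used or (d in discordant)
                for d, used in closed_ended.items()}
    return adjusted, discordant

# A respondent denies past-year cocaine use, then types "powder cocaine"
# as a drug used in the past day: one discordant response.
adjusted, discordant = code_discordance(
    {"marijuana": True, "powder cocaine": False, "ketamine": False},
    {"powder cocaine"},
)
```

By construction, type-ins can only add use to the closed-ended indicators; they can never remove a previously affirmative response.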

Probability weights

Participant selection probabilities were computed based on self-reported frequency of party attendance. Weights incorporated the number of party attendees (tracked via a clicker) who passed a predetermined recruitment line near the party entrance (MacKellar et al. 2007). Weights were inversely proportional to the frequency-of-attendance component and to the party-level response-rate and number-attending components. Weight components were combined by multiplication and normalized. This up-weighting of those believed to have a lower probability of selection (because they were less likely to go out and be surveyed) and down-weighting of those believed to have a higher probability of selection (because they were more likely to go out and be surveyed) is often used in other studies with venue-based sampling (MacKellar et al. 2007; Jenness et al. 2011).
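The weighting scheme can be sketched numerically. This is an illustrative implementation under assumed field names, not the authors' exact specification:

```python
def probability_weights(records):
    """Normalized selection weights for a venue-based (time-space) sample.

    Assumed fields per record (hypothetical names):
      freq   - self-reported past-year party attendance (higher = more
               chances to be sampled)
      rr     - response rate at the party where the person was surveyed
      passed - attendees who crossed the recruitment line (clicker count)
      n      - number actually surveyed at that party
    Each raw weight is the inverse of the components that raise selection
    probability, combined by multiplication; weights are then normalized
    to average 1.
    """
    raw = [(1.0 / r["freq"]) * (1.0 / r["rr"]) * (r["passed"] / r["n"])
           for r in records]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]

# A weekly attendee (freq=52) is down-weighted relative to an occasional
# attendee (freq=4) surveyed at the same party.
weights = probability_weights([
    {"freq": 52, "rr": 0.74, "passed": 200, "n": 40},
    {"freq": 4, "rr": 0.74, "passed": 200, "n": 40},
])
```

The frequent attendee receives the smaller weight, matching the up-/down-weighting logic described above.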

Analyses

We first examined descriptive statistics of the sample. We then estimated the prevalence of use of each drug and the “adjusted” prevalence of each drug, accounting for additional type-in responses. We also subtracted prevalence from adjusted prevalence to compare estimates. Comparison of estimates was conducted using McNemar tests. Finally, we compared each covariate according to whether a discordant type-in response was provided, and we then fit all covariates into a multivariable logistic regression which produced adjusted odds ratios (aORs) for each covariate. Weights were utilized in all analyses and Taylor series estimation was used to obtain accurate standard errors (Heeringa et al. 2010). Data were analyzed using Stata 13 SE (StataCorp 2013).
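The prevalence comparison relies on the paired structure of the data. As an unweighted sketch (the authors used Stata's survey procedures, so this will not reproduce the paper's weighted p-values), the textbook McNemar statistic depends only on the discordant cell counts:

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square statistic for paired binary outcomes.

    b: respondents classified as users only after adjustment (i.e. use
       added from a type-in response)
    c: respondents classified as users only before adjustment (zero by
       construction here, since type-ins can only add reported use)
    The statistic (b - c)^2 / (b + c) is referred to a chi-square
    distribution with 1 degree of freedom; with small discordant counts,
    an exact binomial test is preferable.
    """
    if b + c == 0:
        return 0.0  # no discordant pairs: no evidence of change
    return (b - c) ** 2 / (b + c)

# Powder cocaine in Table 2: 10 respondents gained via type-ins, 0 lost.
print(mcnemar_chi2(10, 0))  # → 10.0
```

Because all discordance runs in one direction in this design, the test reduces to asking whether the count of added users is larger than chance would allow.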

Results

Sample characteristics of those who were administered the street-intercept survey are presented in Table 1. The majority of the sample were male (61.3%), heterosexual (81.5%), and had a college degree or higher (58.6%). Overall, 3.6% of participants typed in the name of a drug they had used that they did not report using earlier in the survey. Of those providing a discordant response, 84.4% provided one discordant response and 15.6% provided two discordant responses. As shown in Table 2, prevalence estimates were not drastically altered when adding in discordant type-in responses, although prevalence of powder cocaine use increased by 1 percentage point and prevalence of amphetamine use increased by 0.8 percentage points. No changes in prevalence were statistically significant.

Table 1.

Sample characteristics (n = 1029).

Weighted % n
Age
 18–24 49.5 504
 25–40 50.5 525
Sex
 Male 61.3 604
 Female 38.7 425
Race/Ethnicity
 White 43.9 494
 Black 10.0 84
 Hispanic 22.1 207
 Asian 15.4 159
 Other/Mixed 8.5 85
Education
 High school diploma or less 18.2 134
 Some college 23.2 245
 College degree or higher 58.6 650
Weekly income
 <$500 33.8 368
 $500–$999 37.5 377
 ≥$1000 28.7 284
Sexual orientation
 Heterosexual 81.5 786
 Gay/Lesbian 11.8 108
 Bisexual or other sexuality 6.6 135
Number of drugs reportedly used in past year
 0–1 drugs 49.0 421
 2–4 drugs 39.9 400
 ≥5 drugs 11.1 208

Number of drugs reportedly used in the past year is the uncorrected number of drugs used—not including updated responses considering type-in responses.

Table 2.

Prevalence of drug use before and after correction given type-in responses.

Past-year prevalence, Weighted % (n) | Adjusted past-year prevalence, Weighted % (n) | Change in prevalence after adjustment, Weighted % (n)
Any drug 77.9 (840) 78.0 (843) 0.1 (3)
Marijuana 64.8 (735) 65.1 (742) 0.3 (7)
Ecstasy/MDMA/Molly 30.5 (421) 31.1 (423) 0.6 (2)
Powder cocaine 23.3 (345) 24.3 (355) 1.0 (10)
LSD 18.0 (235) 18.3 (240) 0.3 (5)
Shrooms (psilocybin) 14.9 (220) 14.9 (221) 0.0 (1)
Amphetamine 9.8 (147) 10.6 (156) 0.8 (9)
Ketamine 9.4 (166) 9.4 (166) 0.0 (0)
Prescription opioids 9.4 (99) 9.7 (102) 0.3 (3)
“Bath Salts” 6.0 (39) 6.0 (39) 0.0 (0)
Benzodiazepines 5.1 (77) 5.6 (78) 0.5 (1)
Synthetic cannabinoids 3.8 (25) 3.8 (25) 0.0 (0)
GHB 2.3 (47) 2.3 (47) 0.0 (0)
Methamphetamine 2.1 (38) 2.1 (39) 0.0 (1)
Tryptamines 2.1 (33) 2.1 (33) 0.0 (0)
2C Series 1.1 (25) 1.1 (25) 0.0 (0)
NBOMe 1.0 (9) 1.0 (9) 0.0 (0)
Other new psychedelics 0.0 (0) 0.0 (0) 0.0 (0)

Use of amphetamine, benzodiazepines, and opioids refers to nonmedical use. Adjusted prevalence includes type-in responses of drugs for which use was unreported earlier on the survey.

Table 3 presents correlates of discordant type-in responses. Compared to males, females were at more than twice the odds (aOR = 2.82, p = .022) of providing a discordant response. Compared to those with a high school education or less, those with a college degree or more were at lower odds of providing a discordant response (aOR = 0.13, p = .019), and those earning ≥$1000 USD a week were at 11 times higher odds of providing such a response (aOR = 11.03, p = .011). Finally, compared to heterosexual participants, those identifying as a sexual minority were at higher odds of providing a discordant response. Specifically, compared to heterosexuals, gay/lesbian participants were at over five times the odds (aOR = 5.20, p = .032) and those identifying as bisexual or other sexuality were at over fifteen times the odds (aOR = 15.12, p < .001) of providing a discordant response, with nearly a fifth (19.0%) of those identifying as bisexual or other sexuality providing a discordant response.

Table 3.

Correlates of providing a discordant response.

No discordant response, Weighted % | Any discordant response, Weighted % | aOR | 95% CI
Age
 18–24 97.8 2.2 1.00
 25–40 95.1 4.9 1.74 (0.44, 6.87)
Sex
 Male 97.2 2.8 1.00
 Female 95.2 4.8 2.82* (1.16, 6.84)
Race/Ethnicity
 White 96.2 3.8 1.00
 Black 97.9 2.1 0.59 (0.07, 5.26)
 Hispanic 96.6 3.4 0.38 (0.07, 1.97)
 Asian 99.4 0.6 0.25 (0.03, 2.25)
 Other/Mixed 90.2 9.8 3.73 (0.74, 18.86)
Education
 High school diploma or less 95.8 4.2 1.00
 Some college 95.4 4.7 0.33 (0.08, 1.44)
 College degree or higher 97.1 2.9 0.13* (0.02, 0.72)
Weekly income
 <$500 98.9 1.1 1.00
 $500–$999 96.1 3.9 4.97 (0.91, 27.22)
 ≥$1000 94.0 6.0 11.03* (1.74, 69.85)
Sexual orientation
 Heterosexual 98.4 1.6*** 1.00
 Gay/Lesbian 91.7 8.3 5.20* (1.16, 23.41)
 Bisexual or other sexuality 81.0 19.0 15.12*** (3.30, 69.41)
Number of drugs reportedly used in past year (Uncorrected)
 0–1 drugs 97.3 2.7 1.00
 2–4 drugs 95.6 4.4 1.91 (0.54, 6.71)
 ≥5 drugs 95.8 4.2 1.19 (0.23, 6.15)

The dependent variable indicates whether the participant provided a discordant response by typing in the name of a drug used that was not previously reported as used on the same survey (3.6%).

aOR: adjusted odds ratio; CI: confidence interval.

* p < .05, ** p < .01, *** p < .001.

Discussion

It is known that inconsistent reporting is a common phenomenon in survey-based drug use studies that diminishes the reliability of the data and, by extension, the accuracy of estimates. Nevertheless, electronic surveys have numerous feasibility advantages over other methods of assessing drug use and, consequently, will likely remain essential for epidemiologic research in drug use. Moreover, electronic surveys enable the use of skip-logic, which effectively limits the chances for respondents to be routed to questions where they may give answers inconsistent with their responses to previous questions. However, erroneous non-affirmative reports can sometimes lead to loss of information on surveys that utilize skip-logic, as follow-up questions are not asked. In this study, we examined the extent to which type-in drug use responses contradict unreported use on a survey of a high-risk population—electronic dance music (EDM) party attendees.

Our findings suggest that approximately 3.6% of respondents provided discordant responses in which they typed in the names of drugs used (typically within the past 24 h) after having not reported past-year use of the same drug(s) earlier on the survey. This is lower than the rate of inconsistent responses typically reported in the survey literature, which ranges from 7% to 87%, though it should be noted that most existing studies focus primarily on discord as it pertains to recanting use that was reported months or years earlier (Percy et al. 2005; Harris et al. 2008; Swanson et al. 2014; Palamar, Acosta, Calderon, et al. 2017; Palamar and Le 2017; Taylor et al. 2017; Jensen et al. 2018).

Although we did not find marked differences in prevalence of past-year use of most drugs after adjusting for discordant responses, use of powder cocaine and amphetamine did increase by 1% and 0.8%, respectively. Several studies have also reported relatively pronounced rates of inconsistent self-reporting for cocaine use (Fendrich and Mackesy-Amiti 2000; Percy et al. 2005; Harris et al. 2008; Ledgerwood et al. 2008), including a study examining conflicting responses on the NHSDA, which found that prevalence increased when participants were provided multiple chances to report past-year cocaine use (Lessler et al. 2000). Other studies have found that over one-fifth of amphetamine use is also discordantly reported (Harris et al. 2008). In particular, one study investigating use of nonmedical amphetamine among a nationally representative sample of high school seniors found that over a quarter of students reporting Adderall use also reported no amphetamine use in the same survey (Palamar and Le 2017). We believe that discordant responses involving nonmedical use of prescription drugs may be, in part, due to participant confusion. For example, some participants likely do not read that “nonmedical use” or “misuse” is being queried (as opposed to medical use), while others may be unaware of individual drugs that fall under these drug classes, even when examples of drugs under each drug class are provided for participants (Palamar 2018).

Our study findings further suggest that some groups are more likely to provide discordant responses than others. For example, compared to males, we found that females were at over 180% higher odds of typing in the name of a drug recently used that they did not report using earlier on the same survey. A previous study examining discordant survey responses among high school seniors in the US also found that females were at higher odds than males for reporting nonmedical Vicodin use after reporting no overall nonmedical opioid use (Palamar et al. 2016), and another study found that females were more likely than males to test positive for drug use after reporting they did not use (Fendrich et al. 2004). While females are less likely to report drug use than males in national samples (Johnston et al. 2019), we believe differences may be, in part, due to underreporting. However, we believe more research is needed to investigate potential mechanisms for this phenomenon.

Compared to those with a high school education or less, those with a college degree or higher in this study were at lower odds of typing in the name of a drug used after previously not reporting use on the survey. This corroborates recent findings showing that those with less education were more likely to provide discordant survey responses for marijuana, cocaine, ecstasy, amphetamine, and other ‘hard’ drugs (Jensen et al. 2018). Similarly, Percy et al. (2005) found that the odds of recanting previously reported use declined with increasing educational attainment, though their analysis was limited to use of marijuana and alcohol. Studies have shown that individuals with lower levels of education are more likely than more educated individuals to satisfice on surveys (Hamby and Taylor 2016), which is defined as respondents investing minimal attention towards survey questions and/or taking shortcuts to complete the survey sooner (Tourangeau 2000).

We also determined that those who reported earning more than $1000 USD per week were more likely to provide an inconsistent response. We expected that individuals with higher income would be less likely to do so, but it may be that some individuals with higher income were less engaged (e.g. the utility of $10 USD compensation may be less significant for respondents with higher income). In terms of sexual orientation, sexual minorities (i.e. gay/lesbian and bisexual) were significantly more likely to provide discordant responses than heterosexuals. Although, to our knowledge, previous studies have not specifically examined discordant survey responses by sexual orientation, studies comparing self-reports with biological testing for drug use have found that men who have sex with men (MSM) tend to provide more reliable responses about use of “harder” drugs than their heterosexual counterparts (Fendrich et al. 2008). Additionally, while individuals of sexual minority status—particularly those identifying as bisexual—have higher prevalence of drug use than heterosexual individuals (Medley et al. 2016; Duncan et al. 2019), bisexual individuals and those identifying as other sexuality were at over fifteen times the odds of providing a discordant response in this study, even after controlling for number of drugs used.

Overall, it appears that discordant responding within the same survey may be more of a problem for certain drugs such as cocaine and amphetamine and less of an issue for some other drugs. Assuming respondents did not overreport drug use via type-in responses, results suggest they incorrectly did not report use earlier in the survey when asked about use. It is unknown whether such false negatives are attributable to lack of attention or to satisficing, or perhaps even to mischievousness. In any case, results of this study lead us to question the extent to which brief closed-ended survey questions facilitate the attainment of reliable responses. While relying solely on type-in responses (without providing any closed-ended responses) leads to severe under-reporting of drug use (Kroutil et al. 2010; Palamar, Martins, et al. 2015; Palamar and Le 2019), it does appear that the nature of open-ended type-in responses may inherently prompt respondents to consider more thoroughly the question at hand, perhaps at times leading to more accurate responses. Or, asking more specifically about situational use may aid in recall more so than general questions about any use in the past year. However, asking about situational use may not be feasible (e.g. on short surveys). Ultimately, we believe that including one or more type-in responses in addition to closed-ended questions may help determine instances of underreporting.

When considering the existing literature, discordant responses—in particular, recanting of previously reported responses—appear more likely to occur between a baseline and follow-up survey after a non-negligible amount of time has passed. More research is needed to determine not only who is more likely to provide inaccurate or conflicting data, but also optimal means of handling conflicting responses. Contradictory responses are commonly deleted from national surveys (Brener et al. 2004; Miech et al. 2018) while other researchers delete and impute contradictory data (Center for Behavioral Health Statistics and Quality 2018). Deleting data may bias results as much as inaccurate data, so additional methods for handling conflicting responses would indeed be beneficial. Biological testing can help correct inaccurate survey responses, and perhaps post-hoc interviews can address conflicting responses and determine which response is in fact correct (Johnson and Bowman 2003). In this study, we added type-in responses to previous closed-ended responses as we do not believe type-in responses were instances of over-reporting. Although prevalence of drug use did not substantially change when considering discordant responses, such responses might lead to significant changes in other surveys or in other populations.

We do not believe instances of initial underreporting in this study are due to social desirability concerns as participants later typed in the names of drugs they had recently used. We believe discrepancies are more so due to differences in attention directed toward survey questions. More research is needed to further investigate reasons for and differences resulting from discordant reporting on drug surveys. Likewise, we determined various demographic risk factors for providing discordant responses and we believe future research is needed to determine the mechanisms for these characteristics increasing risk.

Limitations

We believe the main limitation of this study is that type-in responses were limited to drug use in the past day and to drugs associated with adverse effects in the past year. Because these type-in responses were limited to very recent use and to involvement in adverse effects, they likely did not allow us to detect less recent past-year use that was underreported earlier. In addition, type-in response options are in themselves commonly associated with underreporting (Kroutil et al. 2010; Palamar, Martins, et al. 2015; Palamar and Le 2019), so it is unlikely that these response boxes picked up all instances of earlier underreported use. Past-day use was not defined for participants, so some might have interpreted this as use on the day of the survey and others as use in the past 24 h. Limited recall of past-day drug use and of drugs used before adverse effects in the past year is also a limitation. It is also possible that some participants used a particular drug for the first time on the day of the survey but did not consider that drug when answering about past-year use. Likewise, although the survey only focused on illegal/nonmedical drug use, it is possible some participants typed in names of prescription drugs used only for medical purposes. Although we did not include participants who demonstrated inebriation, it is possible that some participants were in fact inebriated but did not display visible signs. This investigation focused only on illegal drugs, so we did not examine alcohol use. Results may not be generalizable beyond the New York City EDM scene.

Conclusion

Overall, our findings suggest that prevalence estimates were not significantly affected by inconsistent responses, but we do believe drug use in general was underreported. Electronic surveys that query drug use can benefit from follow-up (e.g. open-ended) questions not dependent on previous responses, as they may elicit affirmative responses intentionally or unintentionally underreported earlier in the survey. However, type-in responses will by no means allow researchers to detect all underreporting. Electronic survey methods that better allow researchers to detect underreporting and/or discordant responses could greatly improve data in future studies.

Acknowledgments

Funding

This work was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Numbers K01DA038800 and R01DA044207. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  1. Aldridge J, Charles V. 2008. Researching the intoxicated: informed consent implications for alcohol and drug research. Drug Alcohol Depend. 93(3):191–196.
  2. Barber ME, Marzuk PM, Leon AC, Portera L. 2001. Gate questions in psychiatric interviewing: the case of suicide assessment. J Psychiatr Res. 35(1):67–69.
  3. Brener ND, Kann L, Kinchen SA, Grunbaum JA, Whalen L, Eaton D, Hawkins J, Ross JG. 2004. Methodology of the youth risk behavior surveillance system. MMWR Recomm Rep. 53(RR-12):1–13.
  4. Center for Behavioral Health Statistics and Quality. 2018. 2016 National Survey on Drug Use and Health: methodological resource book (section 10, editing and imputation report). Rockville (MD): Substance Abuse and Mental Health Services Administration.
  5. Duncan DT, Zweig S, Hambrick HR, Palamar JJ. 2019. Sexual orientation disparities in prescription opioid misuse among U.S. adults. Am J Prev Med. 56(1):17–26.
  6. Fendrich M, Johnson TP, Wislar JS, Hubbell A, Spiehler V. 2004. The utility of drug testing in epidemiological research: results from a general population survey. Addiction. 99(2):197–208.
  7. Fendrich M, Mackesy-Amiti ME, Johnson TP. 2008. Validity of self-reported substance use in men who have sex with men: comparisons with a general population sample. Ann Epidemiol. 18(10):752–759.
  8. Fendrich M, Mackesy-Amiti ME. 2000. Decreased drug reporting in a cross-sectional student drug use survey. J Subst Abuse. 11(2):161–172.
  9. Hamby T, Taylor W. 2016. Survey satisficing inflates reliability and validity measures: an experimental comparison of college and Amazon Mechanical Turk samples. Educ Psychol Meas. 76(6):912–932.
  10. Harris KM, Griffin BA, McCaffrey DF, Morral AR. 2008. Inconsistencies in self-reported drug use by adolescents in substance abuse treatment: implications for outcome and performance measurements. J Subst Abuse Treat. 34(3):347–355.
  11. Heeringa SG, West BT, Berglund PA. 2010. Applied survey data analysis. London: Chapman and Hall/CRC Press.
  12. Hughes CE, Moxham-Hall V, Ritter A, Weatherburn D, MacCoun R. 2017. The deterrent effects of Australian street-level drug law enforcement on illicit drug offending at outdoor music festivals. Int J Drug Policy. 41:91–100.
  13. Jenness SM, Neaigus A, Murrill CS, Gelpi-Acosta C, Wendel T, Hagan H. 2011. Recruitment-adjusted estimates of HIV prevalence and risk among men who have sex with men: effects of weighting venue-based sampling data. Public Health Rep. 126(5):635–642.
  14. Jensen HAR, Karjalainen K, Juel K, Ekholm O. 2018. Consistency in adults' self-reported lifetime use of illicit drugs: a follow-up study over 13 years. J Stud Alcohol Drugs. 79(3):490–494.
  15. Johnson TP, Bowman PJ. 2003. Cross-cultural sources of measurement error in substance use surveys. Subst Use Misuse. 38(10):1447–1490.
  16. Johnston LD, Miech RA, O'Malley PM, Bachman JG, Schulenberg JE, Patrick ME. 2019. Demographic subgroup trends among adolescents in the use of various licit and illicit drugs, 1975–2018. Ann Arbor (MI): Institute for Social Research, University of Michigan.
  17. Karam EG, Sampson N, Itani L, Andrade LH, Borges G, Chiu WT, Florescu S, Horiguchi I, Zarkov Z, Akiskal H. 2014. Under-reporting bipolar disorder in large-scale epidemiologic studies. J Affect Disord. 159:147–154.
  18. King KM, Kim DS, McCabe CJ. 2018. Random responses inflate statistical estimates in heavily skewed addictions data. Drug Alcohol Depend. 183:102–110.
  19. Kroutil LA, Vorburger M, Aldworth J, Colliver JD. 2010. Estimated drug use based on direct questioning and open-ended questions: responses in the 2006 National Survey on Drug Use and Health. Int J Methods Psychiatr Res. 19(2):74–87.
  20. Kurtz SP, Surratt HL, Buttram ME, Levi-Minzi MA, Chen M. 2013. Interview as intervention: the case of young adult multidrug users in the club scene. J Subst Abuse Treat. 44(3):301–308.
  21. Ledgerwood DM, Goldberger BA, Risk NK, Lewis CE, Price RK. 2008. Comparison between self-report and hair analysis of illicit drug use in a community sample of middle-aged men. Addict Behav. 33(9):1131–1139.
  22. Lessler JT, Caspar RA, Penne MA, Barker PR. 2000. Developing computer assisted interviewing (CAI) for the National Household Survey on Drug Abuse. J Drug Issues. 30(1):9–33.
  23. MacKellar DA, Gallagher KM, Finlayson T, Sanchez T, Lansky A, Sullivan PS. 2007. Surveillance of HIV risk and prevention behaviors of men who have sex with men: a national application of venue-based, time-space sampling. Public Health Rep. 122(1_suppl):39–47.
  24. Medley G, Lipari RN, Bose J, Cribb DS, Kroutil LA, McHenry G. 2016. Sexual orientation and estimates of adult substance use and mental health: results from the 2015 National Survey on Drug Use and Health. Rockville (MD): Substance Abuse and Mental Health Services Administration.
  25. Miech RA, Johnston LD, O'Malley PM, Bachman JG, Schulenberg JE, Patrick ME. 2018. Monitoring the Future national survey results on drug use, 1975–2017: volume I, secondary school students. Ann Arbor (MI): Institute for Social Research, University of Michigan.
  26. Napper LE, Fisher DG, Johnson ME, Wood MM. 2010. The reliability and validity of drug users' self-reports of amphetamine use among primarily heroin and cocaine users. Addict Behav. 35(4):350–354.
  27. Palamar JJ, Acosta P, Calderon FF, Sherman S, Cleland CM. 2017. Assessing self-reported use of new psychoactive substances: the impact of gate questions. Am J Drug Alcohol Abuse. 43(5):609–617.
  28. Palamar JJ, Acosta P, Le A, Cleland CM, Nelson LS. 2019. Adverse drug-related effects among electronic dance music party attendees. Int J Drug Policy. 73:81–87.
  29. Palamar JJ, Acosta P, Ompad DC, Cleland CM. 2017. Self-reported ecstasy/MDMA/"Molly" use in a sample of nightclub and dance festival attendees in New York City. Subst Use Misuse. 52(1):82–91.
  30. Palamar JJ, Griffin-Tomas M, Ompad DC. 2015. Illicit drug use among rave attendees in a nationally representative sample of US high school seniors. Drug Alcohol Depend. 152:24–31.
  31. Palamar JJ, Le A, Guarino H, Mateu-Gelabert P. 2019. A comparison of the utility of urine- and hair testing in detecting self-reported drug use among young adult opioid users. Drug Alcohol Depend. 200:161–167.
  32. Palamar JJ, Le A. 2017. Discordant reporting of nonmedical amphetamine use among Adderall-using high school seniors in the US. Drug Alcohol Depend. 181:208–212.
  33. Palamar JJ, Le A. 2019. Use of new and uncommon synthetic psychoactive drugs among a nationally representative sample in the United States, 2005–2017. Hum Psychopharmacol. 34(2):e2690.
  34. Palamar JJ, Martins SS, Su MK, Ompad DC. 2015. Self-reported use of novel psychoactive substances in a US nationally representative survey: prevalence, correlates, and a call for new survey methods to prevent underreporting. Drug Alcohol Depend. 156:112–119.
  35. Palamar JJ, Shearston JA, Cleland CM. 2016. Discordant reporting of nonmedical opioid use in a nationally representative sample of US high school seniors. Am J Drug Alcohol Abuse. 42(5):530–538.
  36. Palamar JJ. 2018. Barriers to accurately assessing prescription opioid misuse on surveys. Am J Drug Alcohol Abuse. 45(2):1–7.
  37. Percy A, McAlister S, Higgins K, McCrystal P, Thornton M. 2005. Response consistency in young adolescents' drug use self-reports: a recanting rate analysis. Addiction. 100(2):189–196.
  38. R Core Team. 2013. R: a language and environment for statistical computing. Vienna (Austria): R Foundation for Statistical Computing. http://www.R-project.org/.
  39. Rosay AB, Najaka SS, Herz D. 2007. Differences in the validity of self-reported drug use across five factors: gender, race, age, type of drug, and offense seriousness. J Quant Criminol. 23(1):41–58.
  40. Safdar N, Abbo LM, Knobloch MJ, Seo SK. 2016. Research methods in healthcare epidemiology: survey and qualitative research. Infect Control Hosp Epidemiol. 37(11):1272–1277.
  41. StataCorp. 2013. Stata Statistical Software: Release 13. College Station (TX): StataCorp LP.
  42. Swanson SA, Brown TA, Crosby RD, Keel PK. 2014. What are we missing? The costs versus benefits of skip rule designs. Int J Methods Psychiatr Res. 23(4):474–485.
  43. Taylor M, Sullivan J, Ring SM, Macleod J, Hickman M. 2017. Assessment of rates of recanting and hair testing as a biological measure of drug use in a general population sample of young people. Addiction. 112(3):477–485.
  44. Tourangeau R. 2000. The psychology of survey response. 1st ed. Cambridge University Press.