Abstract
Background:
Intercept surveys are a relatively inexpensive method to rapidly collect data on drug use. However, querying use of dozens of drugs can be time-consuming. We determined whether a rapid screener is efficacious in detecting which participants use drugs and should therefore be offered a full survey that asks more extensively about use.
Methods:
We surveyed 103 adults (age 18–29) on the streets of Manhattan, NY in 2019 to test the reliability of a screener that queried past-year use of six drugs. Those reporting any drug use on the screener (and a third of those not reporting drug use) were offered the full survey, which queried use of 97 drugs. We compared self-reported use on the screener to the full survey.
Results:
Self-reported use of ecstasy, cocaine, and LSD had high test-retest reliability (Kappa = 0.90–1.00), and the screener had high sensitivity (1.00) and specificity (0.97–1.00) in detecting use of these drugs. Reliability for marijuana (Kappa = 0.62) and nonmedical opioid use (Kappa = 0.75) was lower. The screener had higher sensitivity (0.94) and lower specificity (0.64) in detecting marijuana use, and lower sensitivity (0.71) and higher specificity (0.98) in detecting nonmedical opioid use. Within the full survey, all participants reporting use of amphetamine (nonmedical use), shrooms, poppers, synthetic cannabinoids, synthetic cathinones, novel psychedelics, ketamine, or GHB reported use of at least one drug queried on the screener.
Conclusions:
Self-reported use of common drugs on a screener can reliably be used as an inclusion criterion for more extensive intercept surveys about drug use behavior.
Keywords: screening, intercept surveys, validation, marijuana
Introduction
Drug surveys are the leading method to estimate the prevalence of drug use, identify at-risk groups, determine barriers to care, and advance knowledge of health-related behavior. National household and school-based surveys are the main source of such data, but they require an enormous amount of time, effort, and monetary resources. Street intercept surveys, although not always based on designs as advanced as national surveys, are a method used to rapidly survey the general population or subpopulations of interest. Intercept surveys have multiple advantages. One advantage is that they are administered at the point of recruitment rather than relying on participants to be available at a future date. Another advantage—particularly when data collection is anonymous—is that intercept surveys can collect sensitive data (e.g. about drug use) from both general and hard-to-reach populations. However, like all surveys, street intercept surveys require adequate response rates and reliable data to produce less biased estimates of drug use. In this study, we tested a rapid screener administered before a full drug survey in an effort to capture data that would otherwise be lost when individuals refuse a longer survey.
Research suggests shorter surveys often yield higher response rates than longer surveys (Edwards et al., 2004; Edwards et al., 2009; Sahlqvist et al., 2011). Nonresponse can be detrimental to studies, as it has often been found to be associated with underestimates of drug use (Ahacic et al., 2013; Christensen et al., 2015; Maclennan et al., 2012; McCabe & West, 2016; Meiklejohn et al., 2012). Further, maintaining the attention of young people during survey administration can be challenging—especially during longer surveys. Lack of attention and motivation can manifest as ‘satisficing’, in which participants resort to unthoughtful responses to finish sooner (Krosnick, 1991; Tourangeau et al., 2006). For example, some participants take shortcuts such as simply answering ‘no’ to most questions, and such shortcuts are associated with underreporting of behaviors. Shorter surveys can help increase response rates, prevent participant exhaustion, and prevent satisficing.
While it is not always possible to reduce the length of a survey (to increase response rates), administering a rapid screener before a survey can yield useful information—not only about who agrees to take the full survey, but also some data that would otherwise be lost among those who refuse it. For example, in 2019, the National Survey on Drug Use and Health (NSDUH), a national survey of noninstitutionalized individuals in the US, had a screening response rate of 71% and a 65% response rate for the full survey (Center for Behavioral Health Statistics & Quality, 2020a). Given the tendency for response rates to drop between screening and the full study, not asking about any drug use on such a screener can lead to a loss of valuable data that could inform estimates of drug use prevalence. Further, some underrepresented or underserved populations such as the homeless are inevitably underrepresented in household surveys (Buschmann, 2019; Miller et al., 1997), so intercept surveys conducted on the street provide greater access to these groups and thus can make results more generalizable to the full population.
It is also not always feasible to ask about use of dozens of different drugs, especially on an intercept survey, but estimating use of less common drugs may indeed be a goal of the researcher. Asking participants about many drugs can be time-consuming, especially considering that this often yields low prevalence estimates. For example, drugs like ketamine and GHB are used by <1% of the general US population, as are new psychoactive substances such as synthetic cathinones (“bath salts”) and novel psychedelics (Center for Behavioral Health Statistics & Quality, 2020b; Miech et al., 2020). This suggests that fewer than two participants out of 100 in the general population will report use of any one of the dozens of less common drugs asked about. Therefore, some researchers and participants alike may not feel that asking about so many drugs is worth the time (or compensation). We believe that instead of asking all participants to answer questions about dozens of drugs—including many which they have never heard of—it might be more useful for some studies to limit these questions to those who report use of a more common drug such as marijuana. Various national studies have found that almost all participants who use other drugs have also used marijuana (Palamar et al., 2017; Palamar et al., 2017; Teter et al., 2020; Wu et al., 2006). As such, we developed a rapid screening tool to test whether asking about past-year use of six common drugs can serve as an adequate indicator for more extensive drug use.
While intercept surveys by their nature are typically not very long, we believe using a short screener before the survey can not only increase response rates and acquire some data from those who would have refused the full survey, but also focus the core questions on participants who are likely to provide more data on the topic of interest (e.g. drug use). Conducting a rapid (2-minute) screener and administering the full survey only to those reporting drug use can thus save ‘low-risk’ participants time, and this two-stage approach can save researchers the cost of fully compensating participants to take a longer survey about drug use when it is unlikely that they use any drugs. In this pilot study, we tested 1) whether not reporting use of a common drug on a screener serves as an indicator for not reporting more extensive drug use on a full survey, and 2) whether screening for drug use reliably detects drug use compared to a full survey.
Methods
We conducted our anonymous screener and street intercept survey in May 2019. We selected areas throughout Greenwich Village in Manhattan, NY to recruit participants. To be eligible, individuals had to be age 18–29 and speak English. We aimed to approach every third person, but similar to other intercept survey studies, this was not always exact due to erratic foot traffic and crowds of people (Bryant-Stephens et al., 2011). Trained recruiters approached individuals (who were alone or in groups) who were walking by, standing around, sitting, or waiting for public transportation (Becker et al., 1998; Miller et al., 1997). They avoided approaching individuals who were walking hurriedly, wearing headphones, or talking on their mobile devices (Davis & Evans, 2018).
Individuals approached were asked their age and if they would be willing to take a 2-minute drug survey for which they would be compensated with a $2 bill upon completion. This compensation was chosen because $2 bills are widely considered novelty items. In fact, a previous study found that participants were more likely to respond when compensation was a $2 bill compared to when a $5 check was offered (Doody et al., 2003). It was also explained that some participants would be offered the longer survey right after the screener, and that those completing the full survey would receive $10 compensation instead of $2. The overall response rate was 48%. The response rate was higher (75%) in or near park areas where fewer people appeared to be in a hurry, and lower (32%) in high-foot-traffic sidewalk areas (e.g. outside of subway exits). All study methods were approved by the New York University Langone Medical Center institutional review board.
Measures
After providing informed consent on the first page of the screener on an electronic tablet, participants were asked about demographic characteristics including age, sex, and educational attainment. The next page asked whether the participant had engaged in past-year use of 1) marijuana, 2) ecstasy or Molly, 3) cocaine, 4) LSD, 5) heroin, and 6) prescription opioids (nonmedically). Nonmedical opioid use was defined as use when the drug was not prescribed to the participant or when the drug was used to get high. Examples of opioids were listed as follows: Vicodin, OxyContin, Percocet, codeine (including Tylenol 3), morphine, Roxicodone, Dilaudid, tramadol, methadone, fentanyl, and “Lean” (“Sizzurp” or “Purple Drank” containing codeine). Lean containing codeine was added as this concoction is somewhat commonly used in party scenes (Palamar et al., 2018). Answer options for each item were ‘yes’ and ‘no’. Those answering affirmatively to use of any drug were then taken to a page offering the full version of the survey (which was programmed to continue from the screener). Those who agreed to take the full survey clicked that they agreed and proceeded to take it. Participants who were eligible for the full survey but did not agree to take it clicked that they were not interested and then the screener ended. Those not reporting any drug use on the screener were also taken to a page saying they had completed the screener, although a third of these participants were randomly assigned via the survey program to be offered the full survey. This was done to test whether any of these participants reported drug use after not reporting use on the screener.
Those who took the full survey were asked about past-year use of 97 different drugs including the drugs they were asked about on the screener—marijuana, cocaine, ecstasy (MDMA, Molly), LSD, heroin, and nonmedical prescription opioid use. Nonmedical use of prescription opioids, however, was queried on the full survey by asking about nine separate opiates or opioid formulations: Vicodin (or other hydrocodone), OxyContin, Other oxycodone (such as Percocet, Roxicodone), codeine (including Tylenol 3), tramadol, morphine, Dilaudid (or other hydromorphone), methadone, and Lean (containing codeine). Reporting use of any was recoded as report of any nonmedical opioid use. A previous study using this full version of the survey along with a longitudinal component found test-retest reliability was strong or almost perfect for all 17 drugs examined (Kappa range: 0.88–1.00) (Palamar et al., 2019). The survey has also been found to have particularly high specificity (0.89–1.00) for 11 out of 14 drugs and drug classes examined using hair test results as the gold standard (Palamar et al., 2021).
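The recoding described above (collapsing nine separate opioid items into one any-nonmedical-opioid indicator) can be sketched as follows. This is a minimal illustration, and the item names are hypothetical placeholders, not the survey's actual variable names:

```python
# Recode nine opioid-specific items into one any-nonmedical-opioid indicator.
# Item names below are hypothetical placeholders, not the survey's variables.
opioid_items = [
    "vicodin", "oxycontin", "other_oxycodone", "codeine",
    "tramadol", "morphine", "dilaudid", "methadone", "lean",
]

def any_opioid_use(responses):
    """Return 1 if past-year use was reported for any opioid item, else 0.

    `responses` maps each item to 1 (yes) or 0 (no); missing items count as no.
    """
    return int(any(responses.get(item, 0) == 1 for item in opioid_items))

# Example: a participant reporting only codeine use is coded affirmative.
participant = {item: 0 for item in opioid_items}
participant["codeine"] = 1
print(any_opioid_use(participant))  # → 1
```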
Analysis
We first calculated the prevalence of self-reported past-year use of the six drugs among those who completed the screener (n = 103). We then calculated and compared prevalence based on those who took the screener and who also completed the full survey (n = 68). We used McNemar tests to determine whether there were significant shifts in reported prevalence from the screener to the full survey and calculated test-retest reliability via the Kappa statistic (Cohen, 1960). We then calculated the sensitivity and specificity of responses to determine the extent to which the screener correctly classified self-reported use on the full survey. Finally, we determined whether anyone not reporting drug use on the screener later reported any use on the full survey, and whether participants reporting use of other drugs on the full survey reported use of at least one of the drugs queried on the screener. Analyses were conducted using Stata 13.
Results
Participant characteristics are presented in Table 1. On average, participants were aged 21.0 years (SD = 2.7, range: 18–29), and the majority (55.3%) identified as female. The plurality identified as white (36.9%) and had a high school diploma or GED as their highest educational attainment (35.9%).
Table 1.
Sample Characteristics of those screened for the street intercept survey (n = 103).
| Characteristic | % (n) |
|---|---|
| Age, year | M = 21.0, SD = 2.7 |
| Sex | |
| Male | 44.7 (46) |
| Female | 55.3 (57) |
| Race/Ethnicity | |
| White, Non-Hispanic | 36.9 (38) |
| Black, Non-Hispanic | 15.5 (16) |
| Hispanic | 14.6 (15) |
| Asian | 24.3 (25) |
| Other/Mixed | 8.7 (9) |
| Education | |
| Less than a high school diploma | 2.9 (3) |
| High school diploma or GED | 35.9 (37) |
| Some college | 31.1 (32) |
| College degree | 29.1 (30) |
| Graduate school | 1.0 (1) |
| Residence | |
| Resides within a few blocks away | 34.0 (35) |
| Resides in Manhattan, but farther away | 19.4 (20) |
| Resides in an outer borough | 31.1 (32) |
| Resides outside of New York City | 15.5 (16) |
Note. M = mean, SD = standard deviation.
Table 2 compares prevalence of reported drug use on the screener and on the full survey. Three quarters (74.8%, n = 77) of participants reported use of at least one drug on the screener and were offered the full survey, and of these, 68 (88.3%) participated in the full survey. The screener often elicited a higher number of participants reporting use. For example, 74 participants reported marijuana use on the screener while only 54 reported use on the full survey—mainly due to nonparticipation in the full survey, but also due to potential underreporting later on. While everyone reporting ecstasy use on the screener also reported use on the full survey, not all participants reporting marijuana, LSD, or heroin use on the screener reported use on the full survey, and an additional two participants reported nonmedical opioid use on the full survey compared to the screener. This led to minor shifts in prevalence across drugs, although none were significant per McNemar tests. Self-reported use of ecstasy, cocaine, and LSD had high test-retest reliability (Kappa = 0.90–1.00), and the screener had high sensitivity (1.00) and specificity (0.97–1.00) in detecting use of these drugs. Reliability for marijuana (Kappa = 0.62) and nonmedical opioid use (Kappa = 0.75) was lower. The screener had higher sensitivity (0.94) and lower specificity (0.64) in detecting marijuana use, and lower sensitivity (0.71) and higher specificity (0.98) in detecting nonmedical opioid use.
Table 2.
Comparison of prevalence of self-reported drug use between the screener and full survey.
| Drug | Reported Use on Screener, Full Screened Sample (n = 103), % (n) | Reported Use on Screener (n = 68), % (n) | Reported Use on Full Survey (n = 68), % (n) | Kappa | McNemar p | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|
| Marijuana | 71.8 (74) | 82.4 (56) | 79.4 (54) | 0.62 | 0.73 | 0.94 | 0.64 |
| Cocaine | 16.5 (17) | 19.1 (13) | 16.2 (11) | 0.90 | 0.50 | 1.00 | 0.97 |
| Ecstasy | 9.7 (10) | 13.2 (9) | 13.2 (9) | 1.00 | 1.00 | 1.00 | 1.00 |
| Opioids | 6.8 (7) | 8.8 (6) | 10.3 (7) | 0.75 | 1.00 | 0.71 | 0.98 |
| LSD | 7.8 (8) | 10.3 (7) | 8.8 (6) | 0.92 | 1.00 | 1.00 | 0.98 |
| Heroin | 1.0 (1) | 1.5 (1) | 0.0 (0) | -- | -- | -- | 0.99 |
Note. Opioid use refers to nonmedical prescription opioid use. Ecstasy also refers to MDMA and Molly. Kappa, McNemar, and sensitivity could not be calculated for heroin because no participants reported use on the full survey.
Finally, when examining whether those reporting use of other drugs on the full survey reported use of drugs on the screener (Table 3), all participants reporting use of amphetamine (nonmedical use), shrooms, poppers, synthetic cannabinoids, synthetic cathinones (“bath salts”), novel psychedelics, ketamine, or gamma-hydroxybutyrate (GHB) reported use of at least one drug queried on the screener. Nonmedical benzodiazepine use, however, was reported by two participants who did not report any drug use on the screener.
Table 3.
Other reported past-year drug use according to whether drug use was reported on the screener.
| Drug | Among Those Reporting Any Drug Use on Screener (n = 59), % (n) | Among Those Reporting Marijuana Use on Screener (n = 56), % (n) | Among Those Not Reporting Any Drug Use on Screener (n = 9), % (n) |
|---|---|---|---|
| Benzodiazepines (nonmedical) | 10.2 (6) | 7.1 (4) | 22.2 (2) |
| Amphetamine (nonmedical) | 13.6 (8) | 14.3 (8) | 0.0 (0) |
| Shrooms | 6.8 (4) | 7.1 (4) | 0.0 (0) |
| Poppers | 6.8 (4) | 7.1 (4) | 0.0 (0) |
| Synthetic Cannabinoids | 5.1 (3) | 5.4 (3) | 0.0 (0) |
| Synthetic Cathinones | 5.1 (3) | 5.4 (3) | 0.0 (0) |
| Novel Psychedelics | 5.1 (3) | 5.4 (3) | 0.0 (0) |
| Ketamine | 1.7 (1) | 1.8 (1) | 0.0 (0) |
| GHB | 1.7 (1) | 1.8 (1) | 0.0 (0) |
Note. A total of 68 participants completed the full survey—59 of whom reported drug use on the screener and 9 of whom did not report drug use on the screener. Past-year use of synthetic cathinones (“bath salts”), novel psychedelics, and nonmedical use of benzodiazepines was coded affirmatively based on the participant reporting use of any drug listed within that category.
Discussion
This pilot study tested the efficacy of adding a rapid screener to an intercept survey to determine which participants should be offered a full survey that asks more extensively about drug use. As expected, a higher number of participants reported use of each drug other than heroin on the screener. This was due, in part, to some participants reporting drug use and then declining participation in the full survey. These individuals would likely have been nonresponders if only the full survey had been offered. Depending on the researcher’s aims, estimates for some common drugs based only on the screener may be more accurate than estimates based on the full survey, given the screener’s higher response rate. As such, we believe screener results can be incorporated with full survey results to produce overall estimates of drug use (assuming nonuse among those not reporting any use on the screener). However, researchers would need to decide how best to handle contradictory affirmative responses among those taking both the screener and the full survey (e.g. code any affirmative response as reporting use).
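As a minimal sketch of the combination rule suggested above (coding any affirmative response as reporting use, and assuming nonuse among screener-negatives who did not take the full survey), using hypothetical 0/1 response codes rather than the study's actual coding:

```python
# Sketch of one way to combine screener and full-survey reports into a
# single use indicator. Illustrative only, not the study's implementation.
def combined_use(screener_resp, survey_resp):
    """screener_resp: 0/1; survey_resp: 0/1, or None if the full survey
    was refused or not offered."""
    if survey_resp is None:
        return screener_resp  # the screener is the only available report
    # Any affirmative response on either instrument is coded as use.
    return int(screener_resp == 1 or survey_resp == 1)

print(combined_use(1, 0))     # → 1 (discordant pair coded as use)
print(combined_use(0, None))  # → 0 (screener-negative, no full survey)
```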
Despite minor (nonsignificant) shifts in prevalence and some discordant responses between modalities, test-retest reliability and sensitivity/specificity tended to be high for most drugs examined. We thus believe that while this piloted method is by no means perfect, it appears to be a reliable instrument for determining whether a participant needs to be asked about more in-depth drug use behavior. We are unable to deduce why some participants reported use of marijuana, cocaine, or LSD and then reported no use on the full survey. Underreporting of drug use on the full survey could have occurred if a participant learned early on that an affirmative response may lead to follow-up questions. However, all three drugs were queried near the beginning of the full survey, so we do not believe changed responses resulted from survey exhaustion. We also believe that overreporting on the screener was unlikely, as this phenomenon is most common among adolescents (e.g. those more likely to provide mischievous responses) (Norwood et al., 2016; Percy et al., 2005; Robinson-Cimpian, 2014). However, it is possible that some participants suspected that reporting use on the screener would result in being offered the full survey, which provided more compensation.
Two additional participants reported nonmedical opioid use on the full survey after not reporting use on the screener. This may be because nonmedical use of opioids or other prescription drugs can be a confusing topic for many participants, as some struggle to differentiate between medical and nonmedical use (Palamar, 2018). Some participants also likely underreport use because they are not aware that the drug they used was an opioid, and others likely overreport nonmedical use because they do not read the question closely enough to notice that it asks only about nonmedical use. In addition, while we queried nonmedical opioid use on the screener using a single item, we asked about use of nine separate opioids on the full survey. Separate items could have led some participants to read the questions more closely or to recognize that a drug they used was in fact an opioid. Further, asking more questions may simply increase the likelihood of an affirmative response.
All participants reporting use of other illegal drugs on the full survey reported some drug use on the screener, which suggests such a screener may indeed be a useful tool to determine who should take a more in-depth survey about drug use when the researcher is aiming to conserve resources. Nonmedical benzodiazepine use was the only exception, with two participants reporting use after not reporting any drug use on the screener. If such a tool is used in the future, adding nonmedical benzodiazepine use to the screener may further increase efficacy. All participants reporting drug use on the full survey reported marijuana use on the screener, which further suggests the usefulness of this question as an inclusion criterion. We believe it is important to note that using marijuana as an inclusion criterion is by no means meant to suggest that marijuana use leads to other drug use; rather, past research has shown that the vast majority of people who use other drugs also use marijuana (Palamar et al., 2017; Palamar et al., 2017; Teter et al., 2020; Wu et al., 2006). While all participants reporting other drug use on our full survey reported marijuana use on the screener, we do recommend asking about multiple common drugs on the screener because not all people who use other drugs use marijuana. Including multiple drugs likely increases the probability of detecting use of other drugs not queried on the screener. A possible alternative to asking about specific drugs is to simply ask participants whether they have used any illegal drug in the past year, but this leads to a loss of data on specific common drugs of interest among those who refuse to take the full survey.
It should also be noted that drugs included on such a screener may depend on the population of interest. For example, asking about use of drugs such as ketamine and GHB would likely be more efficacious on a screener used for nightclub-attending populations compared to other populations. Finally, in populations in which drug use is highly prevalent, adding such a screener may actually take participants more time, on average, to complete the full battery of questions. A quarter of those screened in our study were determined ineligible for the full survey via screening, and some researchers may not consider screening out 25% of participants to be worth the effort. We recommend using a screener when a sizable portion of individuals in a population are expected to have not recently used drugs.
Limitations
This study was based on a relatively small sample of young adults, and it was conducted in NYC, where marijuana has been decriminalized, so results may be less generalizable to areas with more conservative drug policies. Overall response rates were less than adequate, particularly in fast-paced foot-traffic areas. Only a third of those not reporting drug use on the screener were offered the full survey. We did not include a question on the full survey asking participants whether they believed their responses matched their screener responses and, if not, why not; adding such a question could help determine why discordant responses occurred. Finally, we are not able to determine which participants under- or overreported drug use when responses were contradictory between survey modes.
Conclusions
This rapid screener appears to reliably assess past-year drug use compared to a street intercept survey. Results suggest that all participants reporting illegal drug use on our full survey reported use of marijuana or select other common drugs on the screener. This indicates that longer surveys with an aim to both estimate prevalence and focus more extensively on drug use can utilize such a screener to both determine eligibility for a longer survey and to more accurately estimate use of drugs.
Funding
Research reported in this publication was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Numbers R01 DA044207 (PI: Palamar) and P30 DA011041 (PI: Hagan). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
Disclosure statement
Dr. Palamar has consulted for Alkermes. The authors have no other potential conflicts to declare.
References
- Ahacic K, Kareholt I, Helgason AR, & Allebeck P (2013). Non-response bias and hazardous alcohol use in relation to previous alcohol-related hospitalization: Comparing survey responses with population data. Substance Abuse Treatment, Prevention, and Policy, 8(1), 10. 10.1186/1747-597X-8-10 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Becker DM, Young DR, Yanek L, Voorhees CC, Levine D, & Janey N (1998). Smoking restriction policy attitudes in a diverse African American population. American Journal of Health Behavior, 22(6), 451–459. [Google Scholar]
- Bryant-Stephens T, Kurian C, & Chen Z (2011). Brief report of a low-cost street-corner methodology used to assess inner-city residents’ awareness and knowledge about asthma. Journal of Urban Health: bulletin of the New York Academy of Medicine, 88(Suppl 1), 156–163. 10.1007/s11524-010-9518-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Buschmann A (2019). Conducting a street-intercept survey in an authoritarian regime: The case of Myanmar. Social Science Quarterly, 100(3), 857–868. 10.1111/ssqu.12611 [DOI] [Google Scholar]
- Center for Behavioral Health Statistics and Quality. (2020a). 2019 National Survey on Drug Use and Health Public Use File Codebook. https://www.datafiles.samhsa.gov/study/national-survey-drug-use-and-health-nsduh-2019-nid19014
- Center for Behavioral Health Statistics and Quality. (2020b). Results from the 2019 National Survey on Drug Use and Health: Detailed Tables. https://www.samhsa.gov/data/report/2019-nsduh-detailed-tables
- Christensen AI, Ekholm O, Gray L, Glumer C, & Juel K (2015). What is wrong with non-respondents? Alcohol-, drug- and smoking-related mortality and morbidity in a 12-year follow-up study of respondents and non-respondents in the Danish Health and Morbidity Survey. Addiction (Abingdon, England), 110(9), 1505–1512. 10.1111/add.12939 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cohen J (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. 10.1177/001316446002000104 [DOI] [Google Scholar]
- Davis RE, & Evans DN (2018). Associations between mass incarceration and community health in New York City. Public Health, 161, 43–48. 10.1016/j.puhe.2018.04.020 [DOI] [PubMed] [Google Scholar]
- Doody MM, Sigurdson AS, Kampa D, Chimes K, Alexander BH, Ron E, Tarone RE, & Linet MS (2003). Randomized trial of financial incentives and delivery methods for improving response to a mailed questionnaire. American Journal of Epidemiology, 157(7), 643–651. 10.1093/aje/kwg033 [DOI] [PubMed] [Google Scholar]
- Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, & Pratap S (2009). Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews, (3), MR000008. 10.1002/14651858.MR000008.pub4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Edwards P, Roberts I, Sandercock P, & Frost C (2004). Follow-up by mail in clinical trials: Does questionnaire length matter? Controlled Clinical Trials, 25(1), 31–52. 10.1016/j.cct.2003.08.013 [DOI] [PubMed] [Google Scholar]
- Krosnick JA (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236. 10.1002/acp.2350050305 [DOI] [Google Scholar]
- Maclennan B, Kypri K, Langley J, & Room R (2012). Non-response bias in a community survey of drinking, alcohol-related experiences and public opinion on alcohol policy. Drug and Alcohol Dependence, 126(1–2), 189–194. 10.1016/j.drugalcdep.2012.05.014 [DOI] [PubMed] [Google Scholar]
- McCabe SE, & West BT (2016). Selective nonresponse bias in population-based survey estimates of drug use behaviors in the United States. Social Psychiatry and Psychiatric Epidemiology, 51(1), 141–153. 10.1007/s00127-015-1122-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Meiklejohn J, Connor J, & Kypri K (2012). The effect of low survey response rates on estimates of alcohol consumption in a general population survey. PloS One, 7(4), e35527. 10.1371/journal.pone.0035527 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Miech RA, Johnston LD, O’Malley PM, Bachman JG, Schulenberg JE, & Patrick ME (2020). Monitoring the Future national survey results on drug use, 1975–2019: Volume I, Secondary school students. Ann Arbor: Institute for Social Research, The University of Michigan. http://www.monitoringthefuture.org/pubs/monographs/mtf-vol1_2019.pdf [Google Scholar]
- Miller KW, Wilder LB, Stillman FA, & Becker DM (1997). The feasibility of a street-intercept survey method in an African-American community. American Journal of Public Health, 87(4), 655–658. 10.2105/AJPH.87.4.655 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Norwood MS, Hughes JP, & Amico KR (2016). The validity of self-reported behaviors: Methods for estimating underreporting of risk behaviors. Annals of Epidemiology, 26(9), 612–618. e612. 10.1016/j.annepidem.2016.07.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Palamar JJ (2018). Barriers to accurately assessing prescription opioid misuse on surveys. American Journal of Drug and Alcohol Abuse, 1–7. 10.1080/00952990.2018.1521826 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Palamar JJ, Barratt MJ, Coney L, & Martins SS (2017). Synthetic cannabinoid use among high school seniors. Pediatrics, 140(4), e20171330. 10.1542/peds.2017-1330 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Palamar JJ, Le A, & Cleland CM (2018). Nonmedical opioid use among electronic dance music party attendees in New York City. Drug and Alcohol Dependence, 186, 226–232. 10.1016/j.drugalcdep.2018.03.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Palamar JJ, Le A, Acosta P, & Cleland CM (2019). Consistency of self-reported drug use among electronic dance music party attendees. Drug and Alcohol Review, 38(7), 798–806. 10.1111/dar.12982 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Palamar JJ, Mauro P, Han B, & Martins SS (2017). Shifting characteristics of ecstasy users ages 12–34 in the United States, 2007–2014. Drug and Alcohol Dependence, 181, 20–24. 10.1016/j.drugalcdep.2017.09.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Palamar JJ, Salomone A, & Keyes KM (2021). Underreporting of drug use among electronic dance music party attendees. Clinical Toxicology (Philadelphia, PA), 59(3), 185–192. 10.1080/15563650.2020.1785488 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Percy A, McAlister S, Higgins K, McCrystal P, & Thornton M (2005). Response consistency in young adolescents’ drug use self-reports: A recanting rate analysis. Addiction (Abingdon, England), 100(2), 189–196. 10.1111/j.1360-0443.2004.00943.x [DOI] [PubMed] [Google Scholar]
- Robinson-Cimpian JP (2014). Inaccurate estimation of disparities due to mischievous responders: Several suggestions to assess conclusions. Educational Researcher, 43(4), 171–185. 10.3102/0013189X14534297 [DOI] [Google Scholar]
- Sahlqvist S, Song Y, Bull F, Adams E, Preston J, & Ogilvie D, the iConnect consortium (2011). Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: Randomised controlled trial. BMC Medical Research Methodology, 11(1), 62. 10.1186/1471-2288-11-62 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Teter CJ, DiRaimo CG, West BT, Schepis TS, & McCabe SE (2020). Nonmedical use of prescription stimulants among US high school students to help study: Results from a National Survey. Journal of Pharmacy Practice, 33(1), 38–47. 10.1177/0897190018783887 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tourangeau R, Rips LJ, & Rasinski K (2006). The psychology of survey response (6th ed.). Cambridge University Press. [Google Scholar]
- Wu LT, Schlenger WE, & Galvin DM (2006). Concurrent use of methamphetamine, MDMA, LSD, ketamine, GHB, and flunitrazepam among American youths. Drug and Alcohol Dependence, 84(1), 102–113. 10.1016/j.drugalcdep.2006.01.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
