Author manuscript; available in PMC: 2021 Jan 1.
Published in final edited form as: Subst Abus. 2020;41(3):283–285. doi: 10.1080/08897077.2020.1784362

On the efficacy of online drug surveys during the time of COVID-19

Joseph J Palamar 1, Patricia Acosta 1
PMCID: PMC7424860  NIHMSID: NIHMS1614445  PMID: 32697173

Abstract

Most human subjects research involving contact with participants has been halted in the US due to the COVID-19 crisis. We have been testing an online method to recruit and survey participants as a temporary replacement for our street-intercept survey method. Online surveys already generate less generalizable findings than other surveys, and offering compensation for online survey completion further reduces generalizability because it increases mischievous submissions. In this letter we discuss methods to help detect invalid responses, such as utilizing a screener to test for eligibility and using flags to detect mischievous responses and repeat submissions. We recommend that researchers approach online recruitment and surveying with caution.

Keywords: Survey reliability, online surveys, COVID-19

Introduction

In response to the COVID-19 crisis, most universities in the US have halted “non-essential” human subjects research requiring human contact. This severely limits our ability to conduct research with participants, especially when a study relies on street recruitment and/or intercept surveys. In response to the halting of such recruitment, we are testing an online method to recruit and survey individuals in a specific high-risk population about their drug use. Although online surveys have many limitations, particularly with regard to generalizability, they can have efficacy, particularly when limited to specific populations and when data are not used to estimate prevalence.1–7 However, offering monetary compensation for survey completion, as we do in our study, increases the risk of repeat and mischievous responders who seek to benefit from pitfalls in the study design.7

People who use drugs are already considered a hard-to-reach population, and given COVID-19-related social isolation, this population has suddenly become even harder to reach. Therefore, online recruitment may be the most feasible method to rapidly assess drug use behavior during the crisis. In this letter, we briefly describe our new online recruitment and survey methods to inform other researchers who may also have to rely on such methods.

A brief description of our study

The aim of our parent study is to estimate trends in drug use among adults who attend electronic dance music (EDM) parties in New York City. We are funded to survey individuals entering such parties every week for 24 months; each weekend we survey individuals entering randomly selected EDM parties. To be eligible, individuals must be age ≥18 and about to enter the selected party. The survey focuses on use of about 100 different drugs and typically takes 10–15 min to complete. Those who complete the survey are compensated $10 USD, and the survey response rate has been 65%. Due to the COVID-19 crisis, our work was halted in early March. Thus far, we have surveyed some 1,100 participants, with our overall target being 2,080 participants by early 2021. Further information about study methods can be found elsewhere.8

A few weeks after human subjects research requiring contact with participants was halted, we obtained IRB approval to recruit and survey participants online. This method was proposed to: 1) continue to examine drug use in this population during the crisis, and 2) examine how drug use behavior has changed during COVID-related social distancing.

Online recruitment and inclusion criteria

Since online surveys are less generalizable than most other types of surveys, we made the inclusion criteria more stringent than in the parent study. Specifically, in addition to excluding anyone age <18, to be eligible, individuals must: 1) report having attended an EDM party in the past six months, 2) report living in New York, and 3) report past-3-month use of at least one of seven drugs queried on the screener. These criteria were added to make the study population more specific, and requiring drug use allows us to examine changes in use among people who already use drugs. Thus, we do not use such data to make generalizations to the full EDM party-attending population or to compute prevalence estimates as we do in our parent study.
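For illustration, a minimal sketch of how such eligibility logic might be implemented (the field names and drug list are hypothetical placeholders, not our actual instrument):

```python
# Hypothetical sketch of the screener eligibility criteria described above.
# Field names and the drug list are illustrative placeholders.

QUERIED_DRUGS = {
    "ecstasy", "lsd", "cocaine", "ketamine",
    "methamphetamine", "ghb", "2c_series",
}  # stand-ins for the seven drugs queried on the screener

def is_eligible(response: dict) -> bool:
    """Apply the age requirement plus the three inclusion criteria."""
    if response.get("age", 0) < 18:
        return False
    if not response.get("attended_edm_party_past_6_months"):
        return False
    if not response.get("lives_in_new_york"):
        return False
    # Criterion 3: past-3-month use of at least one queried drug
    drugs_used = set(response.get("drugs_used_past_3_months", []))
    return bool(drugs_used & QUERIED_DRUGS)
```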

Our study flyer was posted on social media sites and directed toward our population of interest (EDM party attendees). The flyer advertises that we are seeking EDM party-attending adults who live in NYC and that we aim to examine changes in drug use due to the COVID-19 crisis. In the flyer we note that those who are eligible for and complete the full survey will be compensated with a $10 online gift certificate (from a vendor such as Amazon). Since we offer compensation for completing the full survey, we expected to receive an abundance of mischievous and repeat responses.

The addition of a screener

Our first step to detect problematic responders was to require interested individuals to complete a two-minute online screener survey, designed to help us detect mischievous and repeat responders. Not only must those who take the screener meet the eligibility criteria, but we also created a system to flag untrustworthy responses. At the beginning of the screener we ask participants how honestly they plan to answer the questions. Many participants admit to providing incorrect or dishonest information on our street surveys, and we have found that reports of not answering all questions honestly are associated with inconsistent information provided on the survey.9,10 In the section about drug use we include a fictitious drug called nadro-pax.11 Reporting use of this drug indicates overreporting, and such a response suggests the participant is not answering honestly or carefully.
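A minimal sketch of how these two screener flags might be coded (the question fields and response option are hypothetical, not our actual wording):

```python
# Hypothetical sketch of the honesty and fictitious-drug flags.
# Field names and response options are illustrative placeholders.

def screener_flags(response: dict) -> list[str]:
    """Return reasons a screener response appears untrustworthy."""
    flags = []
    # Item at the start of the screener asking how honestly the
    # participant plans to answer
    if response.get("planned_honesty") != "completely honestly":
        flags.append("did not plan to answer completely honestly")
    # Endorsing the fictitious drug indicates overreporting
    if "nadro-pax" in response.get("drugs_used_past_3_months", []):
        flags.append("reported use of the fictitious drug nadro-pax")
    return flags
```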

We also detect mischievous reporting by examining extreme responses to non-drug-related questions. For example, participants who report being legally blind or legally deaf, those who report an extreme weight (e.g., <75 lbs. or >400 lbs.), and/or those who report an extreme number of siblings (i.e., >10) are flagged for possible mischievous responding.12 Finally, at the end of the screener we ask participants to enter their email address so we can send them the full survey if they are deemed eligible. This not only increases the likelihood of acquiring more dedicated participants, but it also adds another hurdle that mischievous responders must overcome in order to fool the system.
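These extreme-response checks can be expressed in the same manner; a sketch using the thresholds above (field names are again illustrative):

```python
# Hypothetical sketch of the extreme-response flags; thresholds mirror
# those described in the text, while field names are placeholders.

def extreme_response_flags(response: dict) -> list[str]:
    flags = []
    weight = response.get("weight_lbs")
    if weight is not None and (weight < 75 or weight > 400):
        flags.append("extreme self-reported weight")
    if response.get("num_siblings", 0) > 10:
        flags.append("extreme number of siblings")
    if response.get("legally_blind") or response.get("legally_deaf"):
        flags.append("reported being legally blind or legally deaf")
    return flags
```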

Additional checks to determine whether screener responses are valid

Once we determine who appears to be eligible for the full survey, we conduct additional checks to detect untrustworthy responses. First, we search for repeat submissions by determining whether the same email address was submitted more than once. We also acquired IRB approval to record IP addresses, which allows us to determine whether multiple surveys were submitted from the same location. People may share the same IP address or even the same computer, so IP addresses have limited utility, but they are still useful in detecting repeat responses. We have had occasions in which as many as 15 screeners were submitted from the same IP address within a short period of time. We also enter each IP address into a system such as Scamalytics, which generates a fraud risk score. Perhaps not coincidentally, the IP addresses with multiple submissions also tend to have high fraud risk scores. These systems also allow us to detect the city or country from which the screener was submitted; some IP addresses of people who claim to live in NYC register as coming from foreign countries.
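A minimal sketch of these repeat-submission checks follows; the fraud-score lookup is a placeholder, since services such as Scamalytics have their own query interfaces, and the cutoff score is illustrative:

```python
# Hypothetical sketch of the repeat-submission and fraud-score checks.
from collections import Counter

FRAUD_SCORE_CUTOFF = 75  # illustrative cutoff, not from the study

def lookup_fraud_score(ip_address: str) -> int:
    """Placeholder for an external fraud-scoring service lookup."""
    return 0  # replace with a real service query

def repeat_submission_flags(submissions: list[dict]) -> dict:
    """Map each submission ID to a list of repeat/fraud flags."""
    email_counts = Counter(s["email"].lower() for s in submissions)
    ip_counts = Counter(s["ip_address"] for s in submissions)
    flags = {}
    for s in submissions:
        reasons = []
        if email_counts[s["email"].lower()] > 1:
            reasons.append("email address submitted more than once")
        if ip_counts[s["ip_address"]] > 1:
            reasons.append("multiple submissions from the same IP address")
        if lookup_fraud_score(s["ip_address"]) > FRAUD_SCORE_CUTOFF:
            reasons.append("high IP fraud risk score")
        flags[s["id"]] = reasons
    return flags
```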

We also examine whether there are clusters of submissions within short periods of time, and we determine whether there are any consistent or odd patterns among email addresses submitted within short time frames. As expected, we have been detecting multiple submissions within short periods of time; one night we were bombarded with 198 mischievous responses back to back, many of which included self-reported use of the fictitious drug. Email addresses associated with these clusters of submissions also tend to follow similar patterns. For example, we have received clusters of email addresses that were similarly formatted, such as JohnDoexx12, JaneSmithxx13, and BobSmithxx06, always from the same service provider. Such email addresses tend to be from Yahoo, perhaps because this company allows users to generate up to 500 email addresses at once.
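A sketch of how such bursts and suspicious address patterns might be detected (the window, threshold, and regular expression are illustrative choices, not parameters from the study):

```python
# Hypothetical sketch of the burst-detection and email-pattern checks.
import re
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # illustrative time window
BURST_SIZE = 5                  # illustrative cluster threshold

def find_bursts(timestamps: list[datetime]) -> list[list[datetime]]:
    """Return clusters of >= BURST_SIZE submissions that all fall
    within WINDOW of the first submission in the cluster."""
    bursts, current = [], []
    for t in sorted(timestamps):
        if current and t - current[0] > WINDOW:
            if len(current) >= BURST_SIZE:
                bursts.append(current)
            current = []
        current.append(t)
    if len(current) >= BURST_SIZE:
        bursts.append(current)
    return bursts

# Flag similarly formatted addresses such as JohnDoexx12@... (a name,
# "xx", then digits); the pattern is illustrative.
SUSPICIOUS_EMAIL = re.compile(r"^[a-z]+xx\d+@", re.IGNORECASE)

def is_suspicious_email(address: str) -> bool:
    return bool(SUSPICIOUS_EMAIL.match(address))
```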

Finally, we examine text box responses to an open-ended question. Specifically, we search for the same answer typed in by multiple respondents, particularly within a cluster submitted during a short period of time. In doing so, we sometimes detect the same drug names typed or pasted in, with the same problematic or idiosyncratic spelling, wording, and grammar. This, combined with all the other measures discussed above, allows us to weed out potentially devious responders. Thus far, we have received 786 completed screeners, and given the criteria described above, we have determined that only 171 (22%) were eligible for the full survey.
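A minimal sketch of the duplicate open-text check, which normalizes case and whitespace before comparing answers:

```python
# Hypothetical sketch of the duplicate open-text comparison.
from collections import defaultdict

def duplicate_text_groups(responses: dict) -> list[list[str]]:
    """Group submission IDs whose open-ended answers are identical
    after normalizing case and whitespace."""
    groups = defaultdict(list)
    for submission_id, text in responses.items():
        normalized = " ".join(text.lower().split())
        groups[normalized].append(submission_id)
    return [ids for ids in groups.values() if len(ids) > 1]
```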

The full survey

Only those deemed eligible and trustworthy are invited to take the full survey. As in previous studies,9,13 we require that participants enter their unique study ID (which we assign to them in the email containing the link to the survey). Demographic information provided on the survey must match the information provided on the screener. If this information does not match, participants are not compensated and their data are not considered for analysis; this is clearly noted on the informed consent form. Those who submit conflicting information regarding drug use are still compensated, but these individuals are flagged and may not be considered for analysis. Those who complete the full survey enter their email address again, and we email them their online gift certificate within 24 hours.
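A minimal sketch of the screener/survey consistency check; which demographic fields are compared is an illustrative assumption:

```python
# Hypothetical sketch of the demographic consistency check between the
# screener and the full survey; the compared fields are placeholders.
DEMOGRAPHIC_FIELDS = ("age", "gender", "race_ethnicity", "zip_code")

def demographics_match(screener: dict, survey: dict) -> bool:
    """True if the full-survey demographics match the screener
    answers submitted under the same study ID."""
    return all(screener.get(f) == survey.get(f) for f in DEMOGRAPHIC_FIELDS)
```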

We must be cautious when recruiting and surveying online

Researchers need to remain cognizant of the possible pitfalls of online surveys, but during the time of COVID-19 such methods may be a last resort. Researchers also need to consider the tradeoffs when deciding whether or not to compensate participants who take online surveys. On the one hand, surveys that do not offer compensation appear less likely to receive mischievous responses,7 but it is then more difficult to convince individuals to take a long survey. On the other hand, offering compensation increases interest, but it also attracts people willing to deceive the system in order to acquire monetary compensation.

In this letter we described our current online recruitment and survey methods to inform researchers who may be considering utilizing online methods. Although online recruitment is optimal in some cases, it is typically suboptimal for obtaining highly generalizable data. Thus, if highly generalizable data are needed (e.g., to estimate prevalence), we recommend against using online platforms. However, online surveying may be a researcher’s last resort when trying to obtain timely information. As such, we believe many limitations of online surveys can be mitigated through use of some of the methods we used in this study, including more stringent inclusion criteria, screeners requiring submission of an email address, and items to flag mischievous reporting. Such stringent criteria led us to determine that only 22% of those screened were eligible for the full survey. If caution is taken, such survey results should be able to inform us about associations within a given population in a timely manner. We highly recommend that researchers conduct such online research with an abundance of caution.

Acknowledgments

Funding

This project was funded by the National Institutes of Health [R01 DA044207, PI: Palamar]. The funding organization had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Footnotes

Disclosure statement

The authors have no conflicts of interest to declare.

References

[1] Miller PG, Sonderlund AL. Using the internet to research hidden populations of illicit drug users: a review. Addiction. 2010;105(9):1557–1567.
[2] Rothman KJ, Gallacher JE, Hatch EE. Rebuttal: when it comes to scientific inference, sometimes a cigar is just a cigar. Int J Epidemiol. 2013;42(4):1026–1028.
[3] Richiardi L, Pizzi C, Pearce N. Commentary: representativeness is usually not necessary and often should be avoided. Int J Epidemiol. 2013;42(4):1018–1022.
[4] Nohr EA, Olsen J. Commentary: Epidemiologists have debated representativeness for more than 40 years-has the time come to move on? Int J Epidemiol. 2013;42(4):1016–1017.
[5] Ebrahim S, Davey Smith G. Commentary: should we always deliberately be non-representative? Int J Epidemiol. 2013;42(4):1022–1026.
[6] Keiding N, Louis TA. Perils and potentials of self-selected entry to epidemiological studies and surveys. J R Stat Soc A. 2016;179(2):319–376.
[7] Barratt MJ, Ferris JA, Zahnow R, Palamar JJ, Maier LJ, Winstock AR. Moving on from representativeness: testing the utility of the Global Drug Survey. Subst Abuse. 2017;11:1178221817716391.
[8] Palamar JJ, Le A, Acosta P. Posting, texting, and related social risk behavior while high. Subst Abus. 2019:1–9. [Epub ahead of print]
[9] Palamar JJ, Le A, Acosta P, Cleland CM. Consistency of self-reported drug use among electronic dance music party attendees. Drug Alcohol Rev. 2019;38(7):798–806.
[10] Palamar J, Le A. Self-correction of unreported marijuana use by participants taking a street intercept survey. Am J Drug Alcohol Abuse. 2020. [Epub ahead of print]
[11] Fernandez-Calderon F, Cleland CM, Palamar JJ. Polysubstance use profiles among electronic dance music party attendees in New York City and their relation to use of new psychoactive substances. Addict Behav. 2018;78:85–93.
[12] Furlong MJ, Fullchange A, Dowdy E. Effects of mischievous responding on universal mental health screening: I love rum raisin ice cream, really I do! Sch Psychol Q. 2017;32(3):320–335.
[13] Palamar JJ, Acosta P, Cleland CM. Planned and unplanned drug use during a night out at an electronic dance music party. Subst Use Misuse. 2019;54(6):885–893.
