Exp Clin Psychopharmacol. Author manuscript; available in PMC 2023 Aug 1.
Published in final edited form as: Exp Clin Psychopharmacol. 2022 Aug;30(4):379–380. doi: 10.1037/pha0000582

Crowdsourcing Methods in Addiction Science: Emerging Research and Best Practices

Justin C. Strickland, Michael Amlung, Derek D. Reed
PMCID: PMC10080397  NIHMSID: NIHMS1885057  PMID: 35862134

Abstract

Crowdsourcing platforms such as Amazon Mechanical Turk, Prolific, and Qualtrics Panels have become a dominant form of sampling in recent years. Crowdsourcing enables researchers to sample participants effectively and efficiently, offering greater geographic variability, access to hard-to-reach populations, and reduced costs. These methods have been used increasingly across varied areas of psychological science and were essential for research during the COVID-19 pandemic because they facilitate remote data collection. Recent work documents methods for improving data quality, emerging crowdsourcing platforms, and how crowdsourced data fit within broader research programs. Addiction scientists will benefit from adopting best practice guidelines in crowdsourcing as well as from developing novel approaches, venues, and applications to advance the field.

Keywords: crowdsourcing, methods, mTurk, Prolific, validity


With the changing landscape of work amidst the global COVID-19 pandemic and the increasing costs associated with collecting data from large participant samples, researchers are turning to alternative methods for recruiting participants and collecting data. Accordingly, crowdsourcing platforms such as Amazon Mechanical Turk, Prolific, and Qualtrics Panels have become a dominant form of sampling in recent years (Strickland & Stoops, 2019). These crowdsourcing platforms enable researchers to continue to collect data from large samples of human participants when face-to-face visits are challenging, cost-prohibitive, or infeasible due to other barriers (e.g., research sites at logistically prohibitive distances; social distancing requirements during COVID-19). Alongside optimism about the practical benefits that crowdsourcing may provide are uncertainties about the validity of these approaches and how they can (and cannot) be used. This Special Issue presents a collection of papers on best practices and emerging research using crowdsourcing in addiction science.

Papers included in this Special Issue emphasize that data quality and control methods are critical on crowdsourcing platforms to ensure data are reliable and valid. Jones et al. (2022) underscore the importance of this work in a meta-analysis of careless responding in crowdsourced alcohol use research, finding that approximately 12% of participants are classified as careless responders. They also provide practical recommendations to address these issues, including the use of both overt and covert fidelity measures. Belliveau and Yakovenko (2022a) complement these findings with a practical implementation guide containing step-by-step instructions for screening for speeding, straight-lining (i.e., the tendency to make the same response across a group of questions), inconsistent responding, nonsensical responding, and missing data. Open-source code for conducting these procedures is provided for those looking to adopt the methods in their own work. Conceptually related behavioral economic research shows the practical implications of these quality checks. Craft et al. (2022) find that delay discounting data from participants who failed systematicity checks did not differ from randomly generated data, highlighting the importance of screening out data that fail a priori validity checks. Freitas-Lemos et al. (2022) describe how a novel check based on instructional understanding differentiated participants on the consistency of their cigarette use reporting, their responding on a cigarette demand task, and the relationship between use behavior and demand data.
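To make these screens concrete, the following minimal sketch implements three of them (speeding, straight-lining, and missing data) in Python with pandas. The column names, thresholds, and simulated responses are hypothetical illustrations, not the open-source code accompanying Belliveau and Yakovenko (2022a); in practice, thresholds should be preregistered and tailored to the specific survey.

    import pandas as pd

    def flag_careless(df, item_cols, duration_col="duration_sec",
                      min_duration=180, max_missing=0.10):
        """Flag common markers of careless responding."""
        flags = pd.DataFrame(index=df.index)
        # Speeding: completed faster than a plausible minimum duration.
        flags["speeding"] = df[duration_col] < min_duration
        # Straight-lining: the same response to every item in a block
        # (only one unique value across the item columns).
        flags["straightlining"] = df[item_cols].nunique(axis=1) == 1
        # Missing data: more than max_missing proportion of items skipped.
        flags["missing"] = df[item_cols].isna().mean(axis=1) > max_missing
        # Exclude a participant if any screen is triggered.
        flags["exclude"] = flags.any(axis=1)
        return df.join(flags)

    # Simulated responses from three participants to a 5-item block.
    data = pd.DataFrame({
        "duration_sec": [620, 95, 540],
        "q1": [3, 4, 2], "q2": [2, 4, 2], "q3": [4, 4, 2],
        "q4": [3, 4, 2], "q5": [1, 4, 2],
    })
    items = ["q1", "q2", "q3", "q4", "q5"]
    print(flag_careless(data, items)[["speeding", "straightlining", "exclude"]])

Flagged cases can then be removed or, as several of these papers recommend, compared against retained cases in sensitivity analyses before exclusion.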

As the methods to screen for quality data have improved, so have the venues in which crowdsourcing is applied. Historically, Amazon Mechanical Turk was the primary crowdsourcing outlet for psychological and addiction science research. Belliveau and Yakovenko (2022b) describe the use of the more recently developed Qualtrics Panels resource to study behavioral addictions (video gaming and gaming disorder). They find that Qualtrics Panels offered a participant pool demographically similar to a community-recruited population, supporting the feasibility and potential usefulness of the resource. Stanton et al. (2022) show how another novel platform, Prolific, can facilitate repeated measures data collection in two protocols: a 5-day daily diary protocol and a test-retest protocol. Across these two independent studies, they show that Prolific-recruited participants provided valid data consistent with theoretical expectations and that the platform afforded efficient collection of longitudinal outcomes. Beyond online participant platforms like Amazon Mechanical Turk, Qualtrics Panels, and Prolific, Pennington et al. (2022) describe how big team science may efficiently crowdsource researchers themselves, with the goal of producing reproducible projects conducted across varied institutions. They review existing big team, crowdsourced approaches (e.g., ManyLabs, the Psychological Science Accelerator) and describe a novel approach used by their own study team.

How crowdsourcing fits within a broader research program ultimately varies and may include pilot projects, methods development, intervention deployment, and more. Rzeszutek et al. (2022) provide one example of how crowdsourcing may be used to evaluate novel behavioral outcomes by studying cross-drug withdrawal effects for cigarettes and opioids. They use a behavioral economic demand framework to show that opioid withdrawal may increase cigarette valuation, thereby providing a pathway for future treatment development work to build upon. Borodovsky (2022) integrates the above work to describe differences between generalizability and representativeness, offering a clear and concise review of these concepts along with a discussion of how such differences may inform the boundary conditions under which Internet-based research may (or may not) advance the literature.
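For readers less familiar with the behavioral economic demand framework used in these studies, valuation is typically quantified by fitting reported consumption across escalating prices. One widely used specification in this literature is the exponentiated demand equation (Koffarnus et al., 2015), shown here as general background rather than as the specific model fit by Rzeszutek et al. (2022):

    Q = Q_0 \cdot 10^{k(e^{-\alpha Q_0 C} - 1)}

where Q is consumption at price C, Q_0 is demand intensity (consumption at zero price), k is a constant specifying the range of consumption in log units, and \alpha indexes the rate of decline in consumption as price increases, with smaller \alpha reflecting greater valuation of the commodity.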

Given the rapid emergence and evolution of crowdsourcing research platforms, it is safe to assume these methods are here to stay. This Special Issue highlights best practices for using crowdsourcing in addiction science while also drawing attention to important methodological and conceptual limitations. We hope that addiction scientists continue to adhere to these guidelines while pushing the boundaries of what is possible within crowdsourcing science.

Public Health Significance.

The following set of papers in this Special Issue describes best practice methods and novel applications of crowdsourcing in addiction and psychological science. These papers advance the field and present practical guidelines and open-source resources for researchers using crowdsourcing in future work.

Acknowledgments

The authors have no financial conflicts of interest in regard to this editorial introduction. Dr. Strickland’s contribution was partially supported by NIDA grant R03DA054098. Dr. Amlung’s contribution was partially supported by NIAAA grant R01AA027255 and The Cofrin Logan Center for Addiction Research and Treatment at the University of Kansas.

Contributor Information

Justin C. Strickland, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, 5510 Nathan Shock Drive, Baltimore, MD 21224, USA.

Michael Amlung, Department of Applied Behavioral Science, University of Kansas, and Cofrin Logan Center for Addiction Research and Treatment, Lawrence, KS.

Derek D. Reed, Department of Applied Behavioral Science, University of Kansas, and Cofrin Logan Center for Addiction Research and Treatment, Lawrence, KS.

References

  1. Belliveau, J., & Yakovenko, I. (2022a). Evaluating and improving the quality of survey data from panel and crowd-sourced samples: A practical guide for psychological research. Experimental and Clinical Psychopharmacology.
  2. Belliveau, J., & Yakovenko, I. (2022b). The validity of Qualtrics panel data for research on video gaming and gaming disorder. Experimental and Clinical Psychopharmacology.
  3. Borodovsky, J. T. (2022). Generalizability and representativeness: Considerations for internet-based research on substance use behaviors. Experimental and Clinical Psychopharmacology.
  4. Craft, W. H., Tegge, A. N., Freitas-Lemos, R., Tomlinson, D. C., & Bickel, W. K. (2022). Are poor quality data just random responses? A crowdsourced study of delay discounting in alcohol use disorder. Experimental and Clinical Psychopharmacology.
  5. Freitas-Lemos, R., Tegge, A. N., Craft, W. H., Tomlinson, D. C., Stein, J. S., & Bickel, W. K. (2022). Understanding data quality: Instructional comprehension as a practical metric in crowdsourced investigations of behavioral economic cigarette demand. Experimental and Clinical Psychopharmacology.
  6. Jones, A., Earnest, J., Adam, M., Clarke, R., Yates, J., & Pennington, C. R. (2022). Careless responding in crowdsourced alcohol research: A systematic review and meta-analysis of practices and prevalence. Experimental and Clinical Psychopharmacology.
  7. Pennington, C. R., Jones, A. J., Tzavella, L., Chambers, C. D., & Button, K. S. (2022). Beyond online participant crowdsourcing: The benefits and opportunities of big team addiction science. Experimental and Clinical Psychopharmacology.
  8. Rzeszutek, M. J., Gipson-Reichardt, C. D., Kaplan, B. A., & Koffarnus, M. N. (2022). Using crowdsourcing to study the differential effects of cross-drug withdrawal for cigarettes and opioids in a behavioral economic demand framework. Experimental and Clinical Psychopharmacology.
  9. Stanton, K., Carpenter, R. W., Nance, M., Sturgeon, T., & Villalongo Andino, M. (2022). A multisample demonstration of using the Prolific platform for repeated assessment and psychometric substance use research. Experimental and Clinical Psychopharmacology.
  10. Strickland, J. C., & Stoops, W. W. (2019). The use of crowdsourcing in addiction science research: Amazon Mechanical Turk. Experimental and Clinical Psychopharmacology, 27(1), 1–18.
