PLoS One. 2023 Oct 4;18(10):e0292372. doi: 10.1371/journal.pone.0292372

Effects of pay rate and instructions on attrition in crowdsourcing research

Carolyn M. Ritchey*, Corina Jimenez-Gomez, Christopher A. Podlesnik
Editor: Jacinto Estima
PMCID: PMC10550147  PMID: 37792848

Abstract

Researchers in social sciences increasingly rely on crowdsourcing marketplaces such as Amazon Mechanical Turk (MTurk) and Prolific to facilitate rapid, low-cost data collection from large samples. However, crowdsourcing suffers from high attrition, threatening the validity of crowdsourced studies. Separate studies have demonstrated that (1) higher pay rates and (2) additional instructions–i.e., informing participants about task requirements, asking for personal information, and describing the negative impact of attrition on research quality–can reduce attrition rates with MTurk participants. The present study extended research on these possible remedies for attrition to Prolific, another crowdsourcing marketplace with strict requirements for participant pay. We randomly assigned 225 participants to one of four groups. Across groups, we evaluated effects of pay rates commensurate with or double the US minimum wage, expanding the upper range of this independent variable; two groups also received additional instructions. Higher pay reduced attrition and correlated with more accurate performance on experimental tasks but we observed no effect of additional instructions. Overall, our findings suggest that effects of increased pay on attrition generalize to higher minimum pay rates and across crowdsourcing platforms. In contrast, effects of additional instructions might not generalize across task durations, task types, or crowdsourcing platforms.

Introduction

Over the last decade, crowdsourcing marketplaces such as Amazon Mechanical Turk (MTurk) and Prolific have provided participants for up to half of published studies in psychology and other social sciences [1]. MTurk is by far the most popular recruitment platform, offering rapid data collection from large samples at relatively low cost and greater sample diversity than the typical college undergraduate population. Prolific offers many of the same benefits as MTurk but is geared toward academic researchers. Of course, these and other crowdsourcing platforms are not entirely distinct, given that there is likely some overlap in participant populations. Moreover, the characteristics of each of these platforms are likely to vary significantly over time. For example, Difallah and colleagues reported that the MTurk participant pool is largely composed of new individuals every few years, with the half-life of participants ranging between 12 and 18 months [2]. Nevertheless, some crowdsourcing platforms have distinct features that might make recruitment more convenient for psychological research. For example, Prolific uniquely offers participant prescreening based on multiple factors, including self-reported clinical diagnoses (e.g., autism, depression).

Despite the many benefits of using crowdsourcing in psychological research, one general limitation of such online research is attrition. Attrition can occur for many reasons–participants might drop out of a study due to an experimental condition (e.g., punishment versus no punishment [3]), exogenous factors such as interruptions (e.g., attending to a child), or technical difficulties. Anonymity in online research also minimizes the social cost of dropping out of a study. Attrition in online experiments is significant not only because of its potential to increase the time and cost of conducting online research, but also because it can jeopardize internal validity when it occurs disproportionally across experimental conditions.

Research suggests that attrition rates are similar across studies conducted using both Prolific and MTurk. For example, Palan and Schitter reported ~25% attrition after two weeks for inexperienced Prolific participants–i.e., participants who had completed only one study on that platform [4]. Kothe & Ling also reported 25% attrition in a Prolific study after one year and ~40% attrition after 19 months [5]. Attrition rates are similarly high in MTurk studies spanning several weeks to months (30%) or up to one year (55%) [6]. However, attrition often is not reported in shorter-term studies with MTurk [7, 8] or Prolific.

A few studies, however, have examined possible remedies for attrition in online research. For example, Zhou and Fishbach examined the effects of additional instructions with MTurk participants, defined as (1) a prewarning–i.e., informing participants about the task, (2) personalization–i.e., asking for personal information (MTurk ID) on the consent form, and (3) appealing to conscience–i.e., explaining the negative effects of attrition on research quality [8]. The combination of these instructions reduced attrition by more than half in a 5-min survey task compared to no additional instructions (from ~50% to ~20%) [9–11].

Increasing pay is a second empirically validated method for reducing attrition in MTurk studies [7, 12]. For example, Auer et al. evaluated attrition in a two-part survey study, with each part lasting approximately 30 min [12]. Across groups, Auer et al. manipulated both the initial pay for completing part 1 (ranging from $0.50 to $2.00) and the pay increase from part 1 to part 2 (ranging from 0% to 400%). Both initial pay and pay increase significantly predicted attrition, with attrition ranging from 30% to 82% across the highest- and lowest-paid groups, respectively. In line with these findings, Crump et al. reported that, in a concept-learning task, attrition was higher in a low-incentive condition (total pay of $0.75; 25% attrition) than in a high-incentive condition (total pay of up to $4.50; 11% attrition) [7]. This limited research suggests that advertised pay is likely an important factor in minimizing attrition among MTurk participants.

The purpose of the present study was to extend findings on effects of (1) higher pay and (2) additional instructions on attrition to Prolific participants. Given the benefits of some of this platform’s features for psychological researchers, we also sought to test these manipulations in novel ways within the Prolific platform. For example, we evaluated effects of pay and instruction manipulations alone and in combination, a novel contribution to this research area. We hypothesized that both higher pay and additional instructions would reduce attrition, with greater reductions in attrition when combining these manipulations.

Methods

Participants

The study protocol was approved by the Auburn University Institutional Review Board and all participants provided written consent for participation. A power analysis based on previous research [8, 12] indicated that a sample size of at least 37 participants per group would ensure adequate power (>.80) to detect an effect of pay with an effect size (f) of 0.23 and instructions with an effect size (w) of 0.57. We recruited a total of 225 participants from www.Prolific.co. Forty participants per group completed operant key-pressing tasks, and we collected demographic information from 152 participants (67.6%) completing the post-experiment survey. Those participants ranged in age from 18 to 63 (M = 31.5, SD = 10.8), and just over half identified as male (54.6%). The majority of those 152 participants (99.3%) resided in the United States, and one did not report this information. Detailed demographic information is included with supplemental materials. Although it is possible to prescreen participants on Prolific–for example, filtering out participants with low approval rates for previous work–we did not apply any prescreening filters. Research suggests that, unlike MTurk, Prolific generally provides high-quality data without prescreening [13].
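The exact settings of this power analysis (alpha, degrees of freedom, how the four groups were collapsed) are not reported; one hedged reading that roughly reproduces 37 participants per group uses the pwr package and treats the pay effect as a two-level comparison and the instruction effect as a 1-df chi-square test (both of these settings are assumptions, not statements of the authors' method):

```r
# Illustrative power analysis with the pwr package; the settings below
# (alpha = .05, df, two-level pay comparison) are assumptions, not reported values.
library(pwr)

# Pay effect (Cohen's f = 0.23), treated here as a comparison of two pay levels
pwr.anova.test(k = 2, f = 0.23, sig.level = 0.05, power = 0.80)
# n ~ 75 per pay level, i.e., roughly 37-38 per experimental group if each
# pay level spans two of the four groups

# Instruction effect (Cohen's w = 0.57), treated here as a chi-square test with df = 1
pwr.chisq.test(w = 0.57, df = 1, sig.level = 0.05, power = 0.80)
# N ~ 25 in total, so the pay comparison is the binding sample-size requirement
```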

Apparatus and procedure

We programmed the experimental tasks using Inquisit [14]. The experiment included two brief (~10 min) probabilistic reversal learning tasks commonly used to examine inhibitory control and behavioral flexibility [15, 16]: a two-choice (simultaneous) task and a successive (go/no-go) task, which together comprised Part 1 of the experiment–see S1 Appendix for a detailed description. Participants were directed to the tasks (hosted on www.millisecond.com) via the Prolific website, with task order counterbalanced across participants. Data from these tasks have not been published elsewhere, and the tasks were designed to test the hypotheses presented in the current manuscript.

A 16-question post-experiment survey created using Qualtrics (https://www.qualtrics.com) included questions about demographics and how participants made decisions during the experimental tasks. Participants also completed the Autism-Spectrum Quotient (AQ), a 50-question instrument used to evaluate traits associated with the autism spectrum in adults with normal intelligence [17]. We included the AQ based on research demonstrating differences in performance on both go/no-go [18] and reversal learning tasks [19] among individuals with and without autism. The resulting 66-question survey comprised Part 2 of the experiment. Survey questions and results are included in the supplemental materials.

After completing both tasks through the Inquisit app, participants received a unique completion code to submit via the Prolific website. Within 24 hours of completing Part 1, we provided a link to Part 2. For up to three weeks, participants received reminders every 36 hours to complete Part 2 via the Prolific website.

We monitored attrition by counting the number of dropouts, defined as participants who either (1) had not completed Part 2 after three weeks or (2) returned their submission, meaning that they decided not to complete the study or withdrew their submission after completing it (see https://researcher-help.prolific.co/hc/en-gb/articles/360009259434-Returned-submission-status).

Group assignment

Participants were randomly assigned to one of four groups. There were 40 individual-participant ‘slots’ per group. However, when a participant returned a submission, Prolific automatically made the vacant slot available to a new participant. Thus, group sizes were somewhat unequal. All groups received the same set of instructions on their Prolific page–see S1 Appendix for details. Upon beginning Part 1, two groups (Groups Low Pay + Minimal Instructions and High Pay + Minimal Instructions) received the following instructions:

This is a two-part study. BE SURE TO COMPLETE BOTH PARTS. ***YOU WILL NOT RECEIVE PAYMENT UNTIL YOU HAVE COMPLETED PART 2.***

Note that this approach differs from Auer et al., who paid participants upon completion of each part of the study [12]. The two remaining groups (Groups Low Pay + Added Instructions and High Pay + Added Instructions) received the following additional instructions:

This is a two-part study consisting of (1) two short games and (2) a 66-question survey. BE SURE TO COMPLETE BOTH PARTS. ***YOU WILL NOT RECEIVE PAYMENT UNTIL YOU HAVE COMPLETED PART 2.*** Participants typically complete both parts in about 30 min. Some research participants find that the game is boring or repetitive and quit before it is finished. Others quit before completing the survey. If a sizable number of people quit before completing both tasks, the data quality will be compromised. However, our research depends on high-quality data. Thus, please make sure you do not mind completing the ENTIRE TASK before getting started.

Participants receiving these additional instructions could not proceed until they (1) typed in the statement “I will complete the entire task” and (2) provided their Prolific ID to consent to participate. Typing in the statement “I will complete the entire task” could be analogous to an honesty pledge in research on cheating [20]. Pledges might have different effects on behavior compared with other instructions described herein (prewarning, appeal to conscience). Nevertheless, we included the pledge to replicate methods used by Zhou and Fishbach [8]. Overall, these instructions included a prewarning (description of task), personalization (requirement to enter Prolific ID), and an appeal to conscience (instruction that dropping out could compromise data quality).

Low-pay groups (Low Pay + Minimal Instructions, n = 64; Low Pay + Added Instructions, n = 58) received pay advertised as $7.25/hr. High-pay groups (High Pay + Minimal Instructions, n = 51; High Pay + Added Instructions, n = 52) received pay advertised as $14.50/hr. Actual pay rates differed from the advertised pay rate due to variation among participants in the time required to complete the tasks. The advertised pay rate was based on an estimated task duration of 20 min (Part 1) and 10 min (Part 2). Thus, participants in the high-pay groups earned $4.84 (Part 1) and $2.42 (Part 2) in base pay, and participants in the low-pay groups earned $2.42 (Part 1) and $1.21 (Part 2) in base pay. All participants were informed that points earned in Part 1 could result in additional payment (a bonus payment). Participants were not informed of the bonus contingency, but bonuses were paid at $0.001 per point (maximum of ~$0.32). Participants received base pay (Parts 1 and 2) and bonus payments within 36 hours of completing Part 2.
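These base pay amounts follow from the advertised hourly rates and the estimated task durations, and the maximum bonus follows from the $0.001-per-point exchange rate and the ~324 points available (see the note to Fig 1); a minimal arithmetic sketch in R, for checking only:

```r
# Arithmetic check on the reported base pay and maximum bonus (illustrative only)
rate_low  <- 7.25    # advertised $/hr for low-pay groups
rate_high <- 14.50   # advertised $/hr for high-pay groups
dur_part1 <- 20 / 60 # estimated duration of Part 1, in hours
dur_part2 <- 10 / 60 # estimated duration of Part 2, in hours

round(rate_high * c(dur_part1, dur_part2), 2)  # 4.83 2.42 (reported as $4.84 and $2.42)
round(rate_low  * c(dur_part1, dur_part2), 2)  # 2.42 1.21
324 * 0.001                                    # ~0.32, maximum bonus at $0.001 per point
```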

Data analysis

Given that the advertised pay rate differed from the actual pay rate among participants, we first evaluated the relation between actual pay rate for Part 1 and total points earned during the two tasks. These data were not normally distributed; thus, we performed a Spearman’s rank-order correlation. We conducted this analysis across all groups and within individual groups.
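A minimal sketch of this analysis in R, assuming a data frame with one row per completer and illustrative column names (pay_rate_part1, total_points, group) that are not taken from the shared data set:

```r
# Spearman rank-order correlation between actual Part 1 pay rate and total points earned;
# `dat` and its column names are illustrative.
cor.test(dat$pay_rate_part1, dat$total_points, method = "spearman")

# The same test within each of the four groups
lapply(split(dat, dat$group), function(g) {
  cor.test(g$pay_rate_part1, g$total_points, method = "spearman")
})
```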

Next, we used a logistic regression analysis to ascertain the effects of pay (low pay coded 0; high pay coded 1) and the presence (coded 1) or absence (coded 0) of additional instructions on task completion (coded 1) versus attrition (coded 0). More specifically, the goal of this analysis was to evaluate how pay, additional instructions, and the interaction between these variables influenced the odds of task completion. We performed this analysis in R [21] with the glm function in the stats package.
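A minimal sketch of this model with stats::glm, using the 0/1 codings described above (the data frame and column names are assumptions for illustration):

```r
# Logistic regression of completion (1) vs. attrition (0) on pay (0 = low, 1 = high),
# added instructions (0 = absent, 1 = present), and their interaction.
# `dat` and its column names are illustrative.
fit <- glm(completed ~ high_pay * added_instructions,
           family = binomial(link = "logit"),
           data = dat)
summary(fit)

# Odds ratios with 95% Wald confidence intervals
exp(cbind(OR = coef(fit), confint.default(fit)))
```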

Results

Table 1 provides a summary of attrition rates. The Low Pay + Minimal Instructions group demonstrated the highest attrition rate (43.8%). Attrition was slightly lower when combining low pay with additional instructions (36.2%). Finally, attrition was lowest in the high-pay groups with or without additional instructions (~23%).

Table 1. Attrition in each group.

Group                             n (incl. dropouts)   Dropouts, Part 1   Dropouts, Part 2   Attrition rate (%)
Low Pay + Minimal Instructions    64                   24                 4                  43.75
Low Pay + Added Instructions      58                   18                 3                  36.21
High Pay + Minimal Instructions   51                   11                 1                  23.53
High Pay + Added Instructions     52                   12                 0                  23.08
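The attrition rates in Table 1 follow directly from the reported counts; a quick check in R:

```r
# Reproduce the attrition rates in Table 1 from the reported group sizes and dropout counts
n        <- c(64, 58, 51, 52)                  # participants per group, incl. dropouts
dropouts <- c(24 + 4, 18 + 3, 11 + 1, 12 + 0)  # Part 1 + Part 2 dropouts per group
round(100 * dropouts / n, 2)                   # 43.75 36.21 23.53 23.08
```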

Actual completion times for both parts of the study were relatively consistent among groups, ranging from 27 to 30 min (Group High Pay + Added Instructions: M = 29.2 min, SD = 9.2; Group High Pay + Minimal Instructions: M = 26.8 min, SD = 5.2; Group Low Pay + Added Instructions: M = 30.2 min, SD = 9.2; Group Low Pay + Minimal Instructions: M = 28.8 min, SD = 6.1). Actual pay rates in all groups were higher than advertised pay rates but similar among High Pay groups (Group High Pay + Added Instructions: M = $15.72, SD = 2.8; Group High Pay + Minimal Instructions: M = $16.74, SD = 2.6) and Low Pay groups (Group Low Pay + Added Instructions: M = $7.52, SD = 1.7; Group Low Pay + Minimal Instructions: M = $7.88, SD = 1.5). Results of Spearman’s rank-order correlation analysis indicated a weak (albeit statistically significant) positive correlation between actual pay rate for Part 1 and performance accuracy, defined as total points earned during the two tasks, ρ(158) = .16, p = .046; see Fig 1. This finding suggests that performance accuracy increases in simple reversal learning tasks with increases in pay rate. There were no significant correlations between these variables within individual groups (data not shown).

Fig 1. Correlation between pay rate and total points.

Note. Perfectly accurate performance would result in ~324 points.

Finally, Table 2 shows the results of the logistic regression analysis. Only one factor (pay) significantly affected the odds of task completion. More specifically, an advertised base pay of $14.50/hr increased the odds of task completion by a factor of 2.53 (95% CI [1.11, 5.77]) compared to an advertised base pay of $7.25/hr.

Table 2. Results of logistic regression.

Factor                          β (SE)          Z        p
Intercept                        0.25 (0.25)     1.00    .319
High Pay                         0.93 (0.42)     2.23*   .026
Added Instructions               0.32 (0.37)     0.85    .397
High Pay × Added Instructions   -0.29 (0.60)    -0.49    .627
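The odds ratio of 2.53 reported above is the exponentiated High Pay coefficient from Table 2, and the 95% CI follows from its standard error; a quick check:

```r
# Odds ratio and 95% Wald CI implied by the High Pay coefficient in Table 2
beta <- 0.93
se   <- 0.42
round(exp(beta), 2)                         # 2.53
round(exp(beta + c(-1, 1) * 1.96 * se), 2)  # 1.11 5.77
```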

Discussion

We examined approaches to address both attrition and data quality when recruiting participants via crowdsourcing. Although there are many possible measures of data quality (e.g., the number of characters in survey responses), performance accuracy on a simple operant task served as the measure of data quality in the present study. Instead of using MTurk, we recruited participants using Prolific, a crowdsourcing platform specifically catering to researchers. Our primary findings were that higher pay corresponded with reduced attrition and greater performance accuracy. Additional instructions did not affect these outcomes.

Additional instructions–that is, informing participants about the task, asking for personal information on the consent form, and describing the negative impact of attrition on research quality–did not impact attrition relative to minimal instructions (cf. [8]). There are several possible reasons for this null effect. Whereas prior research evaluated whether additional instructions could mitigate attrition during a 5-min survey at a pay rate of $6/hr [8], we arranged an operant behavioral task and survey that took approximately 30 min to complete, which is likely more representative of the task types and durations typically arranged in experimental psychological research. We also paid participants a minimum of $7.25/hr and up to double this rate. As a result, it is possible that additional instructions effectively reduce attrition only (1) when tasks are brief and/or (2) within a lower range of pay rates. For example, we saw a slight (but not statistically significant) reduction in attrition across low-pay groups earning $7.25/hr with versus without additional instructions (from 44% to 36%). However, attrition was similar in high-pay groups earning $14.50/hr regardless of instructions (~23%). Future research could examine effects of additional instructions across a range of task types, durations, and payment amounts.

Consistent with findings with participants recruited from MTurk [7, 12], results of the present study also suggested that increasing advertised pay rates from $7.25 to $14.50/hr could reduce attrition with Prolific participants. We extended prior research by using what might be considered a relatively high minimum-pay rate consistent with the US minimum wage (cf. [12]). In contrast, Auer et al. evaluated effects of hourly pay ranging between $1.11 and $10.27 on attrition, with most participants (89%) earning $5.75/hr or less [12]. These low pay rates reflect an important difference between MTurk and Prolific; that is, MTurk does not regulate participant payment. When recruiting participants via Prolific, it was not possible to examine effects of the lower pay rates used by Auer et al.; researchers were required to pay participants no less than $6.50/hr at the time that we collected our data in 2021. This has since been increased to $8/hr, as of June 2022. Nevertheless, our findings suggest that the mitigating effects of increased pay on attrition generalize to higher overall pay rates.

A related finding was that higher actual pay rates were associated with more accurate performance on common tasks used to assess inhibitory control and behavioral flexibility [22; cf. 23]. As with studies examining effects of pay on attrition, MTurk studies examining effects of pay on data quality have paid participants at low rates–i.e., below the US minimum wage of $7.25. For example, Buhrmester et al. examined effects of hourly pay ranging between $0.04 and $6 on alpha reliabilities in personality questionnaires as a measure of data quality, with nearly three-quarters of participants earning $1.20/hr or less [23]. While Buhrmester et al. demonstrated no effect of pay on data quality among US and non-US participants, our findings could suggest that this null effect was due to (1) the lower overall range of pay rates examined (but see [24]) or (2) differences among US and non-US participants [22, 25]. Related to the second point, previous research has shown that increasing hourly pay to $10 from a minimum rate of $4 [22] or $0.20 [25] results in higher-quality data among participants based in India but not in the US. The present study is the first to demonstrate that pay could also impact data quality among US-based participants when increasing hourly pay from $7.25 to double that rate.

One limitation of the present study is that demographic variables (e.g., age, sex, income) were not controlled for or assessed when examining performance accuracy. There is some evidence for age- and sex-related differences in performance on tasks assessing inhibitory control. For example, studies have shown that, among healthy participants completing a go/no-go task, older adults make fewer no-go errors than younger adults [26] and that women demonstrate greater inhibition on no-go trials than men [27]. In the present study, the demographic survey comprised the second part of the study, and approximately one-third of participants dropped out before providing demographic information. We presented the demographic survey after the experimental tasks to mirror the presentation of tasks in previous crowdsourcing research [28, 29]. Nevertheless, it is possible that demographic variables and performance accuracy interact in ways that are not accounted for in the present study. Future researchers should evaluate how demographic variables influence the relation between pay rate and performance accuracy.

A second limitation of the present study is that we used a broad definition of attrition. That is, attrition included both failure to (1) complete an ongoing experimental task and (2) return to the Prolific website to complete a second task (the survey). Some researchers might be interested in addressing within-task attrition specifically, given that it is rarely tracked or reported [7] and has raised significant validity concerns [8]. Thus, future researchers should consider parsing out the effects of independent variables such as pay rate on each form of attrition. Distinguishing between these forms of attrition would further improve our understanding of how manipulations such as increased pay could improve the quality of crowdsourcing research.

A third limitation is that participation in the present study required installing the Inquisit Player (https://www.millisecond.com/products/inquisitplayer). This software likely does not present issues with compatibility: it can be quickly and easily installed on Windows and Mac devices, as well as Android devices and Chromebooks. However, it is still possible that some participants did not complete the study because either they could not or did not want to install the software. Therefore, future studies might consider experimental tasks that are hosted on the web and do not require software installation [28].

Altogether, our findings demonstrated that paying participants at a rate well beyond the US minimum wage, and closer to a living wage (see livingwage.mit.edu/), not only reduces attrition but also correlates with higher-quality performance. Low pay rates are commonplace on MTurk, with US-based MTurk workers earning a median wage of $3.01/hr, significantly below the minimum wage [30]. This is despite research showing that monetary compensation is the highest-rated motivation for study completion among US-based MTurk participants [25]. Findings from the present study build on previous research and suggest that fair compensation mutually benefits participants and researchers. For example, other research has shown that fair compensation could facilitate recruitment and reduce attrition in groups traditionally underrepresented in research, such as individuals of lower socio-economic status, who cannot afford to dedicate time to research in the absence of adequate payment [31]. The present findings further suggest that paying participants closer to a living wage not only mitigates threats to internal validity via reduced attrition but also could facilitate collection of higher-quality data. Beyond these benefits, researchers should strongly consider their ethical obligation to compensate crowdsourcing participants fairly, even when minimum payment requirements are not imposed (e.g., on MTurk).

Supporting information

S1 Appendix. Additional task and participant information.

(DOCX)

Data Availability

All data are available at https://github.com/cmr0112/attrition.

Funding Statement

The author(s) received no specific funding for this work.

References

1. Stewart N., Chandler J., & Paolacci G. (2017). Crowdsourcing samples in cognitive science. Trends in Cognitive Sciences, 21, 736–748. doi: 10.1016/j.tics.2017.06.007
2. Difallah D., Filatova E., & Ipeirotis P. (2018, February 5–9). Demographics and dynamics of Mechanical Turk workers. In Proceedings of WSDM 2018: The Eleventh ACM International Conference on Web Search and Data Mining, Marina Del Rey, CA, USA. doi: 10.1145/3159652.3159661
3. Arechar A. A., Gächter S., & Molleman L. (2018). Conducting interactive experiments online. Experimental Economics, 21, 99–131. doi: 10.1007/s10683-017-9527-2
4. Palan S., & Schitter C. (2017). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27. doi: 10.1016/j.jbef.2017.12.004
5. Kothe E. J., & Ling M. (2019). Retention of participants recruited to a multi-year longitudinal study via Prolific. PsyArXiv, 6 Sept. 2019. doi: 10.31234/osf.io/5yv2u
6. Chandler J., & Shapiro D. (2016). Conducting clinical research using crowdsourced convenience samples. Annual Review of Clinical Psychology, 12, 53–81. doi: 10.1146/annurev-clinpsy-021815-093623
7. Crump M. J. C., McDonnell J. V., & Gureckis T. M. (2013). Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8, e57410. doi: 10.1371/journal.pone.0057410
8. Zhou H., & Fishbach A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111, 493–504. doi: 10.1037/pspa0000056
9. Musch J., & Klauer K. C. (2002). Psychological experimenting on the World Wide Web: Investigating content effects in syllogistic reasoning. In Batinic B., Reips U.-D., & Bosnjak M. (Eds.), Online social sciences (pp. 181–212). Hogrefe & Huber Publishers.
10. Reips U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49, 243–256. doi: 10.1026//1618-3169.49.4.243
11. Göritz A. S., & Stieger S. (2008). The high-hurdle technique put to the test: Failure to find evidence that increasing loading times enhances data quality in Web-based studies. Behavior Research Methods, 40, 322–327. doi: 10.3758/brm.40.1.322
12. Auer E. M., Behrend T. S., Collmus A. B., Landers R. N., & Miles A. F. (2021). Pay for performance, satisfaction and retention in longitudinal crowdsourced research. PLoS ONE, 16, e0245460. doi: 10.1371/journal.pone.0245460
13. Peer E., Rothschild D., Gordon A., Evernden Z., & Damer E. (2022). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 54, 1643–1662. doi: 10.3758/s13428-021-01694-3
14. Inquisit 6 [Computer software]. (2021). Retrieved from https://www.millisecond.com.
15. Aron A. R., & Poldrack R. A. (2005). The cognitive neuroscience of response inhibition: relevance for genetic research in attention-deficit/hyperactivity disorder. Biological Psychiatry, 57, 1285–1292. doi: 10.1016/j.biopsych.2004.10.026
16. Izquierdo A., & Jentsch J. D. (2012). Reversal learning as a measure of impulsive and compulsive behavior in addictions. Psychopharmacology, 219, 607–620. doi: 10.1007/s00213-011-2579-7
17. Baron-Cohen S., Wheelwright S., Skinner R., Martin J., & Clubley E. (2001). The autism-spectrum quotient (AQ): evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders, 31, 5–17. doi: 10.1023/a:1005653411471
18. Uzefovsky F., Allison C., Smith P., & Baron-Cohen S. (2016). Brief report: the Go/No-Go task online: inhibitory control deficits in autism in a large sample. Journal of Autism and Developmental Disorders, 46, 2774–2779. doi: 10.1007/s10803-016-2788-3
19. D’Cruz A. M., Ragozzino M. E., Mosconi M. W., Shrestha S., Cook E. H., & Sweeney J. A. (2013). Reduced behavioral flexibility in autism spectrum disorders. Neuropsychology, 27, 152–160. doi: 10.1037/a0031721
20. Peer E., & Feldman Y. (2021). Honesty pledges for the behaviorally-based regulation of dishonesty. Journal of European Public Policy, 28, 761–781. doi: 10.1080/13501763.2021.1912149
21. R Core Team. (2021). R: A language and environment for statistical computing [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
22. Aker A., El-Haj M., Albakour M.-D., & Kruschwitz U. (2012). Assessing crowdsourcing quality through objective tasks. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey, pp. 1456–1461. European Language Resources Association (ELRA).
23. Buhrmester M., Kwang T., & Gosling S. D. (2011). Amazon’s Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5. doi: 10.1177/1745691610393980
24. Andersen D., & Lau R. (2018). Pay rates and subject performance in social science experiments using crowdsourced online samples. Journal of Experimental and Political Science, 5, 217–229. doi: 10.1017/XPS.2018.7
25. Litman L., Robinson J., & Rosenzweig C. (2015). The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk. Behavior Research Methods, 47, 519–528. doi: 10.3758/s13428-014-0483-x
26. Maillet D., Yu L., Hasher L., & Grady C. L. (2020). Age-related differences in the impact of mind-wandering and visual distraction on performance in a go/no-go task. Psychology and Aging, 35, 627–638. doi: 10.1037/pag0000409
27. Sjoberg E. A., & Cole G. G. (2018). Sex differences on the go/no-go test of inhibition. Archives of Sexual Behavior, 47, 537–542. doi: 10.1007/s10508-017-1010-9
28. Podlesnik C. A., Ritchey C. M., Kuroda T., & Cowie S. (2022). A quantitative analysis of the effects of alternative reinforcement rate and magnitude on resurgence. Behavioural Processes, 198, 104641. doi: 10.1016/j.beproc.2022.104641
29. Ritchey C. M., Gilroy S. P., Kuroda T., & Podlesnik C. A. (2022). Assessing human performance during contingency changes and extinction tests in reversal-learning tasks. Learning & Behavior, 50, 494–508. doi: 10.3758/s13420-022-00513-9
30. Hara K., Adams A., Milland K., Savage S., Hanrahan B. V., Bigham J. P., et al. (2019). Worker demographics and earnings on Amazon Mechanical Turk: an exploratory analysis. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 1–6. ACM. doi: 10.1145/3290607.3312970
31. Bierer B. E., White S. A., Gelinas L., & Strauss D. H. (2021). Fair payment and just benefits to enhance diversity in clinical research. Journal of Clinical and Translational Science, 5, e159. doi: 10.1017/cts.2021.816

Decision Letter 0

Jacinto Estima

22 Jun 2023

PONE-D-23-00174: Effects of Pay Rate and Instructions on Attrition in Crowdsourcing Research

PLOS ONE

Dear Dr. Ritchey,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

In the revised version, please address all the comments provided by the reviewers. Pay particular attention to reviewer 2 and provide any information regarding the qualifications of the Prolific participants, as well as some form of systematic screening of the qualitative responses to rule out the potential for any inattentive data. These are detailed in the reviewer comments.

Please submit your revised manuscript by Aug 06 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jacinto Estima

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Given the increased reliance on the convenience and cost-effectiveness of rapidly recruiting large sample sizes using commercial market crowdsourcing agents, the manuscript addresses an important and highly relevant issue in psychology. The primary aim is to evaluate the impact of differing levels of payment and instructions on attrition in one market research sourced (Prolific) sample. The findings are important in offering strategies to reduce attrition rates using appropriate pay scales, and in showing the minimal effects of additional information appealing to the research need for participants to complete the surveys/tasks.

The title is clear and concise in accurately setting out the study’s topic. The Abstract is clear but could benefit from the inclusion of the actual sample size employed in the study.

Overall, the manuscript is well written, clear in its objectives regarding factors affecting attrition, and the measures and statistics are appropriate. The limitations are noted.

There are several points offered for consideration by the authors. These are designed to enhance the overall quality and clarity of some aspects of its content.

The first sentence of the manuscript is unclear as to whether the authors are asserting that half of all participants for research in psychology and social sciences are recruited through crowdsourcing, or if half of all research studies in psychology and social sciences use crowdsourcing to recruit participants.

Related to this, is there any time frame as to the assertion underlying this claim given that crowdsourcing is relatively recent compared to studies undertaken in psychology and social sciences since the turn of the last century? Clearly there is a trend to use crowdsourcing but when did this generally commence?

On page 9 and elsewhere, it is claimed that the findings suggest that performance accuracy increases in simple reversal learning tasks with increased pay rates, and that data quality was assessed. There is a need to include more information regarding the definition and criteria used by the authors to assess both accuracy and data quality. Was it simply based on obtained scores on these tasks, and could other factors be excluded from influencing such performance (e.g., subsample characteristics given we do not have data on age and sex distribution)?

On page 11, “…hourly pay with ranging between…” should read “…hourly pay ranging between…”.

Regarding the conclusion section, it would be interesting (optional) for the authors to comment on the implications of their findings linking higher pay rates to lower attrition for research costs. In their introduction, the authors rightly note that one benefit to researchers for using crowdsourced samples is the relatively low cost. This benefit is potentially compromised given the study’s findings. For example, using pay in the vicinity of under $1.00 to around $7 per participant is cost-effective but once the rates increase to around $15-17, the cost to researchers becomes quite considerable once samples reach n=1,000+. Do researchers need to sacrifice attrition in preference for low costs in participant recruitment and is this a scientifically justified course of action? The question of ethics of paying miniscule pay rates to participants for their time is another issue for consideration.

Reviewer #2: I appreciated the opportunity to review Ritchey and colleagues’ manuscript, “Effects of Pay Rate and Instructions on Attrition in Crowdsourcing Research,” submitted for consideration as a Research Article in PLOS ONE. As a crowdsourcing researcher myself, I found the topic to be interesting and important. Indeed, much of the work on the structure and logistics of crowdsourcing research was collected in the early days of Amazon Mechanical Turk. Much has changed in the crowdsourcing research landscape, particularly with the emergence of new competitor platforms such as Prolific (used in the present study).

Despite my enthusiasm for this topic, there are some procedural issues that limit the contributions of this work—however, these *might* be resolvable in revision. I thereby recommend: Major Revision.

What follows are the major concerns guiding my recommendation, along with recommendations where relevant.

1. The authors do not provide any information regarding the qualifications of the Prolific participants. When constructing a Prolific study, researchers are permitted the chance to “pre-screen” potential participants. A common approach is to limit recruitment to participants from certain countries and who have completed a minimum number of surveys already (e.g., 100) with a minimum approval rating. Perhaps the researchers used such screenings but failed to report them? If no prescreening was employed, a strong argument should be made as to why they felt these were unnecessary. From personal experience, data are substantially improved (along with attrition) when recruiting participants with established experience in the platform.

2. In addition to minimum qualifications, it is common practice to use some form of quality control checking to ensure data are indeed valid. One tactic we have used in our research is to code qualitative data, of which the researchers have several potential questions. In my casual review, there are a sizable number of participants who did not answer any qualitative questions, which raises some skepticism as to the quality of their responses. I would advise the researchers to provide some form of systematic screening of the qualitative responses to rule out the potential for any inattentive data. It appears the researchers informed participants to “Leave the question blank if you prefer not to answer,” which complicates such an analysis. Regardless, I advise some screening process of some kind.

3. Related to point #3 above, there seems to be many missed opportunities for additional analyses. For example, I ran a quick qualitative analysis on the data and found an interesting pattern in responses to the question: “What do you think was the overall purpose of the study you just completed?” On the average, high pay participants responded with about 30 total characters, while low pay participants responded with about 20 total characters. However, this was not statistically significant. I also found that high pay participants were less likely to put an “I don’t know” response—but again, this isn’t statistically significant.

4. The question “What gender/sex do you identify with?” contains only 3 different responses: Male, Female, or Other. Restricting participants to essentially a binary choice does not seem like an inclusive survey, which could impact participant responses to other questions, or perhaps even their attrition.

5. Regarding attrition, it seems possible that some participants may not have completed the study because it required installing the Inquisit plug-in. It is thereby possible that some variance could be explained by this artifact. To better isolate contributing variables, future studies might consider keeping all tasks within one survey and avoiding the use of plug-in installs.

6. Considering this study included pay amount as a primary IV, it is curious that the researchers did not include an income question in their demographics.

7. It appears that Prolific IDs may be present in the spreadsheet. I advise against sharing this information.

In sum, this was an interesting research question, and I am glad to see empirical research in this domain. However, the rigor of the crowdsourcing methods is not on par with the bulk of crowdsourcing studies I read—which is problematic given this is a study to potentially inform standards of crowdsourcing methods. The missed opportunities for additional analyses—whether due to incomplete demographic details or qualitative responses left unexplored—renders the analyses less rigorous than what is expected for publication in PLOS ONE. I hope the researchers will continue this work and in doing so make the improvements suggested above. I recommend Major Revision given that I am unsure whether the researchers have the data or information available to address my concerns. If so, a revision may be acceptable. If these data do not exist, I do not recommend publication.

Reviewer #3: The authors investigated the effects of pay rate and additional instructions on study attrition on the Prolific crowdsourcing platform via a two-part study. In Part 1, the investigators recruited participants into four groups (low pay/minimal instruction; low pay/added instruction; high pay/minimal instruction; high pay/added instruction) and engaged participants in a number of psychological tasks. Part 2 consisted of questionnaires regarding demographics, experimental task performance, and the autism spectrum. Participants were reminded periodically over a 3-week period to complete Part 2. The investigators reported that higher pay was associated with lower attrition, regardless of instructional category.

I commend the authors for the simple experimental question and design and statistical methods/results that support the conclusions. I think this manuscript could make a nice contribution to the burgeoning field of online crowdsourced psychological research. Overall I found the manuscript to be well written and can recommend publication in the current form.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Oct 4;18(10):e0292372. doi: 10.1371/journal.pone.0292372.r002

Author response to Decision Letter 0


6 Aug 2023

The Abstract is clear but could benefit from the inclusion of the actual sample size employed in the study.

We added this information to the abstract.

The first sentence of the manuscript is unclear as to whether the authors are asserting that half of all participants for research in psychology and social sciences are recruited through crowdsourcing, or if half of all research studies in psychology and social sciences use crowdsourcing to recruit participants.

The latter is correct. We have changed the first sentence to improve clarity.

Related to this, is there any time frame as to the assertion underlying this claim given that crowdsourcing is relatively recent compared to studies undertaken in psychology and social sciences since the turn of the last century? Clearly there is a trend to use crowdsourcing but when did this generally commence?

This began a little over a decade ago; we added this information to the first sentence (e.g., http://journal.sjdm.org/10/10630a/jdm10630a.pdf).

On page 9 and elsewhere, it is claimed that the findings suggest that performance accuracy increases in simple reversal learning tasks with increased pay rates, and that data quality was assessed. There is a need to include more information regarding the definition and criteria used by the authors to assess both accuracy and data quality. Was it simply based on obtained scores on these tasks, and could other factors be excluded from influencing such performance (e.g., subsample characteristics given we do not have data on age and sex distribution)?

We added definitions for both. Regarding the last point, we mention on page 12 that one limitation is that demographic variables (e.g., age, sex) were not controlled for or assessed when examining performance accuracy.

On page 11, “…hourly pay with ranging between…” should read “…hourly pay ranging between…”.

We changed this to “hourly pay ranging between…”

The authors do not provide any information regarding the qualifications of the Prolific participants. When constructing a Prolific study, researchers are permitted the chance to “pre-screen” potential participants. A common approach is to limit recruitment to participants from certain countries and who have completed a minimum number of surveys already (e.g., 100) with a minimum approval rating. Perhaps the researchers used such screenings but failed to report them? If no prescreening was employed, a strong argument should be made as to why they felt these were unnecessary. From personal experience, data are substantially improved (along with attrition) when recruiting participants with established experience in the platform.

We did not use prescreening because research suggests that, unlike MTurk, Prolific generally provides high-quality data without prescreening (Peer et al., 2021). We now include this information in the Methods section.

In addition to minimum qualifications, it is common practice to use some form of quality control checking to ensure data are indeed valid. One tactic we have used in our research is to code qualitative data, of which the researchers have several potential questions. In my casual review, there are a sizable number of participants who did not answer any qualitative questions, which raises some skepticism as to the quality of their responses. I would advise the researchers to provide some form of systematic screening of the qualitative responses to rule out the potential for any inattentive data. It appears the researchers informed participants to “Leave the question blank if you prefer not to answer,” which complicates such an analysis. Regardless, I advise some screening process of some kind.

We did thoroughly screen qualitative data – for example, we looked for nonsensical response patterns or responses that were not in English. However, we did not exclude any data sets for this reason. We added a note about this process to supplemental survey data. Unlike MTurk, Prolific takes a number of steps to eliminate bots (see https://www.prolific.co/blog/bots-and-data-quality-on-crowdsourcing-platforms).

Related to point #3 above, there seems to be many missed opportunities for additional analyses. For example, I ran a quick qualitative analysis on the data and found an interesting pattern in responses to the question: “What do you think was the overall purpose of the study you just completed?” On the average, high pay participants responded with about 30 total characters, while low pay participants responded with about 20 total characters. However, this was not statistically significant. I also found that high pay participants were less likely to put an “I don’t know” response—but again, this isn’t statistically significant.

We appreciate this point and agree that there are many possible measures of data quality. We added a footnote with the suggestion to use number of characters on specific survey questions as a measure of data quality – see page 10.

The question “What gender/sex do you identify with?” contains only 3 different responses: Male, Female, or Other. Restricting participants to essentially a binary choice does not seem like an inclusive survey, which could impact participant responses to other questions, or perhaps even their attrition.

Thank you for this important suggestion. We agree and will modify this questionnaire for future research.

Regarding attrition, it seems possible that some participants may not have completed the study because it required installing the Inquisit plug-in. It is therefore possible that some variance could be explained by this artifact. To better isolate contributing variables, future studies might consider keeping all tasks within one survey and avoiding the use of plug-in installs.

We also appreciate this point and have added this as a limitation in the Discussion.

Considering this study included pay amount as a primary IV, it is curious that the researchers did not include an income question in their demographics.

On page 12, we suggest that future researchers evaluate how demographic variables influence the relation between pay rate and performance accuracy, and we now mention income as another demographic variable that could be evaluated in that section.

It appears that Prolific IDs may be present in the spreadsheet. I advise against sharing this information.

These have been removed.

Attachment

Submitted filename: Response to Reviewers_7.3.23.docx

Decision Letter 1

Jacinto Estima

20 Sep 2023

Effects of Pay Rate and Instructions on Attrition in Crowdsourcing Research

PONE-D-23-00174R1

Dear Dr. Ritchey,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jacinto Estima

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you very much for addressing all the comments provided by the reviewers.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for attending to the original comments. I have no further comments to offer. All issues have been addressed to satisfaction.

Reviewer #3: Given my prior recommendation for publication, as well as the authors' responsiveness to other reviewer comments, I am pleased to recommend this manuscript again for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #3: No

**********

Acceptance letter

Jacinto Estima

26 Sep 2023

PONE-D-23-00174R1

Effects of Pay Rate and Instructions on Attrition in Crowdsourcing Research

Dear Dr. Ritchey:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Jacinto Estima

Academic Editor

PLOS ONE

