PLOS One. 2025 Sep 8;20(9):e0331498. doi: 10.1371/journal.pone.0331498

Earmarking donations to boost study participation? Evidence from a field experiment

Andreas Raff 1,*, Robert Böhm 2,3,4, Christoph Fuchs 1
Editor: Bernhard Reinsberg
PMCID: PMC12416732  PMID: 40920689

Abstract

Charitable donations are often the most suitable way to incentivize study participation, yet their optimal design remains unclear. In a preregistered field experiment, we invited 6,711 psychology faculty members at top-200 universities to complete a survey in exchange for a US $5 donation, testing whether allowing prospective participants to earmark the donation for a specific purpose increases study participation. Contrary to our preregistered hypotheses derived from previous literature, study participation rates did not increase significantly when participants could earmark their donation compared to a random allocation of funds. These findings suggest that although earmarking has been shown to enhance overall donation rates, its effectiveness may not extend to incentivizing study participation.

Introduction

Increasing study participation rates is crucial for conducting robust and reliable academic research. Researchers have long experimented with techniques to raise response rates; the most prominent levers are advance notification, repeated contact attempts, personalization, and the use of incentives [1–3]. Incentives are usually either personal—monetary payments or gifts—or donation-based, whereby the researcher pledges to contribute to charity on the respondent’s behalf. While a substantial literature examines how the timing (pre-paid vs. post-paid), size, and framing (cash, voucher, lottery) of personal incentives shape study participation [2,4–6], donation incentives have received far less attention. Most existing work has focused narrowly on whether such incentives outperform or underperform personal incentives in attracting survey respondents [5–10], with little systematic investigation into how design features of donation incentives perform against each other (one exception is [11]).

Yet understanding how to implement donation incentives optimally to increase response rates is critical. Even if donation incentives are often less effective than personal incentives [5,6], they can still be the best option when budgetary, ethical, or participant-specific constraints rule out personal incentives. For example, some professions—such as public officials, military personnel, or university employees—may not be able to accept personal rewards [12–14]. In studies on sensitive topics, participants may hesitate to provide the personal data required for financial transfers. Most importantly, affluent or time-poor individuals may face opportunity costs that no realistic cash payment within the study budget can offset [15,16]. Under such circumstances, even modest personal incentives are unlikely to attract these participants because they lack strong cognitive valuation [17]. Donation incentives, by contrast, can carry an additional affective valuation, stemming from the psychological benefits of helping others, that makes even a small donation on the participant’s behalf more compelling than an equivalently small personal payment [17]. Thus, there are clear scenarios in which personal incentives are not suitable and donation incentives provide a viable alternative. While this alone warrants research into their optimal design, optimizing donation incentives is also valuable from a welfare perspective: it can boost response rates while simultaneously directing funds to prosocial causes, creating a dual social benefit.

One promising design variation of a donation incentive that, to our knowledge, has neither been implemented in the context of incentivizing study participation nor studied for its effectiveness in this setting is earmarking—allowing participants to choose the specific cause their donation supports. The charitable-giving literature demonstrates that offering individuals the option to select a specific cause for their donation (e.g., malaria vaccinations in Africa or building schools in Nepal) reliably increases donation willingness [18–20].

In this paper, we investigate whether the earmarking effect generalizes from donation willingness to the context of study participation. Specifically, we test the impact of earmarking within donation-based incentives by comparing a conventional, non-earmarked donation incentive to two forms of earmarking. First, we implement a standard earmarking condition in which participants can select a specific cause to directly receive their donation. Second, recognizing that charities value both increased contributions and flexibility in fund allocation, we further introduce a novel earmarking condition that we coin “Earmarking with Flexibility.” In this condition, participants still select their preferred cause, but are informed that the charity may reallocate funds if the chosen cause’s funding target has already been met. This flexible approach addresses operational inefficiencies associated with standard earmarking, which often leads to imbalances with some causes becoming overfunded and others underfunded [21,22]. Our flexible earmarking design thus offers potential for a strong Pareto improvement for researchers and charity organizations—boosting participation rates while preserving charities’ ability to allocate resources optimally.

In sum, this study assesses whether granting participants control over donation allocation—through either standard or flexible earmarking—can improve the effectiveness of a donation incentive. Our findings contribute to the literature on study incentivization in general—and donation incentives in particular—by being the first to experimentally test earmarking in this context. They provide new insights into both the potential and the limitations of earmarking for enhancing participant engagement with donation incentives. Moreover, we contribute to the broader earmarking literature by examining the robustness and generalizability of the effect beyond charitable donations—an essential step in evaluating its external validity [23]. Finally, by developing and testing a novel, flexible earmarking format, we broaden the scope of research on earmarking to consider designs that accommodate the operational constraints faced by charitable organizations.

Study participation

Incentivizing study participation can be understood through the lens of the Leverage–Saliency Theory of Survey Participation [24], which conceptualizes the decision to participate as a cognitive balancing act. Individuals weigh the perceived costs and benefits of participation on a mental scale. Various survey characteristics—such as the study topic, the perceived burden of participation, trust in the researchers, and incentives—carry differential leverage; that is, they differ in how strongly they weigh on the decision to participate. Their salience, or psychological prominence at the moment of decision-making, further determines their effectiveness in tipping the scale. Participation becomes likely when salient factors with negative leverage, like time demands or privacy concerns, are outweighed by those with positive leverage, such as the topic of the study or benefits promised via study incentives. Crucially, these benefits need not be monetary. According to the model of impure altruism [25], individuals can experience intrinsic rewards from helping others—for example, through a charitable donation. If study participation triggers such a donation, then increasing its psychological benefits should, in turn, enhance individuals’ willingness to participate.

A substantial literature has identified a range of design strategies that boost the psychological benefits of donating—and thereby increase the willingness to help. These strategies include, for example, using emotional language, matching donations, storytelling, providing an identifiable victim, or employing earmarking [18–20,26–30]. Yet these insights have not been applied to the design of donation incentives. This paper takes a first step by focusing on earmarking—a simple, consistently effective way of increasing donations—to spearhead research into optimizing donation incentives.

Earmarking

The positive effects of earmarking can be readily explained by Self-Determination Theory (SDT) and its subtheory, Cognitive Evaluation Theory (CET) [31,32]. CET states that intrinsic motivation flourishes when two basic psychological needs—autonomy and competence—are satisfied [31,32]: contexts that meet these needs promote intrinsic motivation, and those that thwart them undermine it. Earmarking arguably meets both needs at once.

Granting meaningful choice is the most direct route to supporting autonomy [33]. A meta-analysis by Patall et al. [33] shows that even letting people decide which reward they will receive for an action—labelled choice-of-reward—reliably boosts intrinsic motivation. Earmarking in a donation incentive fits this category: participants choose the charitable project that will receive the donation, thereby exercising genuine choice over their reward for study participation and satisfying their need for autonomy.

Earmarking also addresses the need for competence, or the felt effectance of one’s actions [34]. When donors steer funds to a specific, clearly defined project, a decision usually made by the charity shifts to them [19]. This renders the outcome of their contribution visible and fosters the subjective experience of making a real difference—what the literature calls perceived impact. Empirical studies show that perceived impact boosts donation intentions [27,35,36] and mediates the positive effect of earmarking on giving [18]. Beyond satisfying competence needs, the positive effect of perceived impact enabled by earmarking may partly derive from a sense of personal causation—the feeling that “I, personally, made this happen”. This mechanism is highlighted in impact philanthropy [37], which suggests that donors derive value not only from what was achieved but also from their self-attributed role in making it happen.

A collateral benefit of earmarking is increased transparency because it clarifies how donations are used. This financial transparency has been shown to increase trust in organizations [38], and greater trust, in turn, is positively associated with willingness to give [39].

Taken together, these psychological mechanisms help explain why earmarking increases donations [18–20]. While, in theory, earmarking could raise either the amount donated or the likelihood of donating, empirical evidence suggests that its positive impact stems primarily from the latter. Specifically, only Esterzon et al. [19] observed a positive effect of earmarking on donation amount, whereas Fuchs et al. [18] and Özer et al. [20] did not. In contrast, all three studies consistently reported that earmarking positively affects the likelihood of donating, indicating that offering earmarking attracts more donors. Thus, when extending these findings to participation incentives, earmarking may similarly broaden the pool of potential respondents and increase study participation rates.

However, it is important to recognize a potentially fundamental distinction between donations paid with one’s own money and externally funded donations that may attenuate the effectiveness of donation incentives in general—and of earmarking in particular: participants’ psychological ownership of the funds that are donated [40,41]. In a traditional giving framework, donors spend their own resources. A donation is typically conceptualized as reducing personal wealth, i.e., entailing a financial sacrifice that generates the associated emotional rewards [25]. By contrast, incentive-based donations are supplied externally. Because these funds never genuinely belong to participants, they experience no financial sacrifice and therefore may feel less emotionally invested [42]. Consequently, incentivized donations may yield weaker emotional rewards than self-funded donations in general.

Because perceived control breeds psychological ownership [40], giving respondents a say in where the incentive money goes—through earmarking—may partly offset the ownership deficit built into donation incentives. This control may cue a fleeting sense among participants that the money is, in some meaningful way, “theirs.” Critically, while earmarking should still enhance perceived impact (satisfying competence needs), reduced psychological ownership may weaken the felt personal impact among participants—specifically, the sense that “I personally caused this change” [37]. Consequently, where donors value personal causation, some emotional rewards may be lost.

Moreover, while the transparency inherent in earmarking may typically foster trust, the motivational impact of trust is likely to be attenuated in this context. Trust only becomes relevant when some degree of risk is present [43]. Consistent with this, empirical research shows that the influence of trust on decision-making tends to be moderated by perceived risk [44,45], with trust becoming more influential as perceived risk increases. In the case of donation incentives, however, risk is minimal: the monetary stakes are typically small, and participants do not contribute their own funds. Consequently, the trust-enhancing aspect of earmarking via transparency may do little to increase motivation in this low-risk setting.

As a result, reduced psychological ownership may diminish both emotional involvement and felt personal causation, potentially limiting earmarking’s effectiveness. Combined with trust’s attenuated role in low-risk contexts, earmarking offered in a donation incentive may not replicate the motivational pull observed in self-funded donations, where donors spend their own money. Still, earmarking has consistently increased donations across diverse studies and settings. Because it satisfies two basic psychological needs identified by CET—autonomy and competence [31]—its motivational benefits should, in principle, generalize to a range of behavioral outcomes beyond charitable giving, such as study participation.

Methods

To test our hypothesis, we conducted a large-scale field experiment targeting a time-constrained population with high opportunity costs—academic scholars—for whom, as previously outlined, a donation incentive is particularly appropriate. The study used real incentives that were paid out to a designated charity organization. The experiment employed a single-factor between-participants design with three conditions: Random, Earmarking, and Earmarking with Flexibility (see below). The study was preregistered at https://aspredicted.org/5zj6r.pdf.

Ethics statement

The study was voluntary, involved minimal risk, and did not collect any personally identifiable information. Given the nature of this research, and in line with institutional guidelines set by the Departmental Review Board of the Department of Occupational, Economic, and Social Psychology at the University of Vienna—where ethical approval is generally voluntary—we did not seek formal ethics approval. Written informed consent was obtained from all participants prior to participation.

Study procedure and experimental conditions

Invitation emails were sent out on January 15, 2024, followed by a reminder on January 24, 2024. The recruitment period ended on February 13, 2024. In both emails, eligible participants were invited to take part in a study on hiring preferences in the field of psychology. The invitation emails began by introducing the study procedure and informing recipients about the estimated time for completion (10 minutes). Further, we notified all email recipients that if they decided to participate, we would donate $5 on their behalf to the Society for the Improvement of Psychological Science (SIPS; https://improvingpsych.org/).

Next, we introduced our manipulations to the participants. We informed all of them that the $5 donation would either be randomly allocated to one of three purposes currently supported by SIPS (Random condition) or that they could choose one of the three purposes (Earmarking and Earmarking with Flexibility conditions). The supported purposes included: (i) supporting preprints in psychology (PsyArXiv), (ii) a diversity travel fund, and (iii) a student/postdoc travel fund. The two earmarking conditions differed only in one additional sentence added to the Earmarking with Flexibility condition, which informed participants that SIPS could change the distribution of donations among purposes if the donation goal for a specific purpose was reached.

After clicking on the survey link, but prior to participation, individuals were informed that no risks were expected from the study, that their data would be completely anonymized, and that any findings would only be reported in aggregate form, accessible exclusively to the research team. They were also advised that they could withdraw from the study at any time without penalty. By clicking “I Agree,” participants confirmed that they had read the consent form and consented to take part in the study.

After completing the study, participants were asked to provide information on the focus of their scientific work (qualitative, quantitative, or both), their academic position (professor, associate professor, assistant professor, or other), as well as their age and gender.

Participants

We invited 6,711 academics employed at psychology departments of universities ranked within the global top 200 to participate in our study. The invitations targeted the 171 top-200 universities (based on the 2023 Times Higher Education Ranking) that offer psychology programs. The contact details for the 6,711 individuals were collected manually between May 2023 and January 2024 by searching the psychology departments of the respective universities via the Google search engine and copying the relevant information on academic faculty members. We randomly assigned n = 2,237 invitees to each experimental condition. Of all contacted eligible participants, 406 completed the study (mean age = 46.82 years, SD = 10.6; 50.2% women; 44.3% full professors, 22.9% associate professors, 23.4% assistant professors, 9.4% other).

Dependent variables

We calculated the ratios of our two main dependent variables for each experimental condition: survey begun and completion. First, survey begun is defined as the number of people who agreed to participate divided by the number of people in the experimental condition (i.e., those who received the invitation email). This measure reflects the number of participants who started the survey by providing informed consent on the first survey page, regardless of whether they subsequently completed the study. Second, completion is defined as the number of people who completed the survey divided by the number of people in the experimental condition. Additionally, for exploratory purposes, we calculated a third dependent variable—henceforth denoted clicked on survey—representing the ratio of people who merely clicked on the survey link (derived by counting the unique respondent IDs in the dataset) per experimental condition.
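For concreteness, these ratios can be computed directly from the raw counts. The sketch below is illustrative only (not the authors’ analysis code) and uses the counts reported in the Results section; the per-condition completion counts are derived from the begun counts minus the reported attrition.

```python
# Illustrative only: outcome rates per condition from the counts reported in
# the Results section. Completion counts for the two earmarking conditions
# are derived as begun minus attrition: 176 - 32 = 144 and 160 - 32 = 128.
invited = {"Random": 2237, "Earmarking": 2237, "Earmarking with Flexibility": 2237}
begun = {"Random": 162, "Earmarking": 176, "Earmarking with Flexibility": 160}
completed = {"Random": 134, "Earmarking": 144, "Earmarking with Flexibility": 128}

for cond in invited:
    print(f"{cond}: survey begun = {begun[cond] / invited[cond]:.1%}, "
          f"completion = {completed[cond] / invited[cond]:.1%}")
# Random: 7.2% / 6.0%; Earmarking: 7.9% / 6.4%;
# Earmarking with Flexibility: 7.2% / 5.7% (matching Table 1)
```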

Analysis strategy

As preregistered, we combined both earmarking conditions for our main analysis, henceforth referred to as the Earmarking Combined conditions. Specifically, we tested for differences in the rates of our dependent variables between the Earmarking Combined conditions and the Random condition using a one-tailed two-proportion z-test, expecting higher participation rates in the Earmarking Combined conditions than in the Random condition. Similarly, when comparing the earmarking conditions separately with the Random condition, we performed one-tailed two-proportion z-tests. As preregistered, since we had no preconceived notions about potential differences between the two earmarking conditions, we compared them using two-tailed tests to detect possible differences in either direction. All exploratory analyses likewise employed two-tailed tests.
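As an illustration of this testing approach, the following sketch runs the preregistered one-tailed two-proportion z-test on the survey-begun counts reported in the Results section. It assumes Python with statsmodels; the paper does not specify its analysis software, and small rounding differences can arise from the choice of pooled vs. unpooled variance.

```python
# A minimal sketch of the one-tailed two-proportion z-test (assumed
# implementation; the paper does not report its analysis software).
from statsmodels.stats.proportion import proportions_ztest

count = [336, 162]   # survey begun: Earmarking Combined, Random
nobs = [4474, 2237]  # invitees per condition

# H1: rate(Earmarking Combined) > rate(Random)
z, p = proportions_ztest(count, nobs, alternative="larger")
print(f"z = {z:.2f}, one-tailed p = {p:.3f}")
# approx. z = 0.40, p = .345-.347 depending on the variance estimator
```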

Before conducting the study, we ran a sensitivity analysis to assess the appropriateness of our sample size for examining the hypothesis under consideration. To set a plausible baseline participation rate, we reviewed studies involving the same population—academic psychologists—who were asked to complete studies on a similar topic. Two studies stood out as particularly relevant: Donnelly et al. [46] and Anderson et al. [47]. Donnelly et al. [46] do not report how many individuals were contacted, so no participation rate can be determined. In contrast, Anderson et al. [47] report a 9% participation rate among psychology and management faculty worldwide. Based on this figure—and our own informal experience with similar, time-pressured academic samples—we used 10% as a rounded, illustrative baseline. Using a one-tailed two-proportion z-test with an alpha level of 0.05 and a power of 80%, the sensitivity analysis indicated that our sample size was sufficient to detect an increase in the participation rate to 12% in the Earmarking Combined conditions (relative to an assumed participation rate of 10% in the Random condition).
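This sensitivity analysis can be approximated with a standard normal-approximation power calculation for two independent proportions. The sketch below, again assuming statsmodels (not necessarily the authors’ tool), recovers roughly 80% power for a 10% to 12% increase given the realized group sizes.

```python
# A sketch of the sensitivity analysis under a normal-approximation power
# model for two independent proportions (assumed tooling).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for an increase from 10% (Random) to 12% (Earmarking Combined)
h = proportion_effectsize(0.12, 0.10)

# Achieved power with the realized group sizes, one-tailed alpha = .05;
# ratio = nobs2 / nobs1 (Random relative to Earmarking Combined)
power = NormalIndPower().power(effect_size=h, nobs1=4474, alpha=0.05,
                               ratio=2237 / 4474, alternative="larger")
print(f"Cohen's h = {h:.3f}, power = {power:.2f}")  # roughly 0.80
```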

Furthermore, null results in this paper are accompanied by equivalence tests [48], evaluating the hypothesis that the effect size is at least as great as our smallest effect of interest, Δ = 2 percentage points. To identify a plausible minimum effect size that would justify the additional administrative effort of implementing earmarking, we reviewed meta-analyses on financial incentives in web- and electronic-based surveys, which reveal that such incentives can increase the odds of participation by factors of 1.39 to 2.43 [49–51]. Assuming a baseline participation rate of 10%, these odds ratios correspond to increases in participation of approximately 4–12 percentage points, demonstrating that participation can shift substantially in response to incentive structures. Based on this, we set a 2 percentage-point increase as the minimum effect size that would render earmarking practically worthwhile in our context. From the researcher’s perspective, an effect of this magnitude could potentially justify the additional organizational effort required to implement earmarked donations, including establishing a collaboration with a charity, ensuring proper fund allocation, and managing follow-up communication. From the charity’s perspective, the expected increase in donations could potentially outweigh the operational inefficiencies and reduced flexibility associated with earmarking. If the equivalence test is statistically significant (peq < 0.05), we conclude that the true effect is likely smaller than Δ, i.e., that the data are most compatible with the absence of a practically meaningful effect.
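The equivalence test can be sketched as a one-sided test of the observed difference against the margin Δ. A minimal sketch, again assuming statsmodels and using the survey-begun counts: the null hypothesis is that the true difference is at least Δ = 0.02, so a small peq indicates the difference is smaller than the margin.

```python
# A sketch of the one-sided equivalence test against Δ = 2 percentage points
# (assumed implementation). H0: diff >= 0.02; peq < .05 implies diff < Δ.
from statsmodels.stats.proportion import proportions_ztest

count = [336, 162]   # survey begun: Earmarking Combined, Random
nobs = [4474, 2237]

z_eq, p_eq = proportions_ztest(count, nobs, value=0.02, alternative="smaller")
print(f"z_eq = {z_eq:.2f}, peq = {p_eq:.3f}")  # peq close to the reported .005
```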

Results

Preregistered analyses

First, as preregistered, we examined whether the treatment groups differed in terms of starting the study by providing consent. The results revealed no significant difference between the 7.2% (162 out of 2,237) of the participants in the Random condition and the 7.5% (336 out of 4,474) of the participants in the Earmarking Combined conditions who gave their consent and began to participate in the study (z = 0.40, p = .345; peq = .005). Attrition among those who began the survey was low and balanced across conditions—17.3% (28/162) in Random, 18.2% (32/176) in Earmarking, and 20.0% (32/160) in Earmarking with Flexibility (all exploratory between-condition comparisons: two-tailed p > .50).

Next, we analyzed whether participants in the different experimental conditions differed in terms of completing the survey. The results revealed that 6.0% (134 out of 2,237) of participants in the Random condition and 6.1% (272 out of 4,474) of participants in the Earmarking Combined conditions completed the study (z = 0.14, p = .444, peq = .001). Moreover, there was no indication that the results differed between the first and the second wave of invitation e-mails (see S1 Table Robustness checks).

As further preregistered, we investigated whether there are differences between the three experimental conditions without conflating the two earmarking groups. As displayed in Table 1, and further underscored by the visual representation in Fig 1, there were no significant differences in any of the outcome measures across all the possible treatment comparisons.

Table 1. Results of significance and equivalence tests.

Comparison | Dependent variable | Rates (p, peq)

Preregistered analyses:
Earmarking Combined vs. Random | Survey begun | 7.5% vs. 7.2% (p = .345, peq = .005)
Earmarking Combined vs. Random | Completion | 6.1% vs. 6.0% (p = .444, peq = .001)
Earmarking vs. Random | Survey begun | 7.9% vs. 7.2% (p = .215, peq = .041)
Earmarking vs. Random | Completion | 6.4% vs. 6.0% (p = .268, peq = .016)
Earmarking with Flexibility vs. Random | Survey begun | 7.2% vs. 7.2% (p = .452, peq = .007)
Earmarking with Flexibility vs. Random | Completion | 5.7% vs. 6.0% (p = .352, peq = .007)
Earmarking vs. Earmarking with Flexibility | Survey begun | 7.9% vs. 7.2% (p = .363, peq = .052)
Earmarking vs. Earmarking with Flexibility | Completion | 6.4% vs. 5.7% (p = .318, peq = .036)

Exploratory analysis:
Earmarking Combined vs. Random | Clicked on survey | 15.0% vs. 13.5% (p = .095, peq = .297)

Fig 1. Results of the three outcome measures. Proportion of participants who clicked on the survey, started the survey, and completed the experiment across experimental conditions, with 95% confidence intervals.

Exploratory analyses

Regarding the secondary outcome measure of whether participants clicked on the survey link at all, 13.5% (302 out of 2,237) of participants in the Random condition and 15.0% (672 out of 4,474) in the Earmarking Combined conditions (Earmarking: 15.7%; Earmarking with Flexibility: 14.3%) clicked on the survey, which is not a significant difference using a two-tailed test (z = 1.6, p = .095, peq = .297).

As shown in S1 Table in the online supplement, participant characteristics were largely balanced across experimental conditions regarding age, whether the participant works mainly quantitatively or qualitatively, ranking of the university, position held, and commitment to open science practices (COSP). Thus, we found no indication that participant characteristics moderate the treatment effects.

Discussion

Our results suggest that earmarking did not increase study participation. This lack of evidence for a meaningful earmarking effect held irrespective of the type of earmarking condition and of the outcome measure.

Implications

While future research is needed to better understand and contextualize the absence of effects observed in this study, practical implications can already be drawn from our findings. Specifically, our results suggest that earmarking is not an effective way to increase study participation, but its use does not harm participation either. Therefore, earmarking could still be a viable option in contexts where donation-based incentives are deemed appropriate. However, when considering earmarking, researchers should weigh its potential benefits against its downsides. As noted earlier, earmarking can reduce the organizational flexibility of recipient charities [21,22] and can impair funding efficiency, project outcomes, and overall impact [52–54]. Moreover, earmarking creates additional administrative burdens for researchers, including coordinating with charities, ensuring correct fund allocation, and managing follow-up communications. Given these costs and the null effects we observed, earmarking should only be integrated into donation incentives when its potential advantages clearly outweigh these downsides.

Limitations

Despite the strengths of our large-scale field experiment, several limitations emerge from our design—each of which can be meaningfully linked to Leverage–Saliency Theory of Survey Participation.

First, the modest donation amount of $5 may have limited the incentive’s perceived value (i.e., leverage), thereby weakening its ability to motivate participation. Although significant earmarking effects have been observed even with very small sums (as low as $1 [20]), and the emotional rewards of giving are detectable at amounts as modest as $5 (e.g., [55]), future research should explore whether larger donation incentive amounts combined with earmarking enhance leverage and increase participation rates. This may be particularly relevant because, as theorized, psychological ownership of the funds may be lower for donation incentives than for donations paid with one’s own money, which may suggest that higher amounts are needed for the positive effects of earmarking to manifest in this context.

Second, our sample—academic scholars—represents a highly distinct population. This group regularly designs and engages with studies and experiments, which might lead to different reactions to experimental stimuli. Extensive experience with experimental paradigms has been demonstrated to potentially attenuate the impact of experimental manipulations [56,57]. According to the Leverage–Saliency Theory of Survey Participation framework, experience may therefore reduce the salience of the incentive and its design variations, thereby diminishing their potential to influence participation. Future studies should examine whether less experienced or more diverse populations respond differently, providing more generalizable insights into earmarking effects.

Third, we did not directly assess participants’ trust in SIPS or the importance they attached to its mission. While we believe many participants likely had strong confidence in SIPS’s stewardship—given its sustained engagement in the academic-psychology community, its commitment to supporting and promoting open and transparent research practices, and its annual conferences that draw 500–1,000 attendees—individual perceptions of trust may still vary meaningfully. Likewise, although we chose SIPS to maximize mission importance by aligning the cause with participants’ professional identity and values, factors known to increase donation willingness [58,59], we cannot be sure that every participant regarded the organization’s mission as important. Within the Leverage–Saliency Theory of Survey Participation framework, trust in the charity organization and perceived mission importance are potentially central to a donation incentive’s leverage: only when the charity is trusted and its mission deemed important might variations in the donation incentive (e.g., earmarked vs. non-earmarked) meaningfully influence participation. If trust or mission importance is low, the incentive’s leverage is likely reduced; and if trust and mission importance vary widely across individuals, the effect of manipulating donation-incentive design features may be diluted, making true differences harder to detect. Future work should therefore measure trust and mission importance directly to clarify how these attributes potentially moderate earmarking effects.

Fourth, and related to the previous limitation, the purposes available for donations at SIPS (i.e., a preprinting platform and two types of travel funds) are arguably closely related, and participants may have perceived smaller differences between these purposes than, for instance, between different projects supported by the Red Cross, with its much broader mission and more diverse projects. This limited thematic differentiation may have weakened the perceived meaningfulness of the choice, potentially reducing the leverage of the earmarking manipulation. Because our design thus minimizes cause differentiation, the earmarking effect we (fail to) detect can be interpreted as a conservative lower-bound estimate. Future research should explicitly vary cause differentiation and test whether the thematic distance between causes moderates the earmarking effect.

Finally, a core strength of earmarking lies in its ease of implementation—it can be conveyed in just a few words. However, this simplicity can also be a weakness: the manipulation may be too subtle, leading participants to overlook or fail to register it entirely. Although we attempted to enhance visibility (i.e., saliency) by formatting the manipulation in bold, it remains possible that some participants did not attend to it. Future studies should therefore include manipulation checks to distinguish between those who noticed and understood the experimental manipulations and those who did not.

Considering these limitations, we cannot claim that earmarking is equally ineffective in other contexts and with other samples. We hope that our findings will inspire future research on the generalizability of earmarking effects to different contexts and on potential moderator variables, which may provide important theoretical insights into the very nature of earmarking effects, as well as into when they can be expected to be effective.

Future research

A promising direction for future studies is to examine the role of psychological ownership in differentiating donations made with one’s own money from externally funded donation incentives; as theorized above, reduced ownership could be one reason why earmarking did not prove effective in this instance. To investigate this, researchers first need methods for experimentally varying psychological ownership of the incentive funds. The study by Stoffel et al. [60] provides a useful template. In their research, survey participation increased when respondents could choose between a personal incentive (a $2 Amazon voucher) and a “decoy” donation incentive, compared to a condition that offered only the personal incentive. By letting respondents select between the personal and the donation incentive, the authors effectively gave participants complete freedom over the funds and thereby maximized psychological ownership. Future research could investigate this design in combination with earmarking. Although Stoffel et al. [60] found increased survey participation when a donation incentive (for a pre-specified charity organization) was available, ultimately 95% of respondents still chose the personal incentive. Integrating earmarking into this framework could potentially improve social welfare outcomes by not only enhancing participation rates but also encouraging more respondents to opt for donations over personal rewards.

Conclusion

In summary, our findings suggest that earmarking may not prove effective in motivating study participation. Generalization and extension studies like ours play a critical role in advancing scientific knowledge by highlighting the boundaries and limitations of previously established effects, particularly when extending them to new application domains. They may thus help prevent the overgeneralization of certain effects, ensuring that interventions and strategies are applied appropriately and effectively [61,62].

Supporting information

S1 Table. Characteristics of participants.

Participants who completed the study (n = 406), broken down by experimental condition.

(DOCX)


Acknowledgments

We gratefully acknowledge the cooperation with the Society for the Improvement of Psychological Science (SIPS) in the context of this project. The study generated a total donation of $2,030 to the organization.

Data Availability

The study materials, supplementary analyses, and data are publicly available from the Open Science Framework repository (https://osf.io/ewz6v/).

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Heerwegh D, Vanhove T, Matthijs K, Loosveldt G. The effect of personalization on response rates and data quality in web surveys. International Journal of Social Research Methodology. 2005;8(2):85–99. doi: 10.1080/1364557042000203107 [DOI] [Google Scholar]
  • 2.Singer E, Ye C. The Use and Effects of Incentives in Surveys. The ANNALS of the American Academy of Political and Social Science. 2012;645(1):112–41. doi: 10.1177/0002716212458082 [DOI] [Google Scholar]
  • 3.Sammut R, Griscti O, Norman IJ. Strategies to improve response rates to web surveys: A literature review. Int J Nurs Stud. 2021;123:104058. doi: 10.1016/j.ijnurstu.2021.104058 [DOI] [PubMed] [Google Scholar]
  • 4.Yu J, Cooper H. A Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research. 1983;20(1):36. doi: 10.2307/3151410 [DOI] [Google Scholar]
  • 5.Warriner K, Goyder J, Gjertsen H, Hohner P, McSpurren K. Charities, no; lotteries, no; cash, yes: main effects and interactions in a Canadian incentives experiment. Public Opinion Quarterly. 1996;60:562. [Google Scholar]
  • 6.Deutskens E, de Ruyter K, Wetzels M, Oosterveld P. Response Rate and Response Quality of Internet-Based Surveys: An Experimental Study. Marketing Letters. 2004;15(1):21–36. doi: 10.1023/b:mark.0000021968.86465.00 [DOI] [Google Scholar]
  • 7.Robertson DH, Bellenger DN. A New Method of Increasing Mail Survey Responses: Contributions to Charity. Journal of Marketing Research. 1978;15(4):632. doi: 10.2307/3150635 [DOI] [Google Scholar]
  • 8.Hubbard R, Little EL. Promised Contributions to Charity and Mail Survey Responses: Replication With Extension. Public Opinion Quarterly. 1988;52(2):223. doi: 10.1086/269096 [DOI] [Google Scholar]
  • 9.Penn JM, Hu W. Payment versus charitable donations to attract agricultural and natural resource survey participation. J of Agr App Econ Assoc. 2023;2(3):461–80. doi: 10.1002/jaa2.72 [DOI] [Google Scholar]
  • 10.Conn K, Mo CH, Purohit B. Differential efficacy of survey incentives across contexts: experimental evidence from Australia, India, and the United States. PSRM. 2024;:1–10. doi: 10.1017/psrm.2024.53 [DOI] [Google Scholar]
  • 11.Gendall P, Healey B. Effect of a Promised Donation to Charity on Survey Response. International Journal of Market Research. 2010;52(5):565–77. doi: 10.2501/s147078531020148x [DOI] [Google Scholar]
  • 12.Standards of Ethical Conduct for Employees of the Executive Branch, 5 C.F.R. pt. 2635 (2025) [cited 2025 Jun 23]. Available from: https://www.ecfr.gov/current/title-5/chapter-XVI/subchapter-B/part-2635
  • 13.University of Pittsburgh Human Research Protection Office. Department of Defense. 2022 March 3 [cited 2025 Jun 23]. In: Policies & Procedures [Internet]. Pittsburgh (PA): University of Pittsburgh. Available from: https://www.hrpo.pitt.edu/policies-and-procedures/department-defense
  • 14.California State University, East Bay. Office of Research and Sponsored Programs. [date unknown] [cited 2025 Jun 23]. In: Office of Research and Sponsored Programs [Internet]. Hayward, CA: California State University, East Bay. Available from: https://www.csueastbay.edu/orsp/
  • 15.Philipson T. Data Markets and the Production of Surveys. The Review of Economic Studies. 1997;64(1):47. doi: 10.2307/2971740 [DOI] [Google Scholar]
  • 16.Moran K. Recruiting High-Income Participants: Challenges and Tips. 2022. May 1 [cited 2025 Jun 23]. In: Articles & Videos [Internet]. Dover, DE: Nielsen Norman Group. Available from: https://www.nngroup.com/articles/high-income-participants/ [Google Scholar]
  • 17.Khan U, Goldsmith K, Dhar R. When Does Altruism Trump Self-Interest? The Moderating Role of Affect in Extrinsic Incentives. Journal of the Association for Consumer Research. 2020;5(1):44–55. doi: 10.1086/706512 [DOI] [Google Scholar]
  • 18.Fuchs C, de Jong MG, Schreier M. Earmarking Donations to Charity: Cross-cultural Evidence on Its Appeal to Donors Across 25 Countries. Management Science. 2020;66(10):4820–42. doi: 10.1287/mnsc.2019.3397 [DOI] [Google Scholar]
  • 19.Esterzon E, Lemmens A, Van den Bergh B. Enhancing Donor Agency to Improve Charitable Giving: Strategies and Heterogeneity. Journal of Marketing. 2023;87(4):636–55. doi: 10.1177/00222429221148969 [DOI] [Google Scholar]
  • 20.Özer Ö, Urrea G, Villa S. To Earmark or to Nonearmark? The Role of Control, Transparency, and Warm-Glow. M&SOM. 2024;26(2):739–57. doi: 10.1287/msom.2022.0096 [DOI] [Google Scholar]
  • 21.Toyasaki F, Wakolbinger T. Impacts of earmarked private donations for disaster fundraising. Ann Oper Res. 2011;221(1):427–47. doi: 10.1007/s10479-011-1038-5 [DOI] [Google Scholar]
  • 22.Dube N, van Wassenhove L, van der Vaart T. Earmarked funding: Four reasons why we shouldn’t dictate where our charitable donations go. 2022 Apr 21 [cited 2025 Jan 29]. In: British Politics and Policy blog [Internet]. London: Blog LSE. Available from: https://blogs.lse.ac.uk/politicsandpolicy/earmarked-funding-four-reasons-why-we-shouldnt-dictate-where-our-charitable-donations-go/
  • 23.Lynch JG Jr, Bradlow ET, Huber JC, Lehmann DR. Reflections on the replication corner: In praise of conceptual replications. International Journal of Research in Marketing. 2015;32(4):333–42. doi: 10.1016/j.ijresmar.2015.09.006 [DOI] [Google Scholar]
  • 24.Groves RM, Singer E, Corning A. Leverage-saliency theory of survey participation: description and an illustration. Public Opin Q. 2000;64(3):299–308. doi: 10.1086/317990 [DOI] [PubMed] [Google Scholar]
  • 25.Andreoni J. Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow Giving. The Economic Journal. 1990;100(401):464. doi: 10.2307/2234133 [DOI] [Google Scholar]
  • 26.Merchant A, Ford JB, Sargeant A. Charitable organizations’ storytelling influence on donors’ emotions and intentions. Journal of Business Research. 2010;63(7):754–62. doi: 10.1016/j.jbusres.2009.05.013 [DOI] [Google Scholar]
  • 27.Cryder C, Loewenstein G. The critical link between tangibility and generosity. In: The science of giving: Experimental approaches to the study of charity. New York, NY, US: Psychology Press; 2011. p. 237–51 (Society for Judgment and Decision Making series). [Google Scholar]
  • 28.Anik L, Norton MI, Ariely D. Contingent Match Incentives Increase Donations. Journal of Marketing Research. 2014;51(6):790–801. doi: 10.1509/jmr.13.0432 [DOI] [Google Scholar]
  • 29.Paxton P, Velasco K, Ressler RW. Does Use of Emotion Increase Donations and Volunteers for Nonprofits? Am Sociol Rev. 2020;85(6):1051–83. doi: 10.1177/0003122420960104 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Saeri AK, Slattery P, Lee J, Houlden T, Farr N, Gelber RL, et al. What Works to Increase Charitable Donations? A Meta-Review with Meta-Meta-Analysis. Voluntas. 2022;34(3):626–42. doi: 10.1007/s11266-022-00499-y [DOI] [Google Scholar]
  • 31.Deci EL, Ryan RM. Intrinsic motivation and self-determination in human behavior. New York, NY: Plenum Press; 1985. (Perspectives in social psychology). [Google Scholar]
  • 32.Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. 2000;55(1):68–78. doi: 10.1037//0003-066x.55.1.68 [DOI] [PubMed] [Google Scholar]
  • 33.Patall EA, Cooper H, Robinson JC. The effects of choice on intrinsic motivation and related outcomes: a meta-analysis of research findings. Psychol Bull. 2008;134(2):270–300. doi: 10.1037/0033-2909.134.2.270 [DOI] [PubMed] [Google Scholar]
  • 34.White RW. Motivation reconsidered: the concept of competence. Psychol Rev. 1959;66:297–333. doi: 10.1037/h0040934 [DOI] [PubMed] [Google Scholar]
  • 35.Touré-Tillery M, Fishbach A. Too far to help: The effect of perceived distance on the expected impact and likelihood of charitable action. J Pers Soc Psychol. 2017;112(6):860–76. doi: 10.1037/pspi0000089 [DOI] [PubMed] [Google Scholar]
  • 36.Aknin LB, Dunn EW, Whillans AV, Grant AM, Norton MI. Making a difference matters: Impact unlocks the emotional benefits of prosocial spending. Journal of Economic Behavior & Organization. 2013;88:90–5. doi: 10.1016/j.jebo.2013.01.008 [DOI] [Google Scholar]
  • 37.Duncan B. A theory of impact philanthropy. Journal of Public Economics. 2004;88(9–10):2159–80. doi: 10.1016/s0047-2727(03)00037-9 [DOI] [Google Scholar]
  • 38.Ghoorah U, Mariyani-Squire E, Zoha Amin S. Relationships between financial transparency, trust, and performance: an examination of donors’ perceptions. Humanit Soc Sci Commun. 2025;12(1). doi: 10.1057/s41599-025-04640-2 [DOI] [Google Scholar]
  • 39.Harris EE, Neely D. Determinants and Consequences of Nonprofit Transparency. Journal of Accounting, Auditing & Finance. 2018;36(1):195–220. doi: 10.1177/0148558x18814134 [DOI] [Google Scholar]
  • 40.Pierce JL, Kostova T, Dirks KT. Toward a Theory of Psychological Ownership in Organizations. The Academy of Management Review. 2001;26(2):298. doi: 10.2307/259124 [DOI] [Google Scholar]
  • 41.Peck J, Luangrath AW. A review and future avenues for psychological ownership in consumer research. Consumer Psychology Review. 2022;6(1):52–74. doi: 10.1002/arcp.1084 [DOI] [Google Scholar]
  • 42.Ramaseshan B, Stein A, Rabbanee FK. Status demotion in hierarchical loyalty programs: effects of payment source. The Service Industries Journal. 2016;36(9–10):375–95. doi: 10.1080/02642069.2016.1219721 [DOI] [Google Scholar]
  • 43.Molm LD, Takahashi N, Peterson G. Risk and Trust in Social Exchange: An Experimental Test of a Classical Proposition. American Journal of Sociology. 2000;105(5):1396–427. doi: 10.1086/210434 [DOI] [Google Scholar]
  • 44.Paulssen M, Roulet R, Wilke S. Risk as moderator of the trust-loyalty relationship. European Journal of Marketing. 2014;48(5/6):964–81. doi: 10.1108/ejm-11-2011-0657 [DOI] [Google Scholar]
  • 45.Qalati SA, Vela EG, Li W, Dakhan SA, Hong Thuy TT, Merani SH. Effects of perceived service quality, website quality, and reputation on purchase intention: The mediating and moderating roles of trust and perceived risk in online shopping. Cogent Business & Management. 2021;8(1). doi: 10.1080/23311975.2020.1869363 [DOI] [Google Scholar]
  • 46.Donnelly K, McKenzie CRM, Müller-Trede J. Do publications in low-impact journals help or hurt a CV? J Exp Psychol Appl. 2019;25(4):744–52. doi: 10.1037/xap0000228 [DOI] [PubMed] [Google Scholar]
  • 47.Anderson CG, McQuaid RW, Wood AM. The effect of journal metrics on academic resume assessment. Studies in Higher Education. 2022;47(11):2310–22. doi: 10.1080/03075079.2022.2061446 [DOI] [Google Scholar]
  • 48.Lakens D. Equivalence tests: A practical primer for t tests, correlations, and meta-analyses. Social Psychological and Personality Science. 2017;8(4):355–62. doi: 10.1177/1948550617697177 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.David MC, Ware RS. Meta-analysis of randomized controlled trials supports the use of incentives for inducing response to electronic health surveys. J Clin Epidemiol 2014; 67(11):1210–21. [DOI] [PubMed] [Google Scholar]
  • 50.van Gelder MMHJ, Vlenterie R, IntHout J, Engelen LJLPG, Vrieling A, van de Belt TH. Most response-inducing strategies do not increase participation in observational studies: a systematic review and meta-analysis. J Clin Epidemiol. 2018;99:1–13. doi: 10.1016/j.jclinepi.2018.02.019 [DOI] [PubMed] [Google Scholar]
  • 51.Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Woolf B, Perkins C. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2023;11(11):MR000008. doi: 10.1002/14651858.MR000008.pub5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Heinzel M, Cormier B, Reinsberg B. Earmarked Funding and the Control–Performance Trade-Off in International Development Organizations. Int Org. 2023;77(2):475–95. doi: 10.1017/s0020818323000085 [DOI] [Google Scholar]
  • 53.Heinzel M, Reinsberg B, Zaccaria G. Core funding and the performance of international organizations: Evidence from UNDP projects. Regulation Governance. 2024;19(3):957–76. doi: 10.1111/rego.12632 [DOI] [Google Scholar]
  • 54.Heinzel M, Reinsberg B, Siauwijaya C. Understanding Resourcing Trade-offs in International Organizations: Evidence from an Elite Survey Experiment. The Journal of Politics. 2025. doi: 10.1086/736339 [DOI] [Google Scholar]
  • 55.Dunn EW, Aknin LB, Norton MI. Spending money on others promotes happiness. Science. 2008;319(5870):1687–8. doi: 10.1126/science.1150952 [DOI] [PubMed] [Google Scholar]
  • 56.Chandler J, Paolacci G, Peer E, Mueller P, Ratliff KA. Using Nonnaive Participants Can Reduce Effect Sizes. Psychol Sci. 2015;26(7):1131–9. doi: 10.1177/0956797615585115 [DOI] [PubMed] [Google Scholar]
  • 57.Krefeld-Schwalb A, Sugerman ER, Johnson EJ. Exposing omitted moderators: Explaining why effect sizes differ in the social sciences. Proceedings of the National Academy of Sciences. 2024;121. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Kesberg R, Keller J. Donating to the ‘right’ cause: Compatibility of personal values and mission statements of philanthropic organizations fosters prosocial behavior. Personality and Individual Differences. 2021;168:110313. doi: 10.1016/j.paid.2020.110313 [DOI] [Google Scholar]
  • 59.Chapman CM, Spence JL, Hornsey MJ, Dixon L. Social Identification and Charitable Giving: A Systematic Review and Meta-Analysis. Nonprofit and Voluntary Sector Quarterly. 2025. doi: 10.1177/08997640251317403 [DOI] [Google Scholar]
  • 60.Stoffel ST, Chaki B, Vlaev I. Testing a decoy donation incentive to improve online survey participation: Evidence from a field experiment. PLoS One. 2024;19(2):e0299711. doi: 10.1371/journal.pone.0299711 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Francis G. Too good to be true: publication bias in two prominent studies from experimental psychology. Psychon Bull Rev. 2012;19(2):151–6. doi: 10.3758/s13423-012-0227-9 [DOI] [PubMed] [Google Scholar]
  • 62.Franco A, Malhotra N, Simonovits G. Social science. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345(6203):1502–5. doi: 10.1126/science.1255484 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Bernhard Reinsberg

28 Mar 2025

PONE-D-25-07209: Earmarking Donations to Boost Study Participation? Evidence from a Field Experiment (PLOS ONE)

Dear Dr. Raff,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you can see, both reviewers find the paper interesting but also raise serious concerns about the framing and the study setup. My own independent reading confirms their views.

As concerns framing, it is unclear how the outcome of "study participation" contributes to the literature on earmarking, specifically its effect on the likelihood of donation. My recommendation is to frame the paper differently, starting with the puzzle of why participation in research studies remains low, and whether the promise of (earmarked) donations can boost participation. This would strike me as a more effective framing strategy and require reviewing a slightly different literature. 

As concerns the study design, it is unclear 1) whether academics are a relevant sample from which we can generalize; 2) whether some outcomes used are meaningful (especially "consent" -- perhaps better labelled as "survey begun"), and 3) if the treatment worked (manipulation checks and power calculations missing). Please add relevant explanations. Moreover, it appears that the results need to be interpreted differently in that any donations for study participation, not just earmarked donations, fail to be effective. This seems like the more important finding but this baseline effect is not discussed. 

Please submit your revised manuscript by May 12, 2025, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Bernhard Reinsberg, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. You indicated that ethical approval was not necessary for your study. We understand that the framework for ethical oversight requirements for studies of this type may differ depending on the setting and we would appreciate some further clarification regarding your research. Could you please provide further details on why your study is exempt from the need for approval and confirmation from your institutional review board or research ethics committee (e.g., in the form of a letter or email correspondence) that ethics review was not necessary for this study? Please include a copy of the correspondence as an "Other" file.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

4. Please remove all personal information, ensure that the data shared are in accordance with participant consent, and re-upload a fully anonymized data set.

Note: spreadsheet columns with personal information must be removed and not hidden as all hidden columns will appear in the published file.

Additional guidance on preparing raw data for publication can be found in our Data Policy (https://journals.plos.org/plosone/s/data-availability#loc-human-research-participant-data-and-other-sensitive-data) and in the following article: http://www.bmj.com/content/340/bmj.c181.long.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This article looks at the question of whether the opportunity to earmark donations affects study participation among academics at psychology departments. The motivation for this question is based on the finding in the literature that earmarking donations often affects willingness to donate, and the authors are interested in whether this extends to study participation. The manuscript is clearly written, and the experiment is cleanly executed, which make for an interesting read. However, my main comments are to do with whether study participation is a relevant outcome for donations being earmarked versus not, and whether an academic population is necessarily informative for donation behaviour more broadly. The following points expand on this and summarize a few other comments that I hope will help the authors.

• First, it is not clear why the effect of earmarking donations on study participation is an interesting question because earmarking usually relates to donations to charities and international organizations, as the authors also note, whereas study participation of the sort we see in this manuscript is academic. Therefore, it is not clear why this is an interesting and relevant question and whether (academic) study participation is a relevant and useful outcome for studying the effects of earmarking donations.

• Somewhat related to the previous point, are academics generally a good population for studying the effects of earmarking donations? In other words, academics are not usually the main target audience for charitable donation so studying their receptiveness to earmarking versus not is not necessarily informative. I realize the charity in question is relevant to academics but the literature that the manuscript is situated in is broader and the examples of charities given are also very different from the kind of society/charity used in the experiment. The conclusions drawn seem to be very broad, which is why it becomes even more important to consider whether the type of sample used in the experiment is relevant for the kinds of charities the results are alluding to.

• The choice of the charity makes sense given the target audience, but it is unclear whether it is one that participants would care deeply about especially in comparison to charities that are usually studied in this literature. Did the authors perhaps ask questions to gauge whether participants trusted the Society to choose the right causes, spend the money fully et cetera? Similarly, was this a charity where participants likely cared enough about it in general to care about how their donation money would be spent?

• One of my main concerns when reading about the treatment conditions was whether the purposes were sufficiently different for participants to care enough to want to complete the study and be able to earmark. The authors briefly touch upon this in the conclusion, but this may be a more significant factor than presented because, in the real world, charities, especially international ones, are donating to very disparate causes with a lot of differentiation. For instance, when donating to something like UNICEF, you can often choose between different causes and different countries and, therefore, it is much more likely that those who are donating will care about which cause their donation goes to. In comparison, roughly the same set of recipients between three purposes that are all to do with helping academics within the field of psychology is unlikely to evoke the same reaction or interest in differentiating between the various causes. It may also, in general, not be a set of causes that participants have strong feelings about in any case, again especially in comparison with the types of recipients that large charities, whether national or international, help.

• In the analysis, it is unclear why "consent" is a dependent variable of interest. Participants who consented may or may not have reached the treatment page so it’s not obvious what information can be gleaned from this, and therefore it’s a bit odd that this is a main dependent variable in the analysis.

• The authors summarize their power calculation, but it would be helpful to know whether a change of 2 percentage points would be meaningful in this context, as it seems rather low. It would also be helpful to know what the basis is for the 10% assumed participation rate, unless that is simply an example in which case stating that would add clarity.

Reviewer #2: Introduction

The introduction advances the earmarking concept in charitable giving and provokes the primary question of the study: whether earmarking would boost participation in research studies with donation-based incentives. It defines the gap in generalizing donation behavior findings to study participation and introduces the experimental design.

• Sudden shift to the "dark side" of earmarking. The transition can be facilitated by a bridging sentence that logically links the good and bad sides of earmarking.

• Limited theoretical foundation connecting donation motivation and study participation. Draw on self-determination theory or prosocial behavior spillover literature to account for why earmarking effects would generalize to participation.

Literature Review

Literature review documents prior earmarking studies, psychological accounts (control, effect, transparency) and effect of earmarking on donation behavior.

Surface-level analysis of mechanisms (impact, control, transparency). Expand this section to distinguish between these mechanisms in depth and examine which might not be effective in a study participation context.

Failure to report charity-based rewards in survey research and theories of research participation motivation. Integrate literature on incentives of donation in research and motivational aspects in responding to surveys.

The critique lacks a strong concluding sentence that summarizes the gap and clearly conveys the need for this research.

Methodology

The study uses a large-scale field experiment with 6,711 academic researchers randomly assigned to three conditions: Random, Earmarking, and Earmarking with Flexibility. The main dependent measures are completion and consent rates.

• Uncertainty regarding randomization procedure (manual or computerized, stratified by variables like university?). Detail randomization procedure and include balance checks on age, gender, rank.

• No manipulation check for whether or not participants appreciated and saw the earmarking opportunity. Future experiments must include a post-task assessment of manipulation.

• Equivalence in donation targets erodes the salience of the choice. Future experiments must employ more dissimilar donation opportunities.

• No attrition analysis (who withdrew after consent). Report the drop-off rate and assess if attrition by condition differed.

• Findings show no significant material effect of earmarking on completion or consent rates. Equivalence testing suggests any effect is smaller than the pre-specified 2% cut-off (see the sketch after this list). Exploratory click-rate analysis suggests a small, non-significant difference.
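As a rough illustration of the equivalence logic mentioned above, the following minimal Python sketch runs two one-sided z-tests (TOST) for a difference in proportions against a ±2-percentage-point margin. The counts and group sizes are placeholders, not the study's actual data.

import numpy as np
from scipy.stats import norm

def tost_two_proportions(x1, n1, x2, n2, margin=0.02):
    """Test H0: |p1 - p2| >= margin against H1: |p1 - p2| < margin."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled SE
    p_lower = 1 - norm.cdf((diff + margin) / se)  # tests H0: diff <= -margin
    p_upper = norm.cdf((diff - margin) / se)      # tests H0: diff >= +margin
    return diff, max(p_lower, p_upper)

# Placeholder counts, roughly matching 6,711 invitees split across arms:
diff, p_tost = tost_two_proportions(x1=135, n1=2237, x2=140, n2=2237)
print(f"diff = {diff:.4f}, TOST p = {p_tost:.4f}")  # equivalence if p < .05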

Discussion

The discussion interprets null findings, identifies limitations, and specifies implications for the use of earmarking in research volunteer incentives.

• Disjointed limitations section; no flow and connections to theory. Reorganize into a single paragraph with each limitation connected to its effect on results and proposing future research directions.

• Limited discussion of the theoretical explanations for why earmarking may not prompt participation. Incorporate behavioral economic theories or motivation theories describing the distinction between giving and participating.

• The shift to applied implications is abrupt. Add a bridging sentence connecting the theoretical findings to practical recommendations.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Stefanos Balaskas

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Sep 8;20(9):e0331498. doi: 10.1371/journal.pone.0331498.r002

Author response to Decision Letter 1


27 Jun 2025

Response to the Editor

Thank you very much for your detailed and constructive feedback on our manuscript, and for recognizing its merit and relevance. Guided by your comments and those of the reviewers, we have undertaken a thorough revision. Below we (i) summarize the most important changes and (ii) respond point-by-point to the specific issues you raised (original comments are italicized).

As concerns framing, it is unclear how the outcome of "study participation" contributes to the literature on earmarking, specifically its effect on the likelihood of donation. My recommendation is to frame the paper differently, starting with the puzzle of why participation in research studies remains low, and whether the promise of (earmarked) donations can boost participation. This would strike me as a more effective framing strategy and require reviewing a slightly different literature.

Following your suggestion, the introduction now opens with the challenge of raising survey response rates and reviews the main strategies scholars currently use—most prominently: incentives. We highlight the surprising under-exploration of how to optimize donation incentives, even though they may be preferable to personal incentives in certain instances. We therefore position earmarking as a promising means of making donation incentives more effective and outline why this possibility has practical and theoretical importance. We hope you will agree that this revised framing now presents a clearer and more compelling account of our study’s contribution.

As concerns the study design, it is unclear 1) whether academics are a relevant sample from which we can generalize

As noted in our response to Reviewer 1, we completely agree that academics might be a population that is typically not the primary target of charitable campaigns. However, we equally see no theoretical reason to assume that academics differ systematically from the general public on traits that drive prosocial behavior. In fact, academics fit our study well precisely because—as the revised manuscript now spells out—they are relatively affluent, highly time-constrained, and therefore difficult to motivate with small personal incentives. Accordingly, testing donation-based incentives in such a population should provide a meaningful test of their effectiveness.

2) whether some outcomes used are meaningful (especially "consent" -- perhaps better labelled as "survey begun")

Following your suggestion and the comment of Reviewer 1, we have now re-labeled the variable formerly called “consent” to “survey begun” throughout the manuscript, thereby clarifying that the variable captures the point at which a participant clicked the survey link and started the questionnaire by providing consent. All text, tables, and figures have been updated accordingly.

and 3) if the treatment worked (manipulation checks and power calculations missing)

We agree that we cannot be entirely sure that the manipulation worked as intended, a point that was also raised by Reviewer 2. Because the email invitation was the only vehicle for the treatment, a post-survey manipulation check would have strengthened internal validity. We explicitly acknowledge this limitation and recommend its inclusion in future work.

Sensitivity analysis: Because the sampling frame comprised all 6,711 eligible academics, N was fixed ex ante. Instead of an a priori power analysis, we therefore report a sensitivity analysis showing that, at α = .05 and 1 − β = .80, the design could detect a lift of at least 2 percentage points (12% vs. an illustrative 10% baseline).
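For illustration, this sensitivity logic can be sketched in a few lines of Python, assuming a pairwise two-proportion comparison with equal arms and a one-sided test; these allocation details are assumptions made for the sketch, so the computed lift is indicative rather than a reproduction of the manuscript's exact figure.

import numpy as np
from statsmodels.stats.power import NormalIndPower

n_per_arm = 6711 // 3  # ~2,237 invitees per condition (equal split assumed)
baseline = 0.10        # illustrative baseline participation rate

# Smallest Cohen's h detectable at alpha = .05 with power = .80:
h = NormalIndPower().solve_power(
    nobs1=n_per_arm, alpha=0.05, power=0.80, ratio=1.0, alternative="larger"
)
# Invert the arcsine transform to express h as a detectable participation rate:
detectable = np.sin(np.arcsin(np.sqrt(baseline)) + h / 2) ** 2
print(f"h = {h:.3f}, detectable rate = {detectable:.3f}")
# -> roughly a 2-3 percentage-point lift under these assumptions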

Moreover, it appears that the results need to be interpreted differently in that any donations for study participation, not just earmarked donations, fail to be effective. This seems like the more important finding but this baseline effect is not discussed.

While we share your intuition given the overall low response rate, we believe that our study design does not allow for a definitive conclusion on this point, as all conditions included a $5 donation incentive. A proper test of that hypothesis would arguably require a true no-incentive control group. In the present manuscript, we therefore limit our conclusions to the more specific question of whether adding earmarking to a donation incentive improves participation. Our data indicate that it does not.

We again thank you for your constructive comments and guidance. We believe that our revisions have substantially strengthened the manuscript. Naturally, we are happy to make any further improvements you deem helpful.

Response to Reviewer 1

Reviewer #1: This article looks at the question of whether the opportunity to earmark donations affects study participation among academics at psychology departments. The motivation for this question is based on the finding in the literature that earmarking donations often affects willingness to donate, and the authors are interested in whether this extends to study participation. The manuscript is clearly written, and the experiment is cleanly executed, which make for an interesting read. However, my main comments are to do with whether study participation is a relevant outcome for donations being earmarked versus not, and whether an academic population is necessarily informative for donation behaviour more broadly. The following points expand on this and summarize a few other comments that I hope will help the authors.

Thank you for the overall positive evaluation and the helpful suggestions that helped us to improve the manuscript.

• First, it is not clear why the effect of earmarking donations on study participation is an interesting question because earmarking usually relates to donations to charities and international organizations, as the authors also note, whereas study participation of the sort we see in this manuscript is academic. Therefore, it is not clear why this is an interesting and relevant question and whether (academic) study participation is a relevant and useful outcome for studying the effects of earmarking donations.

Thank you for raising this important point. In the revised manuscript, we now more clearly articulate why examining the effect of earmarking—as a specific way of designing donation incentives—on study participation is both relevant and meaningful. Specifically, we explain that donation incentives may be particularly useful in contexts involving highly affluent, time-constrained populations—such as academics—who are less responsive to small personal incentives due to high opportunity costs. In such cases, donation-based incentives offer a viable and potentially more appropriate alternative. We selected academics as our sample precisely because they exemplify such a time-poor population with high opportunity costs for whom donation incentives are particularly appropriate.

Moreover, despite their practical relevance, there is limited systematic evidence on how donation incentives can be optimized. Examining whether a well-documented design feature to increase donations—earmarking—can increase study participation thus addresses a practical need and extends the theoretical framework surrounding the earmarking construct.

• Somewhat related to the previous point, are academics generally a good population for studying the effects of earmarking donations? In other words, academics are not usually the main target audience for charitable donation so studying their receptiveness to earmarking versus not is not necessarily informative. I realize the charity in question is relevant to academics but the literature that the manuscript is situated in is broader and the examples of charities given are also very different from the kind of society/charity used in the experiment. The conclusions drawn seem to be very broad, which is why it becomes even more important to consider whether the type of sample used in the experiment is relevant for the kinds of charities the results are alluding to.

We acknowledge that our study examines a specific population for studying earmarking effects. However, importantly, studies on earmarking do not typically focus on a “typical donor” population either. Recent work has used general-population samples recruited via online platforms such as Prolific and MTurk (Esterzon et al., 2022; Özer et al., 2024) or nationally representative samples from 25 countries (Fuchs et al., 2020). These studies consistently find a positive effect of earmarking across varied demographic and cultural contexts, suggesting that the underlying mechanism is not specific to a particular donor group.

Moreover, while we acknowledge that academics may not be a primary target audience for large-scale fundraising campaigns, there is no theoretical or empirical reason to believe they differ systematically from the general population on dimensions relevant to prosocial behavior. In fact, evidence points in the opposite direction: higher education is associated with increased charitable giving (Nakamura et al., 2025), and both empirical studies (Shaker & Palmer, 2012) and reports from institutional campaigns at U.S. universities (e.g., Anthony, 2023; WCU, 2023; UTSA, 2025) show that faculty and staff frequently engage in internal giving campaigns. These donations often support causes aligned with science, education, and equity—domains closely related to the professional identity of academic psychologists.

Most importantly, however, the primary focus of our study is not on increasing charitable donations, but on testing an intervention—earmarking—aimed at increasing survey response rates. As noted previously, we believe that academics represent a particularly difficult population to motivate for survey participation, and are therefore suited for investigating the research question at hand.

Nevertheless, we acknowledge that our sample has distinctive features that may limit the generalizability of our findings—most notably participants’ extensive experience with experimental manipulations, which can dampen the impact of such manipulations (Chandler et al., 2015; Krefeld-Schwalb et al., 2024). We now acknowledge this limitation more explicitly in the revised limitations section and call for future research to examine whether less experienced or more diverse populations respond differently, providing more generalizable insights into earmarking effects.

• The choice of the charity makes sense given the target audience, but it is unclear whether it is one that participants would care deeply about especially in comparison to charities that are usually studied in this literature. Did the authors perhaps ask questions to gauge whether participants trusted the Society to choose the right causes, spend the money fully et cetera? Similarly, was this a charity where participants likely cared enough about it in general to care about how their donation money would be spent?

Thank you for raising these important points. We acknowledge that our survey did not include explicit measures of participants’ trust in SIPS or the extent to which they valued its mission. While we believe many participants likely had strong confidence in SIPS’s stewardship—given its sustained engagement in the academic psychology community, its strong commitment to open and transparent research practices, and annual conferences that attract 500–1,000 attendees—we recognize that individual perceptions of trust might still vary. Likewise, concerning mission importance, we deliberately chose SIPS, a charity deeply embedded in participants’ professional community, to leverage identity congruence and value alignment, factors that have been shown to increase donation willingness (Chapman et al., 2025; Kesberg & Keller, 2021); however, we cannot be sure that mission importance was high for every participant. We now explicitly highlight this limitation in the revised manuscript and recommend that future research include direct measures of trust and mission importance to clarify their moderating role in earmarking effects.

• One of my main concerns when reading about the treatment conditions was whether the purposes were sufficiently different for participants to care enough to want to complete the study and be able to earmark. The authors briefly touch upon this in the conclusion, but this may be a more significant factor than presented because, in the real world, charities, especially international ones, are donating to very disparate causes with a lot of differentiation. For instance, when donating to something like UNICEF, you can often choose between different causes and different countries and, therefore, it is much more likely that those who are donating will care about which cause their donation goes to. In comparison, roughly the same set of recipients between three purposes that are all to do with helping academics within the field of psychology is unlikely to evoke the same reaction or interest in differentiating between the various causes. It may also, in general, not be a set of causes that participants have strong feelings about in any case, again especially in comparison with the types of recipients that large charities, whether national or international, help.

We appreciate the thoughtful observation that the three SIPS purposes offered by the organization—supporting an open-access preprint platform and two forms of travel grants—are much more closely related than the highly differentiated options donors typically encounter at large humanitarian NGOs. We fully agree that this limited cause differentiation is a genuine limitation of our design when it comes to assessing the external validity of our findings. At the same time, that very homogeneity provides a useful feature: because the choices are largely homogeneous, any increase we observe can likely only stem from the perceived impact pathway of earmarking, and not from strong pre-existing cause preferences. In other words, our study offers a conservative test of the mechanism. If earmarking produces a measurable uplift even when the available options are highly similar, it should work at least as well—and plausibly better—when donors can choose among more differentiated causes. To make this rationale explicit, we have expanded the Discussion to describe our design as a conservative test and to call for future research that systematically varies cause differentiation as a potential moderator of the earmarking effect.

• In the analysis, it is unclear why "consent" is a dependent variable of interest. Participants who consented may or may not have reached the treatment page so it’s not obvious what information can be gleaned from this, and therefore it’s a bit odd that this is a main dependent variable in the analysis.

Thank you for highlighting the point about the “consent” variable. We recognize that this confusion arose from how we labeled and described this outcome variable. That label may have been misleading, and we appreciate the opportunity to clarify both the variable’s role and how it fits within our experimental design.

In our study, treatment exposure occurred in the invitation email, which participants received before deciding whether to click through to the survey. As such, participants who chose not to enter the survey had already been exposed to the treatment condition. Therefore, the variable we initially called “consent” does not represent pre-treatment baseline behavior or a decision uninfluenced by condition. What we were capturing with that variable was whether participants clicked on the survey link and began the survey (indicated consent on the first page of the study). As suggested by the editor, a more accurate and informative label for this outcome is “survey begun.” We have revised the manuscript accordingly, updating all references to the variable across the text, tables, and figures to reflect this clarification.

• The authors summarize their power calculation, but it would be helpful to know whether a change of 2 percentage points would be meaningful in this context, as it seems rather low. It would also be helpful to know what the basis is for the 10% assumed participation rate, unless that is simply an example in which case stating that would add clarity.

Attachment

Submitted filename: Response to Reviewers.docx

pone.0331498.s003.docx (47.3KB, docx)

Decision Letter 1

Bernhard Reinsberg

9 Jul 2025

PONE-D-25-07209R1

Earmarking donations to boost study participation? Evidence from a field experiment

PLOS ONE

Dear Dr. Raff,

Thank you for submitting your manuscript to PLOS ONE. We have reached a decision of "minor revision" (without external review). R1 is satisfied with the revisions, as they address the points raised. R2 recommends a minor revision, asking you to do a better job in interpreting the substantive effects of the treatment and to discuss the wider implications of whether earmarking is overall a good strategy to boost donations, especially given a host of literature on the negative performance effects of earmarking.

We agree with R2's points, even though we are also aware that some of these asks fall outside the scope of your analysis. Please make a good-faith effort to address these points. We would recommend you take a look at these studies on the effectiveness of earmarking, which should help contextualize the findings.

https://doi.org/10.1086/736339

https://doi.org/10.1111/rego.12632

https://doi.org/10.1017/S0020818323000085

Please note: the revised version will not be sent back to the reviewers. We hope this will accelerate the decision-making process.

Please submit your revised manuscript by Aug 23 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Bernhard Reinsberg, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Partly

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #3: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: After careful consideration, I believe the authors have adequately addressed my comments and concerns.

Reviewer #3: This article presents findings from a survey experiment on whether allowing participants to earmark charitable contributions given as an incentive for participation increases the response rate. Counter to expectations derived from literature on charitable giving, it finds that it does not. The article is clearly written and makes a succinct point. I think it is worthy of publication. However, I would suggest some further revisions to clarify the findings as well as their contribution and implications.

Most importantly, while the brief literature review covers key explanations for why earmarking should encourage charitable giving, it does so in a theoretical register. To help interpret the findings, the authors should provide substantive discussion of the findings of this literature, including the type of effects and their magnitude. For example, the article states that earmarking encourages charitable giving. However, it would be helpful to clarify whether it incentivises people to give who otherwise would not, or if it increases the amount they give. Relatedly, the authors should provide evidence on the magnitude of the effect for charitable giving found in previous studies to provide a sense of what effects we might be looking for in this experiment. Importantly, there must be some minimum level of effectiveness of earmarking for the research to be relevant, so that should be clearly stated. (This is the most important mechanism to provide substantive insights on, but it would be helpful to also provide brief substantive discussion of the other mechanisms highlighted in the literature review).

As a second point, the article claims that there is no evidence of harms of earmarking, and it might yield benefits. I was sceptical of this claim for two reasons: (1) As I understand it, earmarking is bad for charities, as it prevents them from using funds in the most effective way; and (2) presumably there is some additional cost to the researchers, even if marginal, to manage this data. The claims here should be clarified accordingly.

Third, I wondered if the authors might expand a bit more on what they think explains the lack of results. This may be easier to clarify once the expectations are more clearly stated (by drawing out the substantive content in the literature review, as noted in point 1).

Minor points:

- The response rates of this study and other studies of 9-10% are first mentioned in the analysis strategy. I wondered if this could be mentioned earlier to further motivate the study.

- The first sentence of the abstract: “Charitable donations are often the best way to incentivise study participation” – requires citation and/or evidence, or should be toned down.

- It would be helpful to clarify throughout that this is about increasing response rates to surveys.

- The article states that “affluent or time-poor individuals may face opportunity costs that no realistic cash payment within the study budget can offset [15, 16]. Under such circumstances, even modest personal incentives are unlikely to attract these participants.” But then presumably donation-based incentives also wouldn’t impact them? If there is evidence that people would be more persuaded by a donation than direct payment, stating it more explicitly would be important. Otherwise, I think the justification can rest on the idea that sometimes it is not appropriate to pay respondents and that it can be difficult (especially with data protection/management requirements, and for online surveys where you would not be able to hand cash directly to respondents).

- The article mentions the importance of trust to earmarking – but I wondered if it is also possible that if people really trust the organisation, they don’t care about earmarking because they figure the organisation can decide what to do with the resources more effectively than they can.

- On the ethics statement, clarify the language: the study was not anonymous; rather, no identifying data were collected and respondents remained anonymous.

- I agree with one of the previous reviewers’ comments that a brief reflection on the possibility that the $5 donation had no effect on participation at all is worth mentioning – even though of course the study did not test this, pointing it out as a pathway for future study may be valuable.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Stefanos Balaskas

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Sep 8;20(9):e0331498. doi: 10.1371/journal.pone.0331498.r004

Author response to Decision Letter 2


14 Aug 2025

Response to the Editor

Thank you for submitting your manuscript to PLOS ONE. We have reached a decision of "minor revision" (without external review). R1 is satisfied with the revisions, as they address the points raised. R2 recommends a minor revision, asking you to do a better job in interpreting the substantive effects of the treatment and to discuss the wider implications of whether earmarking is overall a good strategy to boost donations, especially given a host of literature on the negative performance effects of earmarking.

Thank you for the positive feedback and for giving us the opportunity to revise the manuscript and address the remaining points raised by Reviewer 3. Guided by your comments and those of the reviewer, we have undertaken a further revision. Below we respond point-by-point to the specific issues that have been raised (original comments are italicized).

We agree with R2's points, even though we are also aware that some of these asks fall outside the scope of your analysis. Please make a good-faith effort to address these points. We would recommend you take a look at these studies on the effectiveness of earmarking, which should help contextualize the findings.

https://doi.org/10.1086/736339

https://doi.org/10.1111/rego.12632

https://doi.org/10.1017/S0020818323000085

Please note: the revised version will not be sent back to the reviewers. We hope this will accelerate the decision-making process.

Thank you for providing these references. Combined with Reviewer 3’s request to consider the potential harms of earmarking, this prompted us to examine the issue more closely and to qualify our recommendation regarding its use. In the previous version, we had already acknowledged that earmarking may be suboptimal for charities because it reduces their flexibility in allocating funds. We now also highlight evidence from Heinzel et al. (2023) and Heinzel et al. (2025a, b), showing that earmarking can be associated with worse project performance in the context of international development organizations. Accordingly, in the implications section we now state that earmarking should only be implemented when its potential advantages clearly outweigh these downsides.

Response to Reviewer #3

This article presents findings from a survey experiment on whether allowing participants to earmark charitable contributions given as an incentive for participation increases the response rate. Counter to expectations derived from literature on charitable giving, it finds that it does not. The article is clearly written and makes a succinct point. I think it is worthy of publication. However, I would suggest some further revisions to clarify the findings as well as their contribution and implications.

Most importantly, while the brief literature review covers key explanations for why earmarking should encourage charitable giving, it does so in a theoretical register. To help interpret the findings, the authors should provide substantive discussion of the findings of this literature, including the type of effects and their magnitude. For example, the article states that earmarking encourages charitable giving. However, it would be helpful to clarify whether it incentivises people to give who otherwise would not, or if it increases the amount they give. Relatedly, the authors should provide evidence on the magnitude of the effect for charitable giving found in previous studies to provide a sense of what effects we might be looking for in this experiment. Importantly, there must be some minimum level of effectiveness of earmarking for the research to be relevant, so that should be clearly stated. (This is the most important mechanism to provide substantive insights on, but it would be helpful to also provide brief substantive discussion of the other mechanisms highlighted in the literature review).

We appreciate your positive feedback on our manuscript. We also thank you for highlighting the importance of specifying the nature and magnitude of earmarking effects in prior research. We have revised the manuscript to address these points more explicitly:

First, we have more clearly specified the nature of earmarking effects in charitable giving. The revised manuscript now explicitly states that, while earmarking could theoretically increase both the willingness to donate and the amount donated, empirical findings more consistently show effects on willingness to donate. We argue that it is precisely this increased willingness to donate rather than increases in donation amounts that positions earmarking as a potentially valuable intervention in the context of incentivizing survey participation.

Second, regarding your suggestion to discuss the magnitude of earmarking effects from prior studies, we agree that setting clear expectations is important. However, we are cautious about making direct comparisons because previous research has examined different outcomes (e.g., webpage engagement in Costello & Malkoc, 2022; willingness to donate in Fuchs et al., 2020). Since our study is the first to test the effect of earmarking on study participation, we considered it more appropriate to base expectations on effect sizes from studies that directly target this outcome—thereby avoiding an apples-to-oranges comparison between willingness to donate and willingness to participate in a study.

In this regard, meta-analyses of web- and electronic-based survey research report that study participation can increase substantially—by odds ratios ranging from 1.39 to 2.43—when financial incentives are offered (David & Ware, 2014; van Gelder et al., 2018; Edwards et al., 2023). Assuming a baseline participation rate of 10%, these odds ratios translate into increases of roughly 4 to 12 percentage points, illustrating that participation can respond substantially to changes in incentive structures. Based on this, we set a 2-percentage-point increase (from a 10% baseline) as the minimum effect size that would render earmarking practically worthwhile in our context. From the researcher’s perspective, an effect of this magnitude could potentially justify the additional organizational effort required to implement earmarked donations, including establishing a collaboration with a charity, ensuring proper fund allocation, and managing follow-up communication. From the charity’s perspective, the expected increase in donations could potentially outweigh the downsides of earmarking. This rationale is now explicitly reflected in the revised manuscript.
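For reference, the odds-ratio arithmetic above can be reproduced with a short Python sketch; the 10% baseline is the illustrative figure used in the text, not an empirical estimate.

def rate_after_or(baseline_rate, odds_ratio):
    """Apply an odds ratio to a baseline probability and return the new rate."""
    odds = baseline_rate / (1 - baseline_rate)
    new_odds = odds_ratio * odds
    return new_odds / (1 + new_odds)

baseline = 0.10
for oratio in (1.39, 2.43):  # range reported in the cited meta-analyses
    new_rate = rate_after_or(baseline, oratio)
    print(f"OR = {oratio}: rate = {new_rate:.3f} (+{(new_rate - baseline) * 100:.1f} pp)")
# OR = 1.39 -> ~13.4% (+3.4 pp); OR = 2.43 -> ~21.3% (+11.3 pp),
# consistent with the roughly 4-12 percentage-point range above.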

As a second point, the article claims that there is no evidence of harms of earmarking, and it might yield benefits. I was sceptical of this claim for two reasons: (1) As I understand it, earmarking is bad for charities, as it prevents them from using funds in the most effective way; and (2) presumably there is some additional cost to the researchers, even if marginal, to manage this data. The claims here should be clarified accordingly.

We appreciate this point and agree that the potential downsides of earmarking—such as reduced flexibility for charities and additional administrative burdens for researchers—are important to acknowledge. Our original statement that earmarking “does not harm” referred specifically to participation rates. We have now made this explicit and qualified our recommendation by noting that implementing earmarking should always be weighed against the downsides identified in prior research (e.g., reduced flexibility, worse project performance) and the added administrative costs for researchers; it should therefore be adopted only when its potential advantages clearly outweigh these downsides.

Third, I wondered if the authors might expand a bit more on what they think explains the lack of results. This may be easier to clarify once the expectations are more clearly stated (by drawing out the substantive content in the literature review, as noted in point 1).

As we now make clear, thanks to your earlier suggestion, our expectation was that earmarking would motivate more people to participate. One possible reason it did not do so in this context is a difference in psychological ownership of the donated funds between donations made with one’s own money and those made through externally funded donation incentives. While we had implied this previously, we now make it more explicit in the future research section:

“A promising direction for future studies is to examine the role of psychological ownership in differentiating donations made with one’s own money from externally funded donation incentives, as theorized before, which could be a reason why earmarking in this particular instance did not prove effective.”

Minor points:

- The response rates of this study and other studies of 9-10% are first mentioned in the analysis strategy. I wondered if this could be mentioned earlier to further motivate the study.

We appreciate the suggestion. In our view, the reference to response rates from other studies serves primarily to provide a plausible baseline for the sensitivity analysis, rather than to motivate the study itself. For this reason, we believe it fits most naturally within the analysis strategy section. If you or the editor still believe this information should appear earlier, we would be happy to try to integrate it accordingly.

- The first sentence of the abstract: “Charitable donations are often the best way to incentivise study participation” – requires citation and/or evidence, or should be toned down.

We thank you for this comment and have revised the first sentence of the abstract to state more cautiously that, in many cases, charitable donations can be the most suitable form of incentivization. This is supported by reasoning presented later in the manuscript, where we note that budgetary, ethical, or participant-specific constraints may at times preclude the use of personal incentives.

- It would be helpful to clarify throughout that this is about increasing response rates to surveys.

Following your suggestions, we have revisited the manuscript to ensure that our focus on survey response rates is consistently clear. While the abstract and introduction already made this focus explicit, we have further reinforced it by adding the phrase “in attracting respondents for surveys” when describing prior research and by specifying “to increase response rates” when discussing the importance of understanding how to implement donation incentives optimally. We hope that these adjustments make the objective of the paper clearer from the outset.

- The article states that “affluent or time-poor individuals may face opportunity costs that no realistic cash payment within the study budget can offset [15, 16]. Under such circumstances, even modest personal incentives are unlikely to attract these participants.” But then presumably donation-based incentives also wouldn’t impact them? If there is evidence that people would be more persuaded by a donation than direct payment, stating it more explicitly would be important. Otherwise, I think the justification can rest on the idea that sometimes it is not appropriate to pay respondents and that it can be difficult (especially with data protection/management requirements, and for online surveys where you would not be able to hand cash directly to respondents).

Thank you for this comment and for prompting this clarification. Indeed, the findings by Khan et al. (2020) show that when the amount that can be paid per person is small, donation incentives can be more motivating than direct payments. While both incentive types may have limited cognitive valuation due to the low amount, donations add an affective component that can make them more compelling. We now make this mechanism explicit in the manuscript.

- The article mentions the importance of trust to earmarking – but I wondered if it is also possible that if people really trust the organisation, they don’t care about earmarking because they figure the organisation can decide what to do with the resources more effectively than they can.

Thank you for this thoughtful observation. We already note in the manuscript that trust is likely of lower importance in the context of donation incentives for study participation, given the relatively small amounts involved and the lower psychological ownership of the donated funds. This suggests that the trust-based mechanism through which earmarking can increase donations may be weaker in this setting. As you point out, it is also possible that when trust in the organization is high, offering earmarking may not increase donations via this pathway because participants believe the organization can allocate resources more effectively than they can. However, even when trust does not play a role in a particular instance, this does not necessarily mean that earmarking would have no positive effect. Arguably, the main reason for its motivational potential lies not in increasing trust, but in its ability to satisfy the basic psychological needs for autonomy and competence, as explained by cognitive evaluation theory.

- On the ethics statement, clarify the language: the study was not anonymous; rather, no identifying data were collected and respondents remained anonymous.

Thank you for pointing this out. We have removed the wording that described the study as “anonymous,” as the manuscript already specifies that no identifying data were collected.

- I agree with one of the previous reviewers’ comments that a brief reflection on the possibility that the $5 donation had no effect on participation at all is worth mentioning – even though of course the study did not test this, pointing it out as a pathway for future study may be valuable.

Again, thank you for this comment. It prompted us to revise the first limitation, which concerns the low donation amount. While we already note that the $5 incentive may have been too small, we now also connect this point with a specific characteristic of donation incentives identified earlier, namely lower psychological ownership of the funds. It may be that, due to this lower psychological ownership, earmarking within a donation incentive only has a measurable effect at higher amounts (compared to the charitable donation scenario).

References:

Costello, J. P., & Malkoc, S. A. (2022). Why Are Donors More Generous with Time Than Money? The Role of Perceived Control over Donations on Charitable Giving. The Journal of Consumer Research, 49(4), 678–696. https://doi.org/10.1093/jcr/ucac011

David, M. C., & Ware, R. S. (2014). Meta-analysis of randomized controlled trials supports the use of incentives for inducing response to electronic health surveys. Journal of Clinical Epidemiology, 67(11), 1210–1221. https://doi.org/10.1016/j.jclinepi.2014.08.001

Edwards, P. J., Roberts, I., Clarke, M. J., DiGuiseppi, C., Woolf, B., & Perkins, C. (2023). Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews, 2023(11), MR000008. https://doi.org/10.1002/14651858.MR000008.pub5

Fuchs, C., de Jong, M. G., & Schreier, M. (2020). Earmarking Donations to Charity: Cross-cultural Evidence on Its Appeal to Donors Across 25 Countries. Management Science, 66(10), 4820–4842. https://doi.org/10.1287/mnsc.2019.3397

Heinzel, M., Cormier, B., & Reinsberg, B. (2023). Earmarked Funding and the Control–Performance Trade-Off in International Development Organizations. International Organization, 77(2), 475–495. https://doi.org/10.1017/S0020818323000085

Heinzel, M., Reinsberg, B., & Zaccaria, G. (2025a). Core funding and the performance of international organizations: Evidence from UNDP projects. Regulation & Governance, 19(3), 957–976. https://doi.org/10.1111/rego.12632

Heinzel, M., Reinsberg, B., & Siauwijaya, C. (2025b). Understanding Resourcing Trade-offs in International Organizations: Evidence from an Elite Survey Experiment. The Journal of Politics. https://doi.org/10.1086/736339

Khan, U., Goldsmith, K., & Dhar, R. (2020). When Does Altruism Trump Self-Interest? The Moderating Role of Affect in Extrinsic Incentives. Journal of the Association for Consumer Research, 5(1), 44–55. https://doi.org/10.1086/706512

van Geld

Attachment

Submitted filename: Response_to_Reviewers_auresp_2.docx

pone.0331498.s004.docx (28KB, docx)

Decision Letter 2

Bernhard Reinsberg

18 Aug 2025

Earmarking donations to boost study participation? Evidence from a field experiment

PONE-D-25-07209R2

Dear Dr. Raff,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Congratulations on a fine piece of research!

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging in to Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Once again, congratulations on your fine contribution, and thank you for publishing with PLOS ONE.

Kind regards,

Bernhard Reinsberg, Ph.D

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

All comments addressed.

Reviewers' comments:

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

S1 Table. Characteristics of participants.

Participants who completed the study (n = 406), by experimental condition.

(DOCX)

pone.0331498.s001.docx (17.8KB, docx)

Data Availability Statement

The study materials, supplementary analyses, and data are publicly available from the Open Science Framework repository (https://osf.io/ewz6v/).

