Abstract
Ethics in social science experimentation and data collection are often discussed but rarely articulated in writing as part of research outputs. Although papers typically reference human subjects research approvals from relevant institutional review boards, most recognize that such boards do not carry out comprehensive ethical assessments. We propose a structured ethics appendix to provide details on the following: policy equipoise, role of the researcher, potential harms to participants and nonparticipants, conflicts of interest, intellectual freedom, feedback to participants, and foreseeable misuse of research results. We discuss each of these, along with some of the norms and challenging situations each presents. We believe that discussing such issues explicitly in appendices of papers, even if briefly, will serve two purposes: more complete communication of ethics can improve the discussion of individual papers, and it can clarify and improve the norms themselves.
Keywords: ethics, randomized controlled trials, primary data collection, surveys, methodology
Social science researchers engaged in primary data collection often consider a range of ethical issues during planning but rarely discuss them in published articles. We believe that building explicit steps for considering and discussing ethical issues into the research process can lead to better research, better communication about that research, and thus greater impact. We propose a structured appendix to accompany social science papers that report on primary data collection efforts.
We believe that Sen’s (1) capability approach provides a useful framework to inform a structured appendix, as it focuses on people’s opportunities to set and achieve goals for themselves. This framework, along with Rawls (2), is familiar to most social scientists and applies to both participants and affected nonparticipants. We acknowledge the diversity of life goals across people, but there is a higher-order common requirement: the capabilities to set and pursue those goals. Ethical research should be designed to generate socially valuable information that advances these basic capabilities, while protecting the basic interests of participants. This requires that research protocols not only be procedurally ethical as per international research standards but also ethical after considering specific contextual factors such as cultural, gender, and local institutional norms. London (3) starts from a principle of equal concern, that every participant is the moral equal of all members of the community, and derives several operational criteria that serve as guideposts for this paper and proposal. These include the avoidance of unnecessary risk, special concern for the basic interests of participants, and “social consistency.” The latter requires that the sum of incremental risks to participants, minus their direct benefits (call this the net risk), must not be greater than the net risks faced by those in other socially sanctioned activities, like emergency workers. London (3) acknowledges that this calculation is difficult, but “the moral goal of such judgements is clear—the point is to ensure that there is a publicly available justification for the claim that each study participant is treated as the moral equal of every other participant … and of the community members in whose name research is conducted.” These criteria motivate our proposal that authors include a structured ethics appendix in working papers and published online appendices. Such information will highlight the substance of the ethical consideration underlying the study and could also be used in grant applications. We provide a framework below as a starting point for guidance.
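As a schematic illustration (the notation is ours, not London’s, and the quantities are at best roughly estimable), let $r_i$ denote the incremental risk the study imposes on participant $i$, $b_i$ the direct benefit to that participant, and $\bar{v}$ the net risk borne by participants in a comparable socially sanctioned activity. Social consistency can then be read as requiring

$$ r_i - b_i \;\le\; \bar{v} \quad \text{for each participant } i, $$

with the understanding, per London (3), that this is a standard of public justification rather than a precise calculation.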
Institutional review boards (IRBs), when present, aim to protect research participants but ultimately examine a narrow set of ethics issues. There are no “ethics review boards” with a mandate over all ethical issues in research. Indeed, a broad range of important ethical issues pertaining to primary data collection research and especially randomized controlled trials (RCTs) are outside the purview of IRBs. Such issues, often left to self-regulation and peer review during grant applications and publications, deserve more thorough reflective consideration. Active consideration and discussion of these matters can build more consistent norms, improve the philosophical consistency of the norms, and also improve adherence to those norms.
We propose a “structured ethics appendix” for social science researchers engaged in primary data collection (see SI Appendix). Social science writing style is often fairly terse on methods issues, in particular information relevant for an understanding of ethical issues. Many journals require an IRB approval number and nothing more. This leaves readers to fill in the unstated information with potentially incorrect assumptions. The short-form nature of some public discussions can then exacerbate misunderstandings. The proposed appendix is not a checklist. Instead, it is a structured, but brief, set of nine questions which hopefully provide researchers a concise and consolidated platform to spark thoughtful reflection and consideration of ethical issues before a project begins and provide relevant information to readers of completed papers.
The topics discussed are 1) policy equipoise and scarcity, 2) role of researchers with respect to implementation, 3) potential harms to participants or nonparticipants from the interventions or policies, 4) potential harms to research participants or research staff from data collection (e.g., surveying, privacy, and data management) or research protocols (e.g., random assignment), 5) financial and reputational conflicts of interest, 6) intellectual freedom, 7) feedback to participants or communities, 8) foreseeable misuse of research results, and 9) other ethics issues.
We are writing as economists with experience in primary data collection and in the design and implementation of RCTs, not as trained ethicists. Our discussion is undoubtedly, albeit unintentionally, driven by issues that have arisen in our own work. The questions were selected after several rounds of community feedback, from social scientists involved in implementing interventions and from a wider online audience, and we incorporated several suggestions in this version. We intend for the appendix to be a means of prompting discussions too often left in the background and offer this as a “living document” that we and others can improve through use and feedback. This article accompanies articles on a set of six harmonized RCTs of community monitoring of common pool resources (CPRs). The trials randomly assigned the introduction of community monitoring to communities (4). Community monitoring is a form of governance that aims to improve CPR management—in these trials, for groundwater, surface water, and forests. The community monitoring was typically introduced by nongovernmental organizations. We have no reason to believe the researchers failed to adhere to strong ethical practices. However, this kind of collaboration between implementing entities and researchers does make salient many of the points we raise, points that we believe are better spoken about explicitly. These interventions introduce new governance institutions and thus raise ethics questions that deserve explicit discussion: potential harm to both participants and nonparticipants; the role of the researchers vis-à-vis the intervention and its implications for informed consent; and the multiple roles of researchers in organizing, reporting on, and providing feedback about the interventions themselves.
Policy Equipoise and Scarcity
Is There Policy Equipoise? That Is, Is There Uncertainty Regarding Participants’ Net Benefits from Each Arm of the Study Relative to the Other Arms and to the Best Possible Policy to Which Participants Could Have Access? If Not, Ethical Randomization Requires Two Conditions Related to Scarcity: 1) Was There Scarcity, i.e., Did the Inclusion of Multiple Arms Change the Expected Aggregate Value of the Programs Delivered, and 2) Do All Ex-ante Identifiable Participants Have Equal Moral or Legal Claims to the Scarce Programs?
Freedman (5) argues that the therapeutic obligation of doctors generates a “clinical equipoise” ethical requirement for medical trials: The expert community must not have certainty that any arm in a trial is better therapeutically than any other arm.* For social science in particular, the word “certainty” can render such a requirement toothless: even theories with strong empirical evidence carry some uncertainty regarding their applicability in a new context. Furthermore, in many cases one treatment arm clearly dominates another from the perspective of the participants, yet the better treatment arm is not viable as a policy for all, either due to scarcity or other practical or political issues; a study testing the returns to a large cash transfer is a perfect example. Yet evidence from the “better” treatment arm still serves an important social value, even if limited to the generation of abstract knowledge. This renders the concept of clinical equipoise ill-suited to social science experiments.
Is there a similar obligation for social scientists organizing or participating in RCTs? MacKay (9, 10), building on the same principles as London (3), argues persuasively that a sufficient requirement for ethical randomization is “policy equipoise.” We have adopted that view here.
Policy equipoise builds on clinical equipoise but considers resource trade-offs explicitly. Two arms of an RCT are in equipoise when there is meaningful uncertainty about the efficacy of each arm in achieving the relevant outcomes of the study for all participants. Policy equipoise requires that all arms of the study be in equipoise with each other and with the best “proven, morally and practically attainable and sustainable” alternative policy for achieving improvements in the relevant outcomes of the study (9). Each of those words in quotes carries weight. “Proven” implies scientific consensus (and not, for example, adherence to any one methodology for creation of evidence). Clearly, this is not a strictly binary concept, but rather about the degree of certainty of relevant experts and stakeholders, and what is “proven” in one time and place may not be so in another. This also relates to what is meant by “meaningful uncertainty”: naturally, no amount of social science research will render judgements that are perfectly certain; rather, by “meaningful uncertainty” we mean a degree of uncertainty deemed important and likely enough as to make reasonable and informed stakeholders disagree on the optimal policy. “Morally … attainable” requires that the alternative policy be consistent with individuals’ rights and liberties. “Practically attainable” means that the government or implementing agency has the resources to put the alternative policy into effect. “Sustainable” means that the government or implementing agency could maintain the policy, “given a just system of resource procurement and allocation” (9).
The policy equipoise requirement is a sufficient condition for randomization, but it is quite strict. First, no participant in any arm of an RCT can be predicted to be better or worse off (in an ethically relevant way, that is, with respect to their capabilities) than he or she would be under any of the other arms of the study. Second, this equipoise must extend to the appropriately defined counterfactual policy, which is whatever policy would have occurred in the absence of the research. Often this counterfactual policy is the current status quo, but that need not be the case. There may be an alternative policy that is morally and practically attainable and for which there is a consensus on its effectiveness (see the hypothetical Progresa case below), or it could be that the current status quo is not itself sustainable. We emphasize a key distinction, made in MacKay (9), with which we agree: what matters is not what the actual counterfactual policy is but rather what it could be.
If policy equipoise is not satisfied, then randomization can be justified if no participant can be predicted to be worse off in any arm of the study than under the counterfactual policy, and if there is scarcity of the resources required for the arms in which participants are better off. A treatment arm may be unambiguously the “best” for recipients but not viable as a systematic policy for all. The simplest example here, relevant in social science, would be large transfer programs. It is unethical to withhold this policy/treatment unless it is scarce (10).† Budget constraints can make it impossible to reach everyone with a better costly intervention, in which case it may be permissible to allocate access randomly, if the second scarcity condition is also met, that is, if it is not known a priori who should receive that intervention. If all participants have equal claims to a scarce intervention then randomization can be ethical. If some participants have stronger claims (for example, if there is local knowledge that the aid would permit some individuals to set and pursue their life goals better—perhaps because they are especially poor and constrained in ways that the aid could relax), then these participants should have priority (11), rather than being randomized out of access.‡
We highlight one further component of the above definition of policy equipoise: “achieving improvements in the relevant outcomes of the study.” An RCT could, for example, be focused on testing alternatives for reducing child malnutrition. A treatment arm could be cash transfers. If we were to “know” that cash transfers create benefits beyond reducing child malnutrition (a reasonable conjecture, albeit one with uncertainty regarding the types and magnitudes of impacts), should the nonchild malnutrition outcomes be considered when examining if any treatment arm is viably “better” than all of the others? Doing so would require a full-blown multidimensional welfare analysis, weighing all ethically relevant outcomes. We argue that it is typically sufficient to constrain the comparison of treatment arms (and counterfactual policy) to the problem at hand (e.g., paths to improving the primary outcomes of the study). Realistically, anything that dominated in all dimensions with certainty would likely not be “practically attainable.” We do see merit in this discussion, though.
Randomization into different treatment arms, therefore, is unethical if two conditions are met: first, there is not policy equipoise; second, either the preferred arm does not require scarce resources, or not all participants have equal claims to those scarce resources. Some examples can clarify the requirements of policy equipoise and scarcity.
Karlan et al. (13) implement an RCT that compares a treatment arm in which farmers are offered cash grants of approximately $400 to a control group. This RCT violates policy equipoise, because there is no meaningful uncertainty among experts that receiving an offer of a cash grant is better for the recipient than not receiving such an offer. However, randomization remains ethical because both scarcity conditions are satisfied. Providing cash grants to farmers predictably improves their welfare, so the cash grant arm is better than the counterfactual policy of no cash grants. However, there is no consensus that providing cash grants to farmers as a policy is practically attainable and sustainable, given its cost; this meets the first scarcity requirement. Nor is there a consensus about which farm households have stronger claims over receipt of cash grants; this meets the second scarcity requirement, and therefore randomization of access to the grants is acceptable.
Similarly, Glewwe et al. (14) violates policy equipoise—there is no uncertainty that eyeglasses benefit those whose eyesight can be corrected by them. However, there is no consensus that eyeglass provision to school children is a cost-effective policy for improving school achievement. Therefore, it was not clear a priori that there was an alternative, sustainable policy that dominates the control arm, satisfying the first scarcity requirement. Randomization occurred at the township level, and there was no a priori knowledge of which township would benefit most from the intervention, satisfying the second scarcity requirement.
Policy equipoise would be violated if there is an expert consensus that an arm of a trial is dominated by another policy that is proven, attainable, and sustainable. A hypothetical but realistic example of a violation of policy equipoise would occur if a government with sufficient resources for a national program agreed to withhold a Progresa-inspired conditional (on school enrollment) cash transfer (CCT) from some households for a pure control group in an RCT (10).§ In this hypothetical case, the first scarcity requirement also fails, because the government is presumed to have sufficient resources to make the CCT available to more households, so this randomization is unethical.
Consider a potential research project studying the effect of mobile money transfers on food security during a drought with a control group receiving nothing and a treatment group receiving the transfers. While policy equipoise is violated because the transfers can be expected to improve food security, the first scarcity requirement is met due to the budget constraint of the project. However, if the researchers know a priori through, say, remote sensing data, which households’ farms were particularly affected by the drought, the second scarcity requirement may be violated. The most affected farmers should be prioritized for transfers, rather than randomized into treatment or control.¶ Thus, such a proposed randomization raises an ethical red flag.
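To make the probabilistic variant described in footnote ¶ concrete, the sketch below shows one way such prioritized randomization might be implemented; the logistic weighting of a drought-severity score, and all names and parameters, are our illustrative assumptions rather than a prescription.

```python
import numpy as np

rng = np.random.default_rng(2023)

def prioritized_assignment(severity, base_prob=0.5, slope=1.5):
    """Assign treatment with probability increasing in estimated need.

    Every household keeps a strictly positive probability of either arm,
    so impacts remain estimable across the severity distribution, while
    the most drought-affected households are the most likely to be treated.
    """
    z = (severity - severity.mean()) / severity.std()        # standardize the need proxy
    logit = np.log(base_prob / (1 - base_prob)) + slope * z  # tilt assignment toward high need
    p_treat = 1 / (1 + np.exp(-logit))
    treated = rng.random(severity.size) < p_treat
    return treated, p_treat

severity = rng.gamma(shape=2.0, scale=1.0, size=500)  # stand-in for remote sensing scores
treated, p_treat = prioritized_assignment(severity)
# In the analysis, observations would be weighted by 1/p_treat (treated)
# and 1/(1 - p_treat) (control) to recover unbiased impact estimates.
```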
Researcher Roles with Respect to Implementation
Are Researchers “Active” Researchers, i.e., Did the Researchers Have Direct Decision-Making Power over Whether and How to Implement the Program? If Yes, What Was the Disclosure to Participants and Informed Consent Process for Participation in the Program? Providing IRB Approval Details May Be Sufficient but Further Clarification of Any Important Issues Should Be Discussed Here. If No, i.e., Implementation Was Separate, Explain the Separation.
In social science, the role of the researcher is quite varied, and this has important implications for what may constitute ethical research. On one end of the spectrum, the researcher is merely the evaluator, with neither influence over nor responsibility for any of the interventions. In such an instance, understanding the ethics of the intervention may even be the motivation for the research. Then, with more evidence on deleterious effects of the intervention, perhaps such policies can be altered or eliminated.#
On the other end of the spectrum, the researcher is the implementer (“active” in our classification). In such cases the researcher, for example, secured funding, hired staff, designed the intervention, and directly implemented it. Here it goes without saying that the researcher is responsible for the ethics of the intervention itself. The researcher is changing the world not merely through the dissemination of results but deliberately through the research process itself.
Even when the researcher is “active,” however, it may still be “ethical” to implement arms which raise valid ethical concerns. However, certain criteria ought to be met. First, there must not be a consensus among the relevant expert community about the outcomes that raise the ethical concerns (and this may be the aim of the study). Second, the policy or intervention must be commonplace enough that assessing the intervention is of high social value. In these instances, the onus is on the researcher to establish “policy equipoise” or explain why scarcity limits the extension of expected beneficial arms to the entire population as discussed in Policy Equipoise and Scarcity and MacKay (10).
Two recent examples in social science highlight the importance of this distinction and also point to the complicated nature of defining the researcher’s role in some instances. In Bryan et al. (16), researchers collaborated with a nonprofit organization in the Philippines which ran a 4-mo program that included various secular activities (e.g., savings groups, livelihood promotion, and health education) alongside weekly meetings with a pastor that covered an evangelical Protestant curriculum. To understand the impact of the Protestant curriculum on individuals and their households, the nonprofit organization agreed to randomize across villages whether or not the Protestant curriculum was included in their program. The research did not lead to an increase or decrease in the quantity of sessions conducted, or the number of people reached, by the program. Of note, there is no consensus on the impact of such programs, and they are commonplace in the world. Thus, we argue that there is positive social value in evaluating claims of impact or harm and no ethical quandary for the researchers.
In Kenya, researchers from universities as well as the World Bank collaborated with the Nairobi Water and Sewerage Company, a Kenyan utility company regulated by the government of Kenya. The researchers engaged with the utility company over several years, working broadly and collaboratively in order to increase access to water and sanitation services. Coville et al. (17) report on this collaboration, specifically on an experiment in which one of the treatment arms threatened tenants with having their water access shut off if the water bill was not paid by their landlord. While one could debate the ethics of that intervention arm as a policy, we would argue that if the researchers unambiguously have no “active” role then such a discussion is important and worthwhile but should be discussed as an ethical consideration for the utility company, not the researcher. Suppose counterfactually that the utility company was already shutting off water of tenants whose landlords had not paid and researchers worked with the utility to organize an RCT that shielded some from being shut off. This version of the trial is identical to that described above. However, because the status quo policy potentially infringes on important human rights (and, in fact, cutting off utilities due to nonpayment is a common policy around the world), the ethical issues may make this more important to study. Ultimately, this study also highlights potential ambiguity on the binary categorization of “active.” If a researcher suggests a treatment arm, for example, does this make them responsible for its ethics, even if the researcher has no actual power over the decisions? Putting ethical onus on the researcher could be construed as ignoring (and thus disrespecting) the autonomy and agency of the local policymakers. However, researchers should acknowledge their role in influencing policies and decisions, particularly in long-standing and complicated collaborations, and thus take ethical responsibility for any such influence. We argue that more transparency on this role would be fruitful for social science, so that norms can be better developed on this question.
Extreme examples, however, render this distinction meaningless. For example, research ethicists agree that medical data from experiments conducted on prisoners in Nazi concentration camps cannot now be used to answer medical research questions. However, even if one believes the Vietnam War was unethical, using the lottery draft to study the impact of military service on later earnings is easily defensible as ethical. Perhaps one key distinction is the purpose of the original implementation, because of the incentives created for researchers. If the original implementation was carried out for the sake of research (albeit unconscionable and unethical research), then allowing future researchers to use such data creates perverse incentives to implement unethical research. However, when the original implementation was “natural” [e.g., the Vietnam lottery example above, or a government-run public lottery for secondary school scholarships (18), or reservation of village leader slots for women (19)], then we argue that use of such variation is ethically valid even if there are objections to the underlying policy.
Potential Harms to Participants or Nonparticipants from the Interventions or Policies
Does the Intervention, Policy, or Product Being Studied Pose Potential Harm to Participants or Nonparticipants? Related to This, Are Participants or Likely Affected Nonparticipants Particularly Vulnerable? Also Related to This, Are Participants’ Access to Future Services or Policies Changed Because of Participation in the Study? If the Answer Is Yes to Any of the above, What Is Being Done to Mitigate Such Risks?
We highlight two broad issues. First, despite best efforts, IRBs are unable to fully oversee all aspects of potential harms to participants; local knowledge in particular is invaluable and left largely to self-regulation. Second, potential harm to nonparticipants is itself a topic of substantive research interest and thus also carries important ethical considerations.
Ethical guidelines for research on human subjects are primarily concerned with protecting study participants [Eyal et al. (20)]. Protocols typically expect researchers to highlight the benefits and risks of participation, confidentiality protocols, compensation, and instructions for withdrawing from the study at any stage. While in theory IRBs strive to consider all such issues, in reality IRBs, particularly ones from a different culture, are not always informed enough to raise or adjudicate on them (due to omission by researchers or lack of knowledge of context and practices). Ultimately, the responsibility of the researcher extends beyond mere IRB approval. We emphasize, as well, that “harms” extend far beyond monetary harm, particularly for marginalized populations, such as women engaged in nonwage labor.
Cultural norms and nuances of implementation may pose potential harms to participants that are beyond the viable purview of the IRB. For example, in Ghana and Nigeria we have experienced cultural norms or misinterpretations of implementation activities that led participants to be perceived as “superior” or to suffer stigma or discrimination; these were mitigated by providing additional information or holding community discussions. If not addressed, such perceptions can boost or harm the reputation of participating households, which may have implications for their social and perhaps economic lives. Elected leaders in participating communities may suffer abuse and intense competition from political rivals, as interventions are sometimes perceived as vehicles for local leaders to maintain political power. When such risks are deemed plausible, projects ought to integrate mechanisms to deal with them.
While informed consent and IRBs are primarily concerned with study participants, research may also pose harm to nonparticipants. The National Bioethics Advisory Commission of the United States asserts that regulatory oversight for research with human subjects extends beyond the protection of individual research participants to include the protection of social groups (21). The potential effects on nonparticipants become especially salient when researchers are actively engaged in interventions. It is, therefore, important to evaluate the scope and intensity of risks that are likely to impact nonparticipants. In some cases, community-level consent may be appropriate (see Potential Harms from Data Collection or Research Protocols).
We highlight two examples in which nonparticipants are affected and discuss the ethical implications.|| Ashraf et al. (23) conducted an experiment in Zambia on women’s bargaining power and fertility. In one arm, women were given access to contraceptives alone; in the other, together with their husbands. Since both arms of the trial have implications for a nonparticipating husband in terms of child-bearing outcomes, should there be household-level consent for both arms? In this case no, because obtaining such household-level consent could trigger domestic violence. Women’s control of their reproductive decisions is a basic human right, as is their freedom from domestic violence. A woman’s nonparticipating husband may be affected, but her human rights must take precedence.
Second, incentivizing students to take part in an antiauthoritarian protest poses a direct risk to participants but also poses risks to nonparticipants engaged in the protest (see ref. 24). If students can reasonably be expected to be rowdier in a protest than nonstudents, then incentivizing more student participation may increase the risk faced by nonstudent protesters, who are nonparticipants in the intervention. Inasmuch as it is important to minimize the risk for study participants, it is in this case even more critical that the actions of participants do not increase the risk for nonparticipants.
Consideration of risks and benefits for nonparticipants helps evaluate the ethics and establish appropriate mitigation strategies. Naturally, however, indirect effects are easy to posit qualitatively but much harder to predict confidently in quantitative terms. Indeed, studying indirect effects is itself a robust research agenda (e.g., ref. 25). Thus, we do not argue that the mere posing of a possible indirect effect should render a study unethical. Rather, we suggest considering such issues, and as we learn more from ongoing research we hope this ethical conundrum can become more evidence-based.
We acknowledge that there are unresolved issues in expanding ethical considerations to groups, which require further discussion and consensus-building. Research risks to communities and nonparticipants suggest that ethical protocols focused solely on individual research participant protection will be insufficient. Ethical guidance for research in social science in general, and RCTs specifically, could extend protection beyond individuals to social groups, i.e., family members, community members, and so on (21). Group implications of identified individual-level harms can serve as a starting point to identify risk to nonparticipants and should be properly assessed and addressed in an ethics review process. Working directly with community representatives may prove fruitful for discussing informed consent and developing methods that minimize potential group harms.
Potential Harms from Data Collection or Research Protocols
Are Data Collection and/or Research Procedures Adherent to Privacy, Confidentiality, Risk Management, and Informed Consent Protocols with Regard to Human Subjects? Are They Respectful of Community Norms, e.g., Community Consent, Not Merely Individual Consent, When Appropriate?** Are There Potential Harms to Research Staff from Conducting the Data Collection That Are beyond “Normal” Risks?
The fundamental principle of ethical conduct in human subjects research centers on minimizing the risk of harm and respecting agency (i.e., securing appropriate informed consent).
Informed Consent: Data Collection.
Genuine consent is present only when participants have the capacity to fully understand and process the shared information (26). Processes for obtaining informed consent differ widely, and stated protocols and actual practice may diverge. Field staff may treat consent as a mere formality whose main task is to get the participant’s approval, leading field workers not to fully engage participants. Differences in age, education, or status between interviewer and respondent can create perceived power imbalances that distort the consent process. Electronic data collection offers paths to oversight, e.g., with time stamps on informed consent and survey components. However, perfect monitoring is not possible; staff training is critical. Furthermore, gifts, even minor ones such as soap, may have the unintended consequence of distracting from the importance of the informed consent; compensating someone for their time, on the other hand, is also an important principle to respect.
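Returning to the oversight that electronic time stamps allow: as one illustration (the field names and the two-minute threshold below are hypothetical assumptions, not a standard), a simple audit can flag interviews in which the consent module was completed too quickly for the script to have been read aloud in full.

```python
import pandas as pd

# Hypothetical export from electronic survey software: one row per interview,
# with start and end time stamps recorded for the informed consent module.
surveys = pd.DataFrame({
    "enumerator": ["A01", "A01", "B07"],
    "consent_start": pd.to_datetime(
        ["2023-01-05 09:00:10", "2023-01-05 10:15:02", "2023-01-05 09:05:44"]),
    "consent_end": pd.to_datetime(
        ["2023-01-05 09:04:30", "2023-01-05 10:15:40", "2023-01-05 09:09:58"]),
})

MIN_PLAUSIBLE_SECONDS = 120  # assumed minimum time to read the consent script aloud

surveys["consent_seconds"] = (
    surveys["consent_end"] - surveys["consent_start"]).dt.total_seconds()
flagged = surveys[surveys["consent_seconds"] < MIN_PLAUSIBLE_SECONDS]
print(flagged[["enumerator", "consent_seconds"]])  # candidates for retraining and back-checks
```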
Enumerators are often encouraged to probe during their interviews to obtain more accurate responses. While probing can be useful, it can also create discomfort, be considered intrusive, and expose vulnerabilities of the participant. There is a fine line between probing and harassment.
Moreover, although consent forms provide participants with researcher contact details, participants may be unlikely to contact the researchers when they experience privacy violations, whether because of telecommunication charges or because of perceived language or status barriers.
Informed Consent: Randomization.
Informed consent on the randomization process itself is often not discussed. In medical trials, partly because the medical professional has a “standard of care” therapeutics to which each patient has a clear right, informed consent on the randomization process is essential to having agency over one’s own health decisions. However, in social science, the intervention is often implemented independently from the researchers (see Potential Harms to Participants or Nonparticipants from the Interventions or Policies), and “informed consent” is effectively implied via voluntary participation in an intervention (e.g., if a lender randomly offers people different loans, their decision to borrow is not forced by the lender or by the researcher, and consent is implied by their decision to apply and/or sign a contract for a loan). Furthermore, in social science scarcity often limits the total number of individuals that can receive a certain intervention, and the randomization is being done not for research purposes but rather to allocate a scarce resource fairly and with minimal risk of corruption (e.g., see ref. 18).
Hawthorne and John Henry effects are often discussed as threats to internal validity, but they each generate ethical issues as well (and not merely by reducing the social value of the research). A Hawthorne effect is generated when individuals behave differently because they know they are in a treatment group. A John Henry effect is the analog for a control group. A problem ensues for control group participants if being randomized out carries an “unlucky” stigma that leads them to harbor negative psychological effects. The analog for a treatment group is potentially more nuanced. If, for example, a treatment participant receives a cash transfer, observing jealousy, demands for sharing, or other such effects is not necessarily a risk to internal validity but rather perhaps exactly the kind of change in behavior that the researcher (and policymaker) seeks to learn about. The question for ethics is whether the randomization process itself generated such effects over and above what the transfer itself generates. This is difficult to know empirically, but it is a risk nonetheless. When such a risk is deemed likely, we suggest a discussion.
Informed Consent: Individual vs. Community.
Social norms in many developing countries, particularly in sub-Saharan Africa, raise important issues regarding individual- versus group-level consent. For example, often local norms mandate that visitors to communities first meet with chiefs and community leaders, explain their reasons for visiting the community, and in some cases provide customary gifts, such as presenting kola nuts or drinks. This level of community consent improves the security of field enumerators and researchers. This also may be necessary etiquette for the sake of future researchers, so that future enumerators are not greeted with mistrust. Community consent also could be critical for managing local security risks that may arise (and due to timing be outside the practical purview of the IRB process).
Aside from issues relating to obtaining individual versus community consent, there are secondary ethical concerns that projects ought to consider in the process of obtaining consent at either level. In some settings, research projects may inadvertently contribute to creating a culture of expectation among participants or facilitate corrupt behavior by local community and political leaders. For example, community leaders whose assistance is required during community entry or when obtaining community consent may take advantage of the project to advance their own agenda or illegally take bribes from community members in exchange for some service that the project in fact provides for free.
Local IRBs can prove particularly useful for these issues, as they may have better information about the local expectations and norms.
Privacy, Confidentiality, and Sensitive Information.
Once data are collected, the responsibility for ensuring confidentiality of the information gathered rests on the researcher. Researchers are expected not to betray the trust of participants with respect to data management, storage, and protection of privacy. Failure to ensure the confidentiality of participants may have serious consequences. For example, revelation of information on a participant’s health status (particularly sexually transmitted diseases such as HIV) or sexual behavior can cause harm. Similarly, financial data could be used by a financial institution in ways contrary to a client’s self-interest. Biomarker data, which open the possibility of disclosure of genetic information, raise the stakes further. Confidentiality is, therefore, key to safeguarding participants and reducing the risk of their stigmatization, especially for vulnerable populations. In some cases, the potential harm of a lax data management system may extend beyond the individual participant to his or her extended family.
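As a minimal sketch of these data management principles (the field names, key handling, and hashing scheme are our assumptions; actual projects should follow their IRB-approved data management plan), direct identifiers can be split into a restricted-access crosswalk while the analysis file carries only keyed pseudonyms:

```python
import hashlib
import hmac
import secrets

import pandas as pd

SECRET_KEY = secrets.token_bytes(32)  # stored separately (e.g., an encrypted vault), never with the data

def pseudonymize(participant_id: str) -> str:
    """Keyed hash: without SECRET_KEY, pseudonyms cannot be reversed or re-derived."""
    return hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256).hexdigest()[:16]

raw = pd.DataFrame({
    "participant_id": ["GH-0001", "GH-0002"],
    "name": ["<name 1>", "<name 2>"],  # direct identifier
    "village": ["V-12", "V-07"],
    "hiv_status": [0, 1],              # sensitive outcome
})

# Restricted-access crosswalk, retained only as long as follow-up requires.
crosswalk = raw[["participant_id", "name"]]

# Analysis file: no direct identifiers, pseudonymized IDs only.
analysis = raw.drop(columns=["name"]).assign(
    participant_id=raw["participant_id"].map(pseudonymize))
```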
For sensitive questions (e.g., sex, health, politics, or religion), discussing the specific survey methods may help both to explain adherence to ethical protocols and to guide future researchers looking to ask similar questions.
Potential Harms to Field Staff.
Ironically, risks to field staff are not a consideration for most (if not all) IRBs, since staff are not research participants. Yet field staff are often the unsung heroes of the research effort; without high-quality data, an RCT is akin to an elaborate birthday party without guests.†† Field work does, however, create risks to staff, ranging from “normal” risks, such as road accidents or verbal abuse, to extreme ones, such as sexual harassment or political violence. Pandemics pose further risks to communities, in which traveling staff may unintentionally worsen the spread of a disease, particularly in situations like COVID-19 where carriers are often asymptomatic.
Financial and Reputational Conflicts of Interest
Do Any of the Researchers Have Financial Conflicts of Interest with Regard to the Results of the Research? Do Any of the Researchers Have Potential Reputational Conflicts of Interest?
Financial conflict of interest reporting policies are similar but distinct across universities, multilateral organizations, and research organizations such as the National Bureau of Economic Research or the American Political Science Association. It is important to report the ex-ante possibility of a conflict of interest, so that the research community can judge for itself whether the interpretation of the empirical results was biased in favor of the possibly competing interest. These rules become vaguer when the income source and the data source differ but are from the same industry: if earning consulting income from Bank A and writing a paper with data from Bank B, does the Bank A consulting income need to be reported in the research about Bank B? Ultimately, researcher judgment is required to make such decisions; the goal of the structured appendix is to report potentially important issues without being a burden.
Reputational conflicts of interest are not addressed by any university, multilateral, or research organization rules that we have seen. However, for many researchers, idea promotion is a vital self-interest (whether due to altruism, reputation, or future financial remuneration). While money is an interest that is easily defined, and typically traceable via contracts and payment records, reputational self-interest is person-specific and difficult to define and trace. We thus define a reputational conflict of interest with respect to a particular paper quite broadly: when prior writing or advocacy could be contradicted by specific results in the new paper, and such contradiction would pose reputational risks to the author. Taken to an extreme, such disclosures could be exhausting (and thus ignored); we do not recommend that. We make a key assumption: academically, there is no negative consequence to publishing work that contradicts one’s earlier work (indeed, the effect could be the opposite, to increase credibility). Rather, here we are referring to reputation outside of academia, where a public presence has been built around prior research findings or arguments. We provide two examples from our own research.
First, for Karlan, his paper “Tying Odysseus to the Mast” (27) evaluates the take-up and impact of a commitment savings product in the Philippines. He subsequently published a paper on commitment contracts for smoking cessation. Both find generally positive impacts from the commitment devices. Subsequent to these papers, Karlan cofounded a for-profit company, https://www.stickk.com/, that helps individuals write commitment contracts to change some future behavior (typically weight loss, exercise, or smoking). This company has been written about frequently in the media. Karlan owned equity when he cofounded https://www.stickk.com/ but has subsequently transferred all of his equity to charitable structures, and he receives no financial remuneration from https://www.stickk.com/. As such, he no longer has a financial conflict of interest that is reportable per standard university rules. However, he does have a reputational conflict of interest on the topic of commitment contracts, as his work and https://www.stickk.com/ are often cited in the media as examples of successful “nudges.” A disclosure would be appropriate for Karlan on future work related to commitment contracts.
For Udry, his paper “Gender, Agricultural Production, and the Theory of the Household” (28) finds that households do not achieve Pareto efficiency (yet much empirical work using household data starts with such an assumption and never looks back). If Udry were to write a new paper finding Pareto efficiency, this would conflict with his prior finding but would not, in our opinion, constitute a reputational conflict of interest. Even though his earlier work is commonly cited for testing for efficiency within the household, Udry has no nonacademic reputation that revolves around that finding (and also, for the record, the other three of us are certain Udry would be quite happy to find such evidence, and then would undoubtedly try to concoct some neoclassical economic theory as to why that context yielded such a result but other contexts did not).
Finally, coauthorship with nonconflicted researchers and the registration of RCTs and preanalysis plans can both serve to mitigate reputational conflicts of interest, because they tie the hands of the researchers to some extent. Neither, of course, is dispositive: the choice of specification and the choice of outcomes (timing, measurement approach, and selection of proxies) could all be biased. Naturally, the way results are interpreted (even with refereeing and editing at journals) can also be biased. Thus, reporting such conflicts, and what steps, if any, were taken to mitigate them, could be useful.
Intellectual Freedom
Were There Any Contractual Limitations on the Ability of the Researchers to Report the Results of the Study? If So, What Were Those Restrictions, and Who Were They from?
While reporting of conflicts of interest makes potential biases transparent, parties with vested interests may have other means of controlling research output and thus creating bias. Specifically, researchers and collaborators (here we define collaborators as implementing parties, sources of data, or funders) often sign contracts regarding funding, intervention plans, or data rights. If such contracts infringe on the intellectual freedom of the researcher, this harms the credibility of the research. Failure to disclose such conditions could reduce the social value of the research, constituting an ethical transgression. As such, we have included an explicit question on this in the appendix.
The most egregious infraction is the simplest: a contract that provides an external party (funder or implementer) unilateral power to prevent publication of the results. This should be unacceptable to any researcher interested in putting forth credible research; at a minimum, if the partner holds such rights, this ought to be disclosed in any and all publications (as in ref. 29).
We consider four restrictions to be benign but still worthy of reporting for the sake of completeness: 1) permission required to report the name of the collaborating institution; 2) a comment commitment: the collaborating institution can provide comments on the research output, and the researchers agree to consider these comments to the best of their judgement (but are not committed to incorporating them); 3) a timing commitment: the collaborating institution has a right to see the results before others, for a specified period of time; and 4) proprietary intellectual property: the collaborating institution has developed a proprietary technology (perhaps an algorithm, for example) that is relevant for the research but can only be described broadly without revealing its inner workings (such restrictions limit the scientific value of a publication, which could pose an ethical transgression). We do not assert this to be a complete list of all acceptable restrictions, but we are unaware of any others that we consider acceptable.
A few details regarding researcher independence are important to note:

• Timing matters: Intellectual freedom must be granted in advance. A clause in an agreement stating that this will be decided later makes a farce of the concept of researcher independence.‡‡

• Partial set of coauthors: If a subset of the coauthors on an academic paper are conflicted or lack independence (e.g., most commonly, because they are employed by the funder or the implementing organization), then this section of the appendix ought to make clear that the independent researchers had full rights to publish on their own without the conflicted coauthors.
Feedback to Participants or Communities
Is There a Plan for Providing Feedback on Research Results to Participants or Communities? If Yes, What Is the Plan? If Not, Why Not?
Informed consent is ultimately about respect for the agency of an individual as well as their privacy and property rights over their information. However, attention to this often focuses narrowly on the informed consent process itself. Recently more attention, but still too little, has been placed on engaging with research participants, local government, and communities after the research is complete (30).
To further the principles of respect for persons, beneficence, and justice, researchers should consider providing feedback to research participants and communities (22, 31). Providing such feedback reduces the likelihood that participants feel exploited by the researcher. Public awareness of research results may also be a stepping-stone toward research uptake that improves the lives of poor communities. Providing feedback also improves the willingness of communities to participate in future research. Note that in many cases key gatekeepers (community leaders) may have been part of community-level consent (see above); when reaching all participants is not viable, feedback to such gatekeepers may be appropriate.
Despite the ideal of providing such feedback, there are several practical challenges that often render this aspiration unwise. We highlight three: budget, finding people, and literacy/knowledge necessary to understand the research.
Budget.
Follow-up visits to all communities (and finding people within them) can be costly, and in some cases these costs outweigh the benefits of directly providing feedback. However, with cellphone and internet access expanding rapidly, communicating results of the research to participants could be viable, e.g., posting information on a website, sending individuals text messages, or sending prerecorded video or audio by phone.
Literacy/Knowledge Necessary to Understand the Research.
Depending on the context and the research, study participants may struggle to understand research findings. Poor understanding of research findings can generate emotional distress for research participants and affect community involvement in future studies. Therefore, providing feedback to participants should not be done solely for the purpose of ticking a box but with genuine engagement. Furthermore, any study with nuanced results may see them misused, intentionally or unintentionally; researchers must examine their context so that proper measures can be taken to enhance understanding of the results. We propose that the offer of feedback to participants be included as part of the informed consent process for social science research, although IRBs may raise concerns if this offer creates undue influence to participate (perhaps due to unrealistically optimistic beliefs about the value of that feedback). As also argued in the medical literature, this process of providing feedback must be designed to provide a clear opportunity for the participant to decline receipt of results (see ref. 32).
For policy-oriented research, it may be effective to provide feedback at a much higher level than at the individual level. Policy prescriptions often are made at the community, district, or national level. Researchers should consider the level at which to provide feedback to be relevant for policy.
Providing feedback to participants may itself affect behavior. Before implementing such a process, the possible adverse consequences of providing feedback should be considered. Given the complexity of some research, a risk is that feedback is conveyed but then misconstrued, leading to detrimental actions. This concern reinforces the importance of careful communication and thoughtful consideration of the way the results may be interpreted. Also, of course, this is a question for research itself: Does provision of such feedback change later behavior?
Foreseeable Misuse of Research Results
Is There a Foreseeable and Plausible Risk That the Results of the Research Will Be Misused and/or Deliberately Misinterpreted by Interested Parties to the Detriment of Other Interested Parties?
In research settings characterized by strong imbalances of power between interested parties, there may be foreseeable, plausible risks that a powerful party might use findings from the research in ways that will harm participants or nonparticipants. Research might reveal a vulnerability of a subpopulation that can be exploited for the gain of a more powerful party. If this is the case, the researcher has an ethical obligation to mitigate these risks.
An example might be an innovation in microcredit that research shows is, on average, profitable for both lender and borrower. However, consider heterogeneity. First, suppose that the microcredit innovation harms a vulnerable and identifiable minority of borrowers. Second, suppose that while the innovation leads to an increase in borrower income, it has other psychological or social costs that outweigh the income gain for at least some participants. In addition, suppose there is a foreseeable and plausible risk that for-profit lenders could misuse the “average” research results to advocate for expansion of the innovation without mitigating its deleterious impacts. This calls for the researcher to take steps to address the possibility of such misuse by considering such risks ex ante and incorporating a measurement and communication strategy that ensures complete and balanced reporting of the results.
In the context of a repressive or nondemocratic government, additional care is required. Location-specific research to improve digital identification services in such a context, for example, raises the possibility that the results could be used for repressive purposes, and researchers should consider this possibility when planning and designing their work.
We use the phrase “foreseeable and plausible” to limit the scope of this question. Research findings can be misused in many ways that should not be considered the responsibility of the researcher (e.g., companies making use of the discovery of a novel behavioral pattern to sell unneeded products, or the police force of a repressive government adopting bureaucratic efficiency techniques discovered in research elsewhere). The ethical responsibility of the researcher is to consider harmful uses of their work that are predictable and reasonably tied to the research context.
Conclusion
While much has been written on ethics of research, we perceive there to be a large empirical research gap. Broadly, we see two categories of empirical research needed: documenting contextual factors that render a particular ethical concern critical or negligible and learning field research methods to improve adherence to stated intentions.
Examples of the first category include the following: Under what circumstances does random assignment (whether to treatment or control) generate stigma that harms participants? Does promising the control group a later service change their behavior now, during the observation period of the study?
Examples of the second category include the following: How much of an informed consent script is understood by participants, and what wording and content changes lead to higher or lower comprehension rates (e.g., see refs. 33 and 34)? What survey methods and staff trainings are most effective for reducing harm from data collection, e.g., from adverse emotional reactions to sensitive questions? When does conducting surveys change later behavior, i.e., a mere measurement effect (e.g., see ref. 35)?
We hope that the proposed structured ethics appendix will accompany papers and spark further consideration of ethical issues in field research. We also would support inclusion of these questions, or a subset thereof, in grant applications. Furthermore, the anticipation of writing such appendices may shape design decisions. We have aimed for brevity, understanding that anything too burdensome will not be adopted. In a public comment period we received several useful suggestions to include further questions; we adopted several, but not all. For example, some suggested ex post discussion of actual harms. We decided not to include this because such harms are for the most part covered by IRB protocols, and we also believe the ethical decisions ought to be adjudicated blind to results; otherwise, “no harm happened” could be misconstrued as evidence of an ethical design. The risk, however, is that failure to discuss realized harms may thwart improvements in the ex-ante reasoning of researchers using similar methods. This may be an area that shifts, if our proposed guidelines are adopted and adapted. We also did not extend the reporting of potential harms to reporting of potential welfare changes. Although we admire the aspiration, we fear that such an inclusion would inevitably require considerable work and untenable assumptions. Furthermore, such loss aversion (attending to harms more than to gains) is commonplace in both law and ethics. Finally, we did not include a call to discuss the social value of the research with respect to potential policy. Although this is an important consideration, and articulating the policy implications may improve the research-to-policy nexus, we believe this would be too burdensome if addressed more robustly than merely restating points made in most paper introductions. We did not receive any suggestions for items to remove, although a common suggestion was to make the appendix shorter to increase participation and impact.
We do not imagine that these nine questions, as posed, are perfect, and we hope a by-product of this proposal will be a living document that is updated over time as usage is observed, questions are improved, and norms change. We are posting the proposed structure and seeking further comments on the Global Poverty Research Lab website at Northwestern University and will also promote it to peer institutions for cross-posting. We will also monitor actual usage of structured appendices to identify any improvements authors make that can be incorporated into the guidelines.
Research ethics is ultimately self-regulated, and our goal is to create a norm of discussing these issues explicitly, both to strengthen norms and research practices and to improve the discussion of these issues. To that end, we believe all cases ought to be discussed, not just those that raise questions. We hope that a norm of including a structured ethics appendix can help the research community advance.
Acknowledgments
We are grateful to the editors and reviewers, as well as Gharad Bryan, Shawn Cole, Andrew Foster, Christina Gravert, Steve Glazerman, Jessica Goldberg, Julian Jamison, Heather Lanthorn, Jeff Mosenkis, Cleo O’Brien-Udry, William Pariente, Kate Vyborny, and especially to Douglas Mackay, for comments and discussions.
Footnotes
The authors declare no competing interest.
This article is a PNAS Direct Submission. P.J.F. is a guest editor invited by the Editorial Board.
*There is a robust debate on the meaning and importance of clinical equipoise for ethical medical trials (see refs. 6 and 7). Is there a distinction between the role of doctors as researchers and their role as physicians providing treatment? Or do the ethical principles governing medical therapy also apply to clinical research (8)?
†A salient example of a setting without scarcity is the policy decision of how a given program of cash grants for child welfare should be transferred to a married couple. Should the grant be given directly to one spouse or the other, or to both together? Or divided between the two? If there were a consensus among experts that providing child care grants to the woman of a couple improved child welfare more than providing the grant to the man, without other adverse consequences, then randomizing the provision of grants to men would have no scarcity justification (and would also violate policy equipoise).
‡The growing selective trials literature provides a promising path forward for capturing local information (12).
§Policy equipoise could be restored if the control group were replaced by an experimental arm expected to be as effective as, or more effective than, the CCT.
¶Naturally, “most affected” is undoubtedly mismeasured and not known with certainty; as such, prioritization could be implemented probabilistically, with each individual’s likelihood of assignment to treatment a function of their estimated prioritization. This maintains the ability to test impact throughout while still adhering to this ethical principle.
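To make the mechanics of this footnote concrete, the following is a minimal sketch (ours, for illustration only, not part of the original proposal). It assumes each individual has a scalar estimated-priority score; the function name and the probability bounds p_min and p_max are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def assign_probabilistically(priority_scores, p_min=0.2, p_max=0.8):
    # Rescale estimated priority scores to [0, 1].
    scores = np.asarray(priority_scores, dtype=float)
    spread = scores.max() - scores.min()
    ranks = (scores - scores.min()) / spread if spread > 0 else np.full_like(scores, 0.5)
    # Individuals estimated to be most affected receive the highest
    # treatment probability; bounding probabilities away from 0 and 1
    # keeps everyone genuinely randomized, preserving impact estimation.
    p_treat = p_min + (p_max - p_min) * ranks
    treated = rng.random(scores.size) < p_treat
    return treated, p_treat

# Hypothetical estimated-need scores for five individuals.
treated, p_treat = assign_probabilistically([0.1, 0.4, 0.9, 0.7, 0.2])
# Because each assignment probability is known, treatment effects can
# still be estimated, e.g., with inverse-probability weights:
# 1 / p_treat for treated units and 1 / (1 - p_treat) for controls.
```

The design choice here is the trade-off the footnote describes: the closer p_min and p_max are to 0 and 1, the more the design prioritizes those estimated to be most affected, at the cost of statistical power for estimating impacts.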
#See Glennerster and Powers (15) for a robust discussion of this distinction and its implications, and some further helpful examples.
||McDermott and Hatemi (22) provide additional examples.
**Examples of subquestions to consider as part of the broad question: Are there any risks that could ensue from the data collection process or data storage, e.g., discomfort at being asked certain questions or a breach of confidentiality? If so, what are the mitigation strategies? Are there costs to the participant from the data collection process, such as their time, and if so, what is the strategy or rationale for offsetting this cost? Because these are all issues covered by most IRB processes, a sufficient explanation for a “yes” response may be to provide the IRB approval numbers for all IRBs that have approved the project. However, if there are particular issues that are important to discuss, please do so here.
††Analogy courtesy of Christina Gravert, with the follow-on explanation: “Tons of work to set up and then makes you cry out of disappointment.” An alternative and subtler analogy, courtesy of Andrew Foster: “A compass in Antarctica.”
‡‡In our collective experience, we have encountered such a request only once (specifically, the assertion was “to decide researcher independence later, after results are in”). The individual arguing that this met a reasonable standard of researcher independence led a large corporate “philanthropy” effort and subsequently left their post at the corporation to take a high-ranking position with the Trump administration.
This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2024570118/-/DCSupplemental.
Data Availability
There are no data underlying this work.
References
1. Sen A., Commodities and Capabilities (Elsevier, 1985).
2. Rawls J., “Social unity and primary goods” in Utilitarianism and Beyond, Sen A., Williams B., Eds. (Cambridge University Press, 1982), pp. 159–186.
3. London A. J., “Equipoise: Integrating social value and equal respect in research with humans” in The Oxford Handbook of Research Ethics, Iltis A. S., MacKay D., Eds. (Oxford University Press, 2020).
4. Slough T., et al., Adoption of community monitoring improves common pool resource management across contexts. Proc. Natl. Acad. Sci. U.S.A. 118, e2015367118 (2021).
5. Freedman B., Equipoise and the ethics of clinical research. N. Engl. J. Med. 317, 141–145 (1987).
6. Gifford F., So-called “clinical equipoise” and the argument from design. J. Med. Philos. 32, 135–150 (2007).
7. Chiong W., The real problem with equipoise. Am. J. Bioeth. 6, 37–47 (2006).
8. Miller F. G., Brody H., Clinical equipoise and the incoherence of research ethics. J. Med. Philos. 32, 151–165 (2007).
9. MacKay D., The ethics of public policy RCTs: The principle of policy equipoise. Bioethics 32, 59–67 (2018).
10. MacKay D., Government policy experiments and the ethics of randomization. Philos. Public Aff. 48, 319–352 (2020).
11. Barrett C. B., Carter M. R., The power and pitfalls of experiments in development economics: Some non-random reflections. Appl. Econ. Perspect. Policy 32, 515–548 (2010).
12. Chassang S., Padró i Miquel G., Snowberg E., Selective trials: A principal-agent approach to randomized controlled experiments. Am. Econ. Rev. 102, 1279–1309 (2012).
13. Karlan D., Osei R., Osei-Akoto I., Udry C., Agricultural decisions after relaxing credit and risk constraints. Q. J. Econ. 129, 597–652 (2014).
14. Glewwe P., Park A., Zhao M., A better vision for development: Eyeglasses and academic performance in rural primary schools in China. J. Dev. Econ. 122, 170–182 (2016).
15. Glennerster R., Powers S., “Balancing risk and benefit: Ethical tradeoffs in running randomized evaluations” in The Oxford Handbook of Professional Economic Ethics, DeMartino G. F., McCloskey D., Eds. (Oxford University Press, 2016), pp. 366–401.
16. Bryan G., Choi J. J., Karlan D., Randomizing religion: The impact of Protestant evangelism on economic outcomes. Q. J. Econ. 136, 293–380 (2020).
17. Coville A., Galiani S., Gertler P. J., Yoshida S., “Enforcing payment for water and sanitation services in Nairobi’s slums” (NBER Working Paper 27569, National Bureau of Economic Research, Cambridge, MA, 2020).
18. Angrist J., Bettinger E., Bloom E., King E., Kremer M., “Vouchers for private schooling in Colombia: Evidence from a randomized natural experiment” (NBER Working Paper 8343, National Bureau of Economic Research, Cambridge, MA, 2001).
19. Chattopadhyay R., Duflo E., Women as policy makers: Evidence from a randomized policy experiment in India. Econometrica 72, 1409–1443 (2004).
20. Eyal N., Lipsitch M., Bärnighausen T., Wikler D., Opinion: Risk to study nonparticipants: A procedural approach. Proc. Natl. Acad. Sci. U.S.A. 115, 8051–8053 (2018).
21. Sharp R. R., Foster M. W., Community involvement in the ethical review of genetic research: Lessons from American Indian and Alaska Native populations. Environ. Health Perspect. 110 (suppl. 2), 145–148 (2002).
22. McDermott R., Hatemi P. K., Ethics in field experimentation: A call to establish new standards to protect the public from unwanted manipulation and real harms. Proc. Natl. Acad. Sci. U.S.A. 117, 30014–30021 (2020).
23. Ashraf N., Field E., Lee J., Household bargaining and excess fertility: An experimental study in Zambia. Am. Econ. Rev. 104, 2210–2237 (2014).
24. Bursztyn L., Cantoni D., Yang D. Y., Yuchtman N., Zhang Y. J., Persistent political engagement: Social interactions and the dynamics of protest movements. Am. Econ. Rev. Insights 3, 233–250 (2021).
25. Miguel E., Kremer M., Worms: Identifying impacts on education and health in the presence of treatment externalities. Econometrica 72, 159–217 (2004).
26. Hewlett S. E., Is consent to participate in research voluntary? Arthritis Care Res. 9, 400–404 (1996).
27. Ashraf N., Karlan D., Yin W., Tying Odysseus to the mast: Evidence from a commitment savings product in the Philippines. Q. J. Econ. 121, 635–672 (2006).
28. Udry C., Gender, agricultural production, and the theory of the household. J. Polit. Econ. 104, 1010–1046 (1996).
29. EGAP, EGAP research principles. https://egap.org/wp-content/uploads/2020/05/egap-research-principles.pdf. Accessed 15 March 2021.
30. Friedlander S., et al., “Sharing research results with participants: An ethical discussion” (MIT Poverty Action Lab, 2021).
31. Fernandez C. V., Kodish E., Weijer C., Informing study participants of research results: An ethical imperative. IRB 25, 12–19 (2003).
32. Rigby H., Fernandez C. V., Providing research results to study participants: Support versus practice of researchers presenting at the American Society of Hematology annual meeting. Blood 106, 1199–1202 (2005).
33. Wade J., Donovan J. L., Lane J. A., Neal D. E., Hamdy F. C., It’s not just what you say, it’s also how you say it: Opening the ‘black box’ of informed consent appointments in randomised controlled trials. Soc. Sci. Med. 68, 2018–2028 (2009).
34. Mills N., et al., Training recruiters to randomized trials to facilitate recruitment and informed consent by exploring patients’ treatment preferences. Trials 15, 323 (2014).
35. Zwane A. P., et al., Being surveyed can change later behavior and related parameter estimates. Proc. Natl. Acad. Sci. U.S.A. 108, 1821–1826 (2011).