Published in final edited form as: Am J Bioeth. 2010 Mar;10(3):W1–W3. doi: 10.1080/15265161003708508

Response to Open Peer Commentaries on “Community Members as Recruiters of Human Subjects: Ethical Considerations”

Christian Simon 1, Maghboeba Mosavel 2

Both Constantine (2010) and Fry (2010) highlight key discussions and debates in the public health literature that we missed. We are also gratified to learn that Molyneux, Kamuya, and Marsh (2010) have identified similar issues in their impressive work in Kenya with respect to employing research staff who are embedded in the community. We concur in principle with their recommendation that community-based research staff should be “professionalized,” perhaps much in the same way as staff in practice-based research networks (PBRNs) and in other entities in the US increasingly are. Of course, any move to professionalize research staff will need to address issues such as associated legal obligations, finite funding, and the temporality of research and researcher presence. There is also the question of whether the process of transforming community fieldworkers into professional researchers may diminish the authenticity of the insights and lessons that community members have to offer as a result of their nonprofessional “insider” status relative to the research. These and other possibilities need to be explored before efforts are undertaken to permanently professionalize the role of community-based researchers.

Landy and Sharp (2010) provide a nice extension of some of our ideas with their notion of “vertical” and “horizontal” exploitation. Among other uses, the idea of horizontal exploitation draws attention to the potential for community-embedded research intermediaries to advance their own interests or agenda through the portal of research. We did not find direct evidence of this in our South African work. Further, our research staff shared many characteristics with the research subjects they were asked to interact with, including relative age, gender, economic status, and cultural background. These shared characteristics do not rule out the possibility that research staff have interests dissimilar to or in conflict with those of subjects; yet one reason for matching staff and subjects on these characteristics is to promote mutual understanding and trust. Nonetheless, Landy and Sharp provide some realistic hypothetical examples of horizontal exploitation that give one pause. Our one concern is that they weight the prospects heavily on the side of exploitation. Through the unique activities and levels of access research provides, community-embedded recruiters are potentially able to engage their social and community networks in ways that need not be exploitative, that align with the accepted ethics of conducting human subject research, and that may be other-oriented toward the welfare of their peers and community at large. In her commentary, Anderson (2010) points to just such possible benefits of having community-based organizations (CBOs) involved in recruitment. One can also imagine patient advocacy groups—the negative example Landy and Sharp give—exerting positive, justice-driven “horizontal influences” on the recruitment process, such as helping to identify potential research subjects representatively and to distribute IRB-approved announcements for the research. Among other entities, patient advocacy groups could also exert positive “upward vertical influence” on researchers by capturing and amplifying patient-level concerns that might otherwise escape the researcher. Of course, as the commentaries from both Landy and Sharp and Anderson make clear, there is plenty of potential for coerciveness and exploitation in these intermediated relationships. Yet it is not a given that this potential will be realized, and steps can be taken to keep it from happening. Thus, while we find their argument plausible and important, we would urge Landy and Sharp to adopt more judgment-neutral terminology—“influence” as opposed to “exploitation” comes to mind—which does greater justice to the multidirectional processes and effects implied by their concept.

Constantine provides a useful practitioner’s review of the more technical and nuanced components of respondent-driven sampling (RDS). Yet we are struck by her out-of-the-gate criticism that we “provide no contextual basis” for RDS. The broader context for our evaluation of peer-driven recruitment, as well as much of our self-reflection on research methodology in general, includes the deep tension inherent in efforts to conduct scientific research in developing-world communities struggling with the challenge of everyday survival. Apart from our making this point in the paper, the personal narratives of the research staff involved in our project surely render this context vividly evident. Incidentally, this context also leads us to take issue with Constantine’s (and other RDS proponents’) claim that the monetary incentives often used in RDS act simply as “symbolic motivators.” Having worked for over two decades in a spectrum of communities where more adults are unemployed than employed, where pasteurized milk and toothpaste are considered out-of-reach luxuries, and where children are routinely mugged for the pittances they may carry in their pockets, we are dismayed to learn that money can be viewed as having a purely symbolic value. The arbitrariness of current IRB and best-practice cut-off points for respondent remuneration is nowhere better illustrated than in very resource-poor settings, where poverty and basic need leave no room for abstractions, no matter how small the cash amount.

To pick up on some of Constantine’s other claims: as behavioral health researchers with social science training, we appreciate the deep theoretical roots and utility of RDS. However, we are not convinced that RDS has managed its associated ethical issues as effectively as Constantine implies. For example, the use of coupons as Constantine describes it may amount to a privacy protection, but it provides no reassurance that coupon providers (or “referrers”) will not unduly influence members of their peer or social networks as they deliberate study entry. Constantine moves directly from her discussion of coupons to the claim that RDS spares potential respondents from the “invasive” nature of other recruitment methods. She writes that RDS seems “more consistent with the spirit of voluntariness in research participation” than these other methods. These claims are unsubstantiated. Brief as her commentary necessarily is, Constantine appears unwilling to even consider the ethical implications of the two elements she describes as central to RDS: (1) that referrers themselves are, or have been, participants in the research for which they are seeking referees; and (2) that this search for referees involves distributing a coupon or coupons to “someone they know.”

The key question these two elements raise is, what reassurance can be provided that RDS does not compromise voluntariness given experiential knowledge on the part of the referrer of both the research (having participated in it) and the referee (being someone they know)? In other words, how do referrers keep from unduly influencing the process of referral with any number of accumulated (anthropologists would say embodied) opinions, attitudes, or biases emerging from their participation in the research, on the one hand, and from their personal, social, or other ties to the referee, on the other? The use of coupons provides no reassurance on this score. Other strategies such as educating referrers in the basics of research ethics and the need for neutrality in recruitment may help counteract the potential for undue influence. However, whether these strategies will gain traction on the smooth surface of interpersonal and social interaction that may stretch back for many years is an unanswered question.

There is also a potential third category of experience or knowledge that is unaccounted for, namely what the referee knows about the referrer and how this knowledge may sway their decision to participate or not participate in the research. Is the referrer someone they greatly respect or trust? As Bean and Silva (2010) point out in their excellent commentary on proximity in clinical research, trust in the clinician-investigator can add value to a consent and recruitment process. As we point out in the paper, this may be the case in RDS too. However, trust is also a ready foil for undue influence or coercion. In clinical research, there are established safeguards to minimize abuses of trust, notably the fiduciary relationship between clinician-researcher and prospective research subject. As Bean and Silva point out, this safeguard is notably absent from the peer-recruitment process. If a study could be designed and implemented to test for the power of proximity and undue influence in RDS by robustly exploring the effects of proximity, trust, and other key variables (we described a possible design in the paper), the emerging data might show that there is little to no such distinctive power to RDS strategies. Until we have such robust data, however, any claim to RDS’s superior noninvasiveness or spirit of voluntariness should be made with a great deal of humility.

Phillips (2010) illuminates from a historical perspective some of the reasons why RDS and similar approaches are cause for concern. His commentary provides a compelling glimpse into the potential pitfalls of mutual familiarity and trust by reviewing some of the details of Nurse Rivers’ relationship with the participants of the Tuskegee Syphilis Experiment. As Phillips points out, the conflicting dual role of Nurse Rivers as community peer and study employee was one among many contributing factors that compromised the informed consent process for the study. Phillips refers to the “Tupperware syndrome,” which may unfold in research in a fashion similar to the moral compulsion that drives people to participate in Tupperware parties and make purchases they would otherwise not make. While RDS is not simply another form of direct or network marketing—the ultimate goals of each are fundamentally different, for one—RDS is confronted with dynamics of social proximity and moral obligation akin to those that motivate people to attend Tupperware parties and rarely leave empty-handed. Efforts to promote privacy in RDS are laudable, but they should not be confused with addressing the potential for RDS to reproduce the subtle coerciveness of socially embedded marketing events such as Tupperware parties. That is, protecting referee privacy in RDS is tantamount to ensuring that people travel to the Tupperware party unrecognized by the company’s marketers (in RDS, the researchers). But the real conundrum is sorting out and trying to manage what drives people to the event in the first place. Is it altruism? A need for more plastic ware? Or is it the social influence that defines the event itself, such that, if the host of the party were not known to the attendees and had not asked them to attend, far fewer people, if any, would show up?

Another concern in research is, of course, that what people carry away from these networking events is not as benign (or necessarily as useful) as Tupperware. People may be harmed in a variety of ways by research, including physically, socially, emotionally, and psychologically. In the case of RDS research that presents minimal, if any, potential for such harm, one might ask whether sampling processes such as RDS are being held to a standard disproportionate to the level of harm involved. RDS is commonly used in behavioral research that is typically minimal risk. Yet RDS can potentially be used to recruit subjects to more-than-minimal-risk research as well. In that case, the question of whether and how significantly RDS relies on situational contexts of influence takes on more profound implications, including the potential for people to submit to physical, social, or other harms without adequately deliberating these implications beforehand. To establish whether RDS and similar approaches actually promote or minimize this potential, we reiterate a concluding recommendation from our paper, namely that robust comparative studies are needed to evaluate the ethics, and not just the utility, of these approaches. The ethnographic and other accounts that Fry references are important in their own right, but they ultimately leave unanswered the issues raised in our paper (and this commentary). What is needed are compelling data to quell the debate on these issues, so that the “fervor” (Fry’s word) with which RDS and similar approaches are being adopted can be justifiably stoked or, alternatively, scaled back.

Many proponents and skeptics of RDS and similar approaches share the ultimate goal of reaching marginalized and disempowered groups and communities and generating knowledge that is relevant to their future wellbeing. A body of “comparative effectiveness” data focused on key ethical issues, such as the effect of proximity on voluntariness (a measurable prospect), may go some way toward harmonizing the flow of traffic on the way to this goal.

Acknowledgments

We thank the authors of all seven commentaries for their insightful reflections on our article, “Community members as recruiters of human subjects: Ethical considerations.”

Contributor Information

Christian Simon, Program in Bioethics and Humanities, School of Medicine, University of Iowa.

Maghboeba Mosavel, Department of Social and Behavioral Health, Virginia Commonwealth University.

References

1. Anderson EE. The role of community-based organizations in the recruitment of human subjects: Ethical considerations. American Journal of Bioethics. 2010;10(3):20–21. doi: 10.1080/15265161003599667.
2. Bean S, Silva DS. Betwixt & between: Peer recruiter proximity in community-based research. American Journal of Bioethics. 2010;10(3):18–19. doi: 10.1080/15265160903581783.
3. Constantine M. Disentangling methodologies: The ethics of traditional sampling methodologies, community-based participatory research, and respondent-driven sampling. American Journal of Bioethics. 2010;10(3):22–24. doi: 10.1080/15265160903585628.
4. Fry CL. Ethical implications of peer-driven recruitment: Guidelines from public health research. American Journal of Bioethics. 2010;10(3):16–17. doi: 10.1080/15265160903585610.
5. Landy DC, Sharp RR. Examining the potential for exploitation by local intermediaries. American Journal of Bioethics. 2010;10(3):12–13. doi: 10.1080/15265160903585586.
6. Molyneux S, Kamuya D, Marsh V. Community members employed on research projects face crucial, often under-recognized, ethical dilemmas. American Journal of Bioethics. 2010;10(3):24–26. doi: 10.1080/15265161003708623.
7. Phillips T. Protecting the subject: PDR and the potential for compromised consent. American Journal of Bioethics. 2010;10(3):14–15. doi: 10.1080/15265160903585602.
8. Simon C, Mosavel M. Community members as recruiters of human subjects: Ethical considerations. American Journal of Bioethics. 2010;10(3):3–11. doi: 10.1080/15265160903585578.
