Abstract
Objectives. We assessed the effectiveness of various systems of community participation in ethical review of environmental health research.
Methods. We used situation analysis methods and a global workspace theoretical framework to conduct comparative case studies of 3 research organizations at 1 medical center.
Results. We found a general institutional commitment to community review as well as personal commitment from some participants in the process. However, difficulty in communicating across divides of knowledge and privilege created serious gaps in implementation, leaving research vulnerable to validity threats (such as misinterpretation of findings) and communities vulnerable to harm. The methods used in each collaboration solved some, but not all, of the problems that hindered communication.
Conclusions. Researchers, community spokespersons, and institutional review boards constitute organizational groups with strong internal ties and highly developed cultures. Few cross-linkages and little knowledge of each other cause significant distortion of information and other forms of miscommunication between groups. Our data suggest that organizations designed to protect human volunteers are in the best position to take the lead in implementing community review.
It is well established that research can pose risks to participants. In recent years it has been recognized that research can also pose a threat to communities, because what individuals say when surveyed may be inappropriately generalized to their entire community. To protect communities from these and other potential harms, a new ethical principle, respect for communities, was established.1 Community review of research is intended to protect against the collective harms that are a particular risk of environmental health research2 and that are especially important in historically marginalized communities that have borne disproportionate burdens of both environmental degradation and ill-considered research.3
Despite the ethical and scientific benefits of such a review, its implementation is piecemeal, with researchers, citizens, and community-based organizations struggling to achieve this oversight. Wallace et al. suggest that this problem can be understood from the perspective of global workspace theory, which posits that organizations are composed of cognitive work groups, systems designed to generate information and use it in making choices, decisions, appropriations, sanctions, and evaluations, among other tasks.4–6 These work groups are internally organized teams that are externally linked to one another to create a larger system of distributed cognition. To be efficient, these interdependent teams must function both collectively and individually and must exchange information as rapidly and with as little distortion as possible. In the institution we studied, university researchers, community representatives, and ethical review boards were work groups whose separate but interrelated efforts formed a diffuse system that addressed the ethics of environmental health research.
Wallace et al.'s model notes 3 obstacles to work group functioning: inattentional blindness, rate distortion, and policy or ideology.4–6 Inattentional blindness is inherent in all observations, because it is impossible to take in all the information available in a situation. Rate distortion is a fundamental property of all information exchange: information travels through specific established channels, a process that is efficient but that inevitably causes some content to be lost or distorted, thus limiting the potential for innovation to emerge from the collaboration. Policy and ideology inform the starting assumptions that affect what people are able to see, hear, and use when confronted with new information; resource allocation and cultural constraints strongly influence this obstacle.
We used Wallace et al.'s model as a tool to examine community ethical review for a variety of research projects involving environmental health. Although all of the projects shared an interest in human health outcomes, their methods ranged from molecular analysis of biological samples to in-depth qualitative research in specific neighborhoods. The studies, both longitudinal and cross-sectional, were conducted in a variety of locations and populations. We defined the process of review as a collective inquiry in which researchers, clinicians, and community representatives constitute work groups that must create a functioning, shared workspace in which the review of environmental health research is conducted. Although intragroup dynamics such as leadership and facilitation are commonly understood to influence collaborative efforts, our global workspace approach was directed more broadly at systemic, intergroup issues. Here we present examples of the obstacles to efficient distributed cognition seen in an environmental health research workspace.
We used situation analysis to assess the interactions among 3 work groups at 1 medical center in a major US city. We examined these interactions at each of the 6 steps of the research process: posing a question, designing a study to answer the question, obtaining institutional review board (IRB) approval, collecting data, analyzing data, and disseminating findings. Because research that is developed without local input,7 or whose findings are inadequately communicated to participants, can harm volunteers and their communities, the conduct of each stage has ethical ramifications. In line with Wallace et al.'s theory that contexts shape cognition, we also examined the context of the work groups.
METHODS
We used situation analysis methods in comparative case studies of 3 organizations undertaking environmental health research. Widely used in research in health, business, and other subjects,8–11 situation analysis examines both a selected interpersonal episode and its context.12,13
The study of situations is based on 80 years of research in sociology, psychology, and other fields.14–17 Key aspects of situations include the goals of the key players, the elements of the social and behavioral repertoires of the players, the roles players fill, the rules governing interactions, the skills needed to fill the roles and follow the rules, the difficulties people face in carrying out their roles, the environmental setting, and the social and cultural concepts used to describe interactions.14
The examination of context entails defining and describing the situation's social systems.13 The assumption is that these larger social systems influence and constrain the unfolding of the interpersonal episode that is being observed.18–22
Participants
Three research organizations within a large university setting agreed to participate in the study. Each was involved with environmental health research and had some mechanism for gathering input from communities that might be affected by the research. They differed on the points in the research process at which they engaged in community review (Table 1). We designated these organizations the IRB, the Patchwork Center, and the Cohort Center because of their different orientations to community review.
TABLE 1. Stages of the Research Process at Which Community Representatives Were Involved, by Organization

| | Proposal of Research Question | Development of Protocol | Approval of Protocol by IRB | Data Collection | Data Analysis | Dissemination of Findings |
|---|---|---|---|---|---|---|
| IRB | | | X | | | |
| Patchwork Center | X | X | X | X | X | X |
| Cohort Center | | | | X | X | X |

Note. IRB = institutional review board. 'X' indicates a stage in which community representatives were involved. Patchwork Center and Cohort Center are pseudonyms for 2 research groups within the medical research center.
Center directors introduced our research team to their colleagues. Individuals were approached by research team members, who obtained informed consent. Approximately 60 people were observed or interviewed, of whom 60% were from the medical center and 40% from the community. All participants were older than 18 years, and they varied in gender, age, and ethnicity. Most had at least a college education, and many had advanced degrees. All were involved in some way with research being conducted by the participating organizations.
Data Collection and Analysis
We conducted 4 types of data collection: semistructured interviews with people in different roles (e.g., community representative, university researcher, IRB member), observations of the interactions of people in different roles, observations of the larger setting around the university, and compilation of archival materials and ephemera (such as conference brochures, meeting minutes, and mailings) to provide background.
We observed participants in a variety of settings, such as one-on-one interviews, meetings, and conferences. We interviewed 15 people individually; each interview lasted 45 to 90 minutes. Interviews were audiotaped, and précis were prepared by a research team member other than the interviewer. We observed group processes at meetings that lasted a minimum of 2 hours; some sessions continued over several days. Observers took notes during the meetings and wrote up final versions afterward.
Research team members attended the Environmental Protection Agency's 2003 meeting of Children's Environmental Health Centers, the National Institute of Environmental Health Sciences–National Institute of Occupational Safety and Health 2004 environmental justice meeting, and the 2006 Annual Meeting of Public Responsibility in Medicine and Research to deepen our understanding of the context within which the 3 work groups functioned. Programs and publications, session transcripts, Web sites, and researchers' field notes from these meetings helped elucidate the knowledge, language, and rules of each work group and their influence on reviews of research.
We used standard qualitative methods (creating précis of interviews and other data, coding observations and interviews, and identifying key themes)23,24 in data analysis to describe the organizational structure of the groups and the problems that arose in interactions between representatives of different groups. We focused particularly on the ways inattentional blindness, rate distortion, and policy and ideology affected the interactions we learned about or observed. We rigorously examined the connections between the context and situation throughout the analysis, as recommended by Mitchell.13 Research team members reviewed the materials and contributed to revising project reports.
RESULTS
We observed distinct divisions between the local community, which was a poor community of color, and the large, resource-rich university setting, which was dominated by White scientists. An important, but more subtle, divide existed between the IRB and the other 2 groups we studied: the IRB was not a peer to the 2 research centers, but rather their superior, because it had the power to stop all research activities if ethical standards for research were not met.
Language played a significant role in the divisions between local and university communities. The dominant language of meetings was that of science, and time constraints limited the translation needed to ensure that all participants could take part equally in the process. Members of the local community—whose chief asset was their perspective from outside the university–research complex—had little opportunity to introduce their own language or ideas into the work groups.
Science funding depended on successful grant applications. Despite the university's healthy endowment, continued wealth required a steady output of research presentations and scientific publications. Building community partnerships was time consuming because it required bridging social divisions. This use of time was not perceived as “production.” Indeed, in the course of our observations of the larger context, we met scientists who worked at a center that had superb community relations but who did not win a competitive funding renewal. One study participant used the term grant plantation to describe the drive for knowledge production disconnected from investment in social relationships.
Institutional Review Board
The IRB was charged with ensuring that research protocols were not conducted until they contained adequate protections for human participants. According to the IRB's procedure manual,
Membership is selected to assure appropriate diversity, including representation by multiple professions, multiple ethnic backgrounds, and both genders, and to include both scientific and non-scientific members.
During our study, the chair of the board we observed was responsible for recruiting new members and ensuring the diversity of the group.
University reviewers were senior research scientists or clinicians trained in a variety of disciplines. Lay reviewers had no formal affiliation with the university other than their position on the IRB. The number of community spokespersons was left to the discretion of the chair of the board. In general, what community they should represent was not specified. In certain cases—studies involving prisoners, for example—the IRB was required to include someone with specific ties to the population in question. The lay reviewers on the committee we studied had at least an undergraduate degree. They had full voting power.
The IRB we observed met every other week for at least 3 hours to review 15 to 20 proposals submitted by principal investigators. Proposals were submitted on a standard form that required a description of the study and its possible risks and benefits. Each protocol was carefully reviewed by 1 committee member, who then introduced it at a meeting. The IRB's decisions were limited to either approving a study or blocking it because of serious reservations about its appropriateness. Often, a proposal's minute technical details were unintelligible to the entire group, adding to the intense time pressure members felt. Lay reviewers reported in interviews that it was sometimes difficult to clarify technical issues.
Lay reviewers played a critical role in reviewing consent forms. The consent form is the embodiment of ethical research, and all parties took the lay reviewers' contribution seriously. We observed lengthy discussions of draft forms submitted by applicants. Yet the massive number of protocols to be reviewed subjected the process to severe rate distortion.
Patchwork Center
The Patchwork Center was founded by a researcher who was committed to full community participation in research and who endorsed community review at every step of the research process. The center sought to incorporate community views into its various environmental health studies, which shared facilities and equipment. At the Patchwork Center's inception, a prominent local organization with a reputation for activism and advocacy was invited to be its community adviser. The organization's expertise complemented the interest of Patchwork Center staff in conducting research in local neighborhoods. “Both sides need each other,” explained the director of the Patchwork Center. “The researchers obviously need the participation of the community or they won't have their research, and hopefully the community sees a positive outcome for the research that is being done.”
Yet despite the director's commitment to the principle of community consent and an awareness of the many ways in which a partnership could be mutually beneficial, the Patchwork Center investigators developed relationships with community spokespersons at their own discretion and—to some extent—their own peril. Researchers who worked closely with community partners found that it took a great deal of time, often time they could ill afford. A senior researcher told us that she and her colleagues realized the value of working with local groups but still found it difficult to accomplish. As she put it,
We are all way too busy. We have way too much to do. Especially here where if you don't get grants you are out on the street. Even though I have tenure, if I don't have grants I don't have a salary. There is a lot of pressure.
Continued funding required regular publication in peer-reviewed journals and demonstrable progress on a research agenda—goals easily derailed by the time needed to incorporate others' feedback into an investigator's work.
Thus, the level of participation by laypeople varied greatly across the center's projects, and there were no formal systems for incorporating input from local residents, study participants, or other interested parties. The predominant force in researchers' decision making was the publish-or-perish demands of academia. Some researchers solved time problems by eliminating time-consuming conversations, yet this increased inattentional blindness by narrowing the field of observation and discourse.
Cohort Center
A cohort study led to the formation of the Cohort Center. Because of federal funding imperatives to support community–university partnerships, the center included in its objectives a general mission of outreach to the neighborhood. Community representation was incorporated in 2 ways: via a community advisory board that met once a year and a local advocacy group that undertook some of the outreach work of the Cohort Center and participated in center leadership. Because these community representatives were brought into the process after the start of the cohort study, their review was limited to the later stages of the research process, such as suggesting future research topics and funding sources.
We observed 2 interactions that involved policy and ideology issues. The first involved the use of data describing a complex local history of segregation and disinvestment that had differentially affected the neighborhoods from which the cohort was drawn. A policy-informed geographic analysis based on these local histories proved to be a robust model for explaining the variation in key health outcomes from the cohort study. Notably, this analysis found that people of different ethnicities who resided in the same areas had similar health outcomes. Despite these findings, the center leadership (who were predominantly White) made race/ethnicity the primary axis of analysis, ignoring neighborhood characteristics.
To our knowledge, participating community spokespersons (drawn from communities in which non-White people predominated) did not object to the focus of the analysis. Our interviews and observations suggested that they may have lacked the time to fully review the center's academic work and the technical expertise to effectively critique it. After completion of the study, some community advisers and university affiliates stated in interviews that the community advisers had been misled into rubberstamping a project that had effectively used their neighborhoods as a tool for building the center's reputation for welcoming community input, an important factor in future grant applications. These critics charged that the community group's endorsement provided researchers with the appearance—but not the substance—of complying with funding agencies' goals of community–university collaboration.
We also observed the Cohort Center's dissemination of information to its study participants. Center staff created a newsletter that was deemed by the community advisory board to be too technical to be understood by its intended recipients. This problem was resolved by a team that included community and university members. The team worked with designers, a literacy expert, and focus groups of study participants to develop a newsletter that reported study findings in popular language. This challenging and time-consuming process attended to various parties' concerns about reading level, scientific accuracy, and the social context of study findings. This effort, which required great investments of resources by all parties, helped to build trust and a sense of ownership of the research across the community–university divide. It also produced a first-class publication that set a new standard for work at the university.
DISCUSSION
Our situation analysis suggests that community review of research is hampered by major obstacles to the flow of information. Although we focused on just 1 type of obstacle for each work group, important problems existed throughout the workspace in all 3 domains described by Wallace et al.: inattentional blindness, rate distortion, and policy and ideology. The context of the situation we analyzed comprised 3 groups with highly divergent agendas; limited resources of money, education, and time; and unresolved racially inequitable practices (both new acts of inequity and the continuing effects of previous acts).25 We observed that communication was inadequate and that contextual constraints made improvement in intergroup communication difficult.
These gaps in communication are important because errors made in ethical review arise from errors in communication. This relationship between communication and error is well established for other work groups—one example is the change-of-shift knowledge transfer between emergency department staff members.26
We compared 3 work groups, each of which had slightly different problems. For example, in the IRB, community spokespersons could vote—they had real power. But the existence of real power heightened the inequity that arose from their limited knowledge of science. Full engagement with communities solved the problem of incorporating local knowledge and oversight but increased the strain on researchers for whom time equaled productivity, and productivity (i.e., presentations and publications) equaled grants. Full participation ultimately threatened scientists' survival in a system geared toward rapid results. Changing the review model might merely transfer the weight of problem-solving from 1 domain to another.
The solution to the problem of community review must address the complex structural issues affecting communication among diverse groups: specifically, lack of time, lack of shared scientific knowledge, and lack of respect for community spokespersons' knowledge. We suggest 3 interventions. First, scientists, ethicists, and community representatives should be informed about one another's languages and perspectives. Universities should offer free training in science for members of their local communities and should hire community members or organizations to teach scientists about local issues. Second, the process should be slowed so that reviewers have adequate time to gather knowledge and make informed decisions. Third, challenges to policy and ideology should be accorded new respect, because they may be critical to good science.
How are such massive changes in the conduct of science to be implemented? Our findings offer a solution. As we came to appreciate in the course of our fieldwork, IRBs hold authority over research centers: public health and medical science cannot proceed without their approval. Historically, protections for human participants arose as a response to racist abuse by the Nazis and others. Today, IRBs have the authority and the moral imperative to act against the productivity pressures of the research system and for the good of research participants and their communities. Their efforts to create new paradigms for thorough communication could transform the process of collaboration between researchers and their communities.
Acknowledgments
We are grateful for the administrative and moral support of our colleagues at the Community Research Group and to the participants who agreed to be involved in the study.
Human Participant Protection
This study was approved by the New York State Psychiatric Institute institutional review board and the Western Institutional Review Board.
References
1. Weijer C. Protecting communities in research: philosophical and pragmatic challenges. Camb Q Healthc Ethics. 1999;8:501–513.
2. Strauss RP, Sengupta S, Quinn SC, et al. The role of community advisory boards: involving communities in the informed consent process. Am J Public Health. 2001;91(12):1938–1943.
3. Quinn SC. Ethics in public health research: protecting human subjects: the role of community advisory boards. Am J Public Health. 2004;94(6):918–922.
4. Wallace R. A global workspace perspective on mental disorders. Theor Biol Med Model. 2005;2:49.
5. Wallace R. Culture and inattentional blindness: a global workspace perspective. J Theor Biol. 2007;245(2):378–390.
6. Wallace R, Fullilove MT, Fullilove RE, Wallace DN III. Collective consciousness and its pathologies: understanding the failure of AIDS control and treatment in the United States. Theor Biol Med Model. 2007;4:10. doi:10.1186/1742-4682-4-10.
7. Corburn J. Street Science: Community Knowledge and Environmental Health Justice. Cambridge, MA: MIT Press; 2005.
8. Chichareon SB, Tassee S, Wootipoom V, Bucharat R, Harnprasertpong J. Situation analysis for management of abnormal Pap smears in lower southern Thailand. Asian Pac J Cancer Prev. 2005;6:286–294.
9. Chirenje ZM, Rusakaniko S, Kirumbe L, et al. Situation analysis for cervical cancer diagnosis and treatment in east, central and southern African countries. Bull World Health Organ. 2001;79:127–132.
10. DeMarco R. Conducting a Participatory Situation Analysis of Orphans and Vulnerable Children Affected by HIV/AIDS: Guidelines and Tools. Durham, NC: Family Health International; 2005.
11. Analysis of the Situation of the Children and Women in the Democratic People's Republic of Korea. Pyongyang, North Korea: UNICEF DPRK; October 2003.
12. Fullilove MT, Arias G, Nuñez M, et al. What did Ian tell God? School violence in East New York. In: Moore MH, Petrie C, Braga AA, McLaughlin BL, eds. Deadly Lessons: Understanding Lethal School Violence. Washington, DC: National Academies Press; 2003.
13. Mitchell JC. Case and situation analysis. Sociol Rev. 1983;31:187–211.
14. Argyle M. The experimental study of the basic features of the situation. In: Magnusson D, ed. Toward a Psychology of Situations: An Interactive Perspective. Hillsdale, NJ: Lawrence Erlbaum Associates; 1981.
15. Barker RG. Ecological Psychology. Palo Alto, CA: Stanford University Press; 1968.
16. Goffman E. Frame Analysis. Cambridge, MA: Harvard University Press; 1974.
17. Thomas WI, Thomas DS. The Child in America. New York, NY: Alfred A. Knopf; 1928.
18. Bronfenbrenner U. The Ecology of Human Development: Experiments by Nature and Design. Cambridge, MA: Harvard University Press; 1979.
19. Engel GL. The clinical application of the biopsychosocial model. Am J Psychiatry. 1980;137:535–544.
20. Leighton AH. My Name Is Legion: Foundations for a Theory of Man in Relation to Culture. Vol 1. New York, NY: Basic Books; 1959.
21. Vygotsky LS. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press; 1978.
22. Wallace RM, Fullilove MT, Flisher AJ. AIDS, violence and behavioral coding: information theory, risk behavior and dynamic process on core-group sociogeographic networks. Soc Sci Med. 1996;43(3):339–352.
23. Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. New York, NY: Aldine de Gruyter; 1967.
24. Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Newbury Park, CA: Sage Publications; 1994.
25. Wing S. Social responsibility and research ethics in community-driven studies of industrialized hog production. Environ Health Perspect. 2002;110:437–444.
26. Patel VL, Kaufman DR, Arocha JF. Emerging paradigms of cognition in medical decision-making. J Biomed Inform. 2002;35:52–75.