Author manuscript; available in PMC: 2014 Aug 1.
Published in final edited form as: Sociol Forum (Randolph N J). 2013 Sep;28(3). doi: 10.1111/socf.12034

MOMENTS OF UNCERTAINTY: ETHICAL CONSIDERATIONS AND EMERGING CONTAMINANTS

Alissa Cordner 1, Phil Brown 2
PMCID: PMC3829201  NIHMSID: NIHMS522890  PMID: 24249964

Abstract

Science on emerging environmental health threats involves numerous ethical concerns related to scientific uncertainty about conducting, interpreting, communicating, and acting upon research findings, but the connections between ethical decision making and scientific uncertainty are under-studied in sociology. Under conditions of scientific uncertainty, researcher conduct is not fully prescribed by formal ethical codes of conduct, increasing the importance of ethical reflection by researchers, conflicts over research conduct, and reliance on informal ethical standards. This paper draws on in-depth interviews with scientists, regulators, activists, industry representatives, and fire safety experts to explore ethical considerations of moments of uncertainty using a case study of flame retardants, chemicals widely used in consumer products with potential negative health and environmental impacts. We focus on the uncertainty that arises in measuring people’s exposure to these chemicals through testing of their personal environments or bodies. We identify four sources of ethical concerns relevant to scientific uncertainty: 1) choosing research questions or methods, 2) interpreting scientific results, 3) communicating results to multiple publics, and 4) applying results for policy-making. This research offers lessons about professional conduct under conditions of uncertainty, ethical research practice, democratization of scientific knowledge, and science’s impact on policy.

Keywords: Ethics, sociology of science, risk and uncertainty, emerging contaminants, biomonitoring

INTRODUCTION

How do scientists make ethical decisions in the face of scientific uncertainty? What are the consequences of scientific uncertainty for research on policy-relevant topics? Sociological research has revealed many social, institutional, and political influences involved in scientific practices, consensus formation, the use of science in non-scientific spheres, and scientist identities (Frickel and Moore 2006; Gieryn 1983; Kuhn 1970; Shwed and Bearman 2010). However, the ethical issues involved in uncertain, emerging, and policy-relevant areas of research remain understudied.

Researchers and others who use scientific data in emerging or policy-relevant fields face numerous ethical uncertainties that arise because formal ethical guidelines and codes of conduct develop more slowly than do scientific advances, and because scientific research on modern sources of risk is often unavoidably uncertain. These scientific and ethical uncertainties are not necessarily problems; they can lead to advances in research methods, scientific understanding, and practical applications of science. However, typical approaches to uncertainty view it as a singular feature of a situation, when in fact, it is more complex. We identify four moments of uncertainty related to research in emerging or policy relevant scientific fields: 1) choosing research questions or methods, 2) interpreting scientific results, 3) communicating results to multiple publics, and 4) applying results for policy-making. In these moments of uncertainty, scientists find that traditional approaches, protocols, and paradigms neither adequately describe the current state of affairs nor provide clear guidelines for action. What scientists formerly did by pattern or custom no longer suffices, and in order to conduct what they view as ethical research, they may sometimes find themselves constrained in action, unsupported by existing ethical codes and standards, and in conflict with what is deemed acceptable practice by the larger research community.

As a case study, we examine environmental health controversies related to testing human bodies and personal environments for the presence of chemicals. We focus on stakeholders in academia, government, industry, and social movements who conduct their own or use others’ research on exposure to flame retardant chemicals. Flame retardants are widely used in consumer and household products, and are a type of emerging contaminant, chemicals for which little data is available and links to health are uncertain yet are of potential concern. Exposure research of this type is ideal for the study of scientific uncertainty because it is rapidly advancing and is often policy-relevant, and thus blurs the supposed boundary between science and policy.

Our analysis fills a gap in the sociological literature by showing how ethics, risk, and uncertainty are tied together, and are central to the entirety of scientific research as it is practiced by scientists and studied by sociologists, Science and Technology Studies (STS) scholars, and others. Our example provides lessons about responsible conduct of research, ethical communication of findings, and how science can be used to influence policy for a wide range of contaminants and other health threats. Hence our analysis has important implications for scholars of social movements, the embodied health experience, health behavior, and policy-making concerning uncertain phenomena.

BACKGROUND

Science and Ethics

Social studies of science recognize the breadth of methods and disciplines captured under the label ‘science’, and the powerful historical legacy of value-free research, in which scientific authority rests on science being seen as “value-free and politically neutral” (Kinchy and Kleinman 2003, 380). Indeed, disinterestedness was long ago identified as one of Merton’s institutional mores of science (Merton 1942). However, many sociologists of science and STS scholars have argued that science is as socially constructed as it is empirically based, because science is done by people, takes place within a social context, and involves the constant negotiation of boundaries between science and non-science (Gieryn 1983; Jasanoff 1987). Rather than being the gradual and apolitical accumulation of facts, superior findings, and incontrovertible truths, science develops through paradigm shifts (Kuhn, 1970), societal patterns of consensus formation (Shwed and Bearman 2010), scientific/intellectual movements (Frickel and Gross 2005), and seemingly non-scientific settling of related controversies (Latour 1987) that govern knowledge production. Additionally, science is inherently political and is influenced by networks, institutions, and power structures (Frickel and Moore 2006; Kleinman 2003; Moore 2008).

Scientific objectivity, autonomy, and neutrality are challenged by two conditions pertinent to the study of emerging contaminants: uncertainty and policy-relevance. Accusations of uncertainty decrease the authority of the science in question (Jasanoff 1987; Moore 1996), moving it from the realm of objective facts into areas of interpretation and value judgments. This creates ethical dilemmas about how to interpret and communicate uncertain results. For instance, regarding the uncertainty about whether low doses of a chemical cause health effects, there are ethical decisions about whether and how to inform study participants who may be at risk. This shows that scientific uncertainty often involves ethical questions, not merely statistical or methodological ones. Policy-relevance is important because questions in regulatory, legal, and social movement spheres are increasingly asked and answered in scientific terms, through scientization (Kinchy 2010; Michaels and Monforton 2005; Morello-Frosch et al. 2006). Participation in policy-making can undermine scientific authority, creating ethical dilemmas for scientists about appropriate levels of policy engagement. From the other side, policy-relevant research inspires non-scientific actors, such as legislators and activists, to be interested in scientific findings and to appropriate those findings for multiple uses. For both scientists and others, this blurs the line between science and policy-making, forcing scientists to be more accountable to multiple publics.

These dilemmas represent the kinds of scientific issues that become the subject of scientific ethics – the guidelines for decision-making that help scientists conduct research that protects the rights, well-being, and autonomy of researchers and their participants, as well as publics affected by research. Ethics is a fruitful area of inquiry, ranging from theoretical investigations of self- and societal-governance (e.g., Rabinow 1997), to empirical studies of bioethics and the medical profession (e.g., Timmermans and Oh 2010). However, the ethical implications of scientific uncertainty have received little sociological attention. We are interested in ethics as guidance about appropriate and thoughtful research behavior in scientific areas that are both uncertain and policy-relevant. We term this approach reflexive research ethics (Cordner et al. 2012), which involves deliberate, continuous self-awareness and responsiveness at all research stages. This aligns with recent sociological attention to the importance of ethics at every stage of the research process (Blee and Currier 2011).

Ethics can be formal guidelines and governing bodies (e.g., professional codes and Institutional Review Boards, IRBs) and can also be informal (e.g., personal decision-making and research team discussions). Informal ethics can function at the individual level (e.g., individual researchers making decisions about their research practices) or research community level (e.g., widely-held support for a certain way of interpreting findings that has not been formally institutionalized). Institutional ethics requirements are laid out in overarching statements on ethical research, such as the international Helsinki Declaration on medical research ethics (WMA 2008 [1964]), the Belmont Report of the U.S. Department of Health and Human Services (HHS 1979), and the World Health Organization’s International Guidelines for Ethical Review of Epidemiological Studies and International Ethical Guidelines for Biomedical Research Involving Human Subjects (CIOMS 2002). Research ethics are often understood as static guides of professional conduct (Smith-Doerr 2006), equated with professional codes of conduct, or formalized through IRB standards for research integrity and protection of human subjects. But when scientific methods or interpretations of findings are subject to multiple readings, there are unlikely to be clear, universally-held ethical standards. Under such uncertainty, informal ethics become more salient because formal ethical guidelines have yet to be developed or universally accepted. This ideal-typical distinction between formal and informal ethics does not mean that we propose a normative preference for formal over informal ethics, nor that all ethical guidelines and practices fall neatly onto one side or the other.
In the course of a project, a researcher is likely to practice science in ways that easily conform to formal ethical guidelines (e.g., professional codes necessitating informed consent) and that also require more informal ethical decision-making (e.g., research group discussions about how to ‘give back’ to the community being studied).

Risk, Uncertainty, and Ethics

How are these ethical issues dealt with under conditions of risk and uncertainty? In the presence of uncertainty, it is difficult to develop concrete and universally applicable ethical guidelines shared across stakeholders because the conditions being responded to and the consequences of interventions may be unknowable. Risk society theorists argue that modern society produces technological risks that are scientific, incalculable, potentially catastrophic, and highly uncertain (Beck 1992; Beck 1999; Beck, Giddens, and Lash 1994; Giddens 1999). This unknowable but pervasive nature of modern technological risks leads to growing awareness of risk and risk consequences by the general public. STS scholars deal with this by looking at “undone science,” areas of unfunded or incomplete science that are identified as worthy by civil society, and “non-knowledge,” awareness of knowledge limitations (Frickel et al. 2010; Gross 2007; Hess 2009). While risk society theory has been highly influential in sociology, researchers have not emphasized ethical reflexivity as a key component of the risk society. Apart from risk society theory, the sociology and psychology of risk demonstrates that perceiving, interpreting, and acting upon risk involves social factors beyond the calculated risk (Bradbury 1989; Freudenburg and Pastor 1992; Kasperson and Kasperson 1996; Tierney 1999; Vandermoere 2008). Risk perception is influenced by factors other than quantifiable outcomes, like whether the risk involves dread and is unknown (Slovic 1987) or whether the topic inspires public outrage (Sandman 1989). Connecting risk communication with public participation in science shows how uncertainty is incorporated into assessments of ‘real’ risk, and highlights the importance of interactions between scientists and the public (Wynne, 1996).

Environmental sociologists have highlighted how probabilistic dangers are situated in societal structures and how risk perception and misperception are socially produced. Modern technological risks (e.g., toxic contamination from waste sites) differ from natural hazards (e.g., floods) in that they are human-caused, seen as inevitable, involve a high degree of uncertainty, are embodied but often undetectable, and inspire dread and helplessness. Early work in this field studied community responses to acute contamination (Brown and Mikkelsen 1990; Erikson 1976; Lerner 2010; Levine 1982), and recently expanded to study the experience of chronic, daily exposures that characterize modern life (Altman et al. 2008). This shift parallels a trajectory in environmental health sciences to study daily exposure to ubiquitous chemicals. Additionally, STS scholars encourage research on uncertainty to be attentive to how actors themselves define ethics in their day-to-day practices (Callon, Millo, and Muniesa 2007). Our work contributes to this direction by linking scientific practices and research ethics with this new attention to low-level, chronic environmental hazards.

The tradition of examining uncertainty in sociology, risk analysis, and other fields usually looks at uncertainty in the framework of the entirety of a situation. We argue instead that uncertainty occurs at various stages of the scientific process, with different issues, outcomes, and ethical decisions associated with those different stages. We base this on our analysis of recent concerns over biomonitoring and exposure research on flame retardant chemicals. In the two brief sections that follow, we examine these research areas, which combine to create an interesting setting for studying moments of uncertainty.

Biomonitoring, Results Communication, and Right-to-know

We develop our concept of moments of uncertainty through systematic analysis of the identifiable ethical concerns that arise in exposure research, which involves testing for chemicals in a variety of media (e.g., in household dust). Often this takes the form of biomonitoring, which measures the presence and accumulation of chemicals in the human body. Biomonitoring has long been conducted by academic researchers, the federal government (e.g., CDC’s biennial national biomonitoring program of its National Health and Nutrition Examination Survey, NHANES (Centers for Disease Control and Prevention 2009)), state governments (e.g., the California Biomonitoring Program), and by industry to test for occupational exposures. Two developments have made biomonitoring a burgeoning area. First, advocacy groups have begun to conduct advocacy biomonitoring, or testing people for chemicals with the goal of influencing policy, sometimes in partnership with academic researchers (Morello-Frosch et al. 2009). Second, ever-advancing analytical chemistry techniques allow researchers to test for greater numbers of chemicals at smaller and smaller concentrations, as low as parts per trillion. Similarities exist in the field of genetics research, where rapid scientific advances have regularly outpaced researchers’ ability to fully interpret results and society’s ability to understand the social significance of the research (Bearman 2008; Featherstone et al. 2006).

Biomonitoring offers an excellent test case for researchers’ ability to communicate relevant findings and risks to “multiple publics” (Burawoy 2005; Nyden, Hossfeld, and Nyden 2012; Sellnow et al. 2009). Biomonitoring can be used alongside toxicology, to investigate dose-response questions, and epidemiology, to investigate health outcomes. However, scientific uncertainty is inevitable in exposure or epidemiological work on chemical hazards for many reasons, including: individual exposure pathways are often impossible to identify; many of the suspected health effects of chemicals take decades if not generations to emerge; people are exposed to hundreds of chemicals on a daily basis, which may act in synergistic, antagonistic, or cumulative ways; and chemical exposures interact with other individual factors, like genetics, to influence potential health outcomes. Human experimentation is severely limited because of the ethical issues involved with testing potential toxicity on people; hence, studies linking human chemical exposure and health outcomes are usually observational, case-control, or cohort studies, rather than experimental. All these factors combine to cloud an issue like the hazards of flame retardants with scientific uncertainty, making ethical considerations particularly salient.

Biomonitoring data can provide individuals with knowledge about their personal body burden, their individual level of chemical contamination. This intimate form of environmental contamination knowledge can create an exposure experience that combines their personal awareness of exposure and understanding of relevant scientific findings with the larger social construction of that exposure (Altman et al. 2008). Biomonitoring data brings up an ethical right-to-know issue, a pressing ethical concern for biomonitoring researchers, and one that grows out of existing policies requiring businesses to report their use of chemicals through the Toxics Release Inventory (Brody et al. 2007). Although most researchers and public health officials agree that biomonitoring results should be communicated when exposure levels can be linked to predicted health outcomes (as is the case with lead or mercury exposure and developmental deficiencies), there are competing professional ideas about when and how results should be communicated to research participants when uncertainty exists about how a personal body burden translates into individual health outcomes. This requires researchers to be thoughtful and deliberate when they communicate individual-level results to participants.

With the growing importance of biomonitoring in science, policy, regulation, and public perception, it is critical to understand implications for study participants, communities, healthcare providers, and policymakers. Risk communication literature provides some guidance on how to effectively report results in ways that build trust and understanding (National Research Council 1989). Yet there is little guidance for scientists on the ethical dimensions of decisions about when, what, and how to effectively report individual exposure data to study participants. The National Academy of Sciences’ (2006) report on biomonitoring highlights new challenges about how to interpret, report, and act on results that only partially elucidate links between environmental chemicals and health, and specifically calls for empirical investigation of the experiences of participants in biomonitoring research. Other agencies, including the European Commission (Bauer 2008) and the U.S. Environmental Protection Agency (U.S. EPA National Exposure Research Laboratory 2008), also identify the need for ethical guidance on reporting exposure measurements to participants. Significantly, the new California biomonitoring program, the nation’s first state-level program, has a legislative mandate to report data back to participants.

Flame Retardant Chemicals

Flame retardants are a particularly interesting example of the uncertainty surrounding biomonitoring and exposure research because they are ubiquitous and unavoidable, many are loosely regulated, and health guidelines on safe exposure levels rarely exist. Additionally, they raise concerns about potential trade-offs between fire safety and environmental or health outcomes. The amount of research on flame retardants specifically and biomonitoring generally has greatly increased in the past decade, pushing these areas to the forefront of environmental health science (an overview of this history can be found in Brown and Cordner 2011). However, the question of whether flame retardants represent a risk to public health that warrants further regulation remains controversial because of uncertainties about fire danger, chemical hazards, public exposure, health and environmental outcomes, and replacement chemicals.

Chemical flame retardant manufacture began in the 1960s (MacGillivray, Alcock, and Busby 2011), and their use in the U.S. is driven by fire safety and flammability regulations that require products to meet minimum performance standards. Flame retardant chemicals are widely used as additives to consumer and household products, including electronics, car interiors, insulating foams, and carpeting, in order to slow combustion. Many current-use flame retardants are chemically similar in form, function, and history of use to those banned because of health concerns (Blum 2007; Hooper and McDonald 2000).

People are exposed to flame retardants through household dust, physical contact, ingestion, smoke, and contaminated air (Betts 2004). Flame retardant chemicals are ubiquitous and some formulations are rapidly bioaccumulating in the environment, in wildlife, and in people (Betts 2009; Hites 2004; Rudel and Perovich 2009). Early research on a class of flame retardants, the polybrominated diphenyl ethers (PBDEs), came from European biomonitoring and environmental contamination studies (Meironyte, Noren, and Bergman 1999). Swedish researchers began conducting biomonitoring of breast milk in the 1970s, and documented a 50-fold increase in PBDE levels by 1998 (Eckley and Selin 2004). PBDEs have been shown to be endocrine disruptors, toxic in laboratory animal studies, and potential risk factors for autism spectrum disorders (Messer 2010). Although most PBDEs are no longer manufactured (for reasons we explain below), exposure remains widespread because they were largely used in consumer products (like furniture or electronics) with long product lifecycles. Recent observational epidemiological research has concluded that PBDE exposure is associated with adverse neuro-developmental outcomes in children (Herbstman et al. 2010; cf. Roze et al. 2009), male reproductive disorders (Meeker and Stapleton 2010), decreased fecundity in women (Harley et al. 2010), and lower thyroid hormone levels during pregnancy (Chevrier et al. 2010).

Several types of flame retardants were regulated in the 1970s following dramatic episodes of contamination or exposure related to tris flame retardants in children’s pajamas (Abelson 1977; Blum and Ames 1977; Consumer Products Safety Commission 1977) and the poisoning of livestock in Michigan by polybrominated biphenyl flame retardants (Egginton 2009 [1980]; Reich 1983). Recently, PBDEs have been banned at the state and federal levels (NCSL 2011; U.S. EPA 2006; U.S. EPA 2009). The international use of flame retardants continues to grow (Fink et al. 2008) and chemical companies are developing numerous replacement chemicals.

With this flame retardant example, we move beyond the current scholarship on ethics and scientific uncertainty by arguing that uncertainty is complex, not a singular feature of a situation. In the following analysis, we identify these multiple moments of ethical uncertainty, and show how they arise partly because formal ethical guidelines and codes of conduct often develop more slowly than do scientific advances in new areas of inquiry, and partly because scientific research on modern technological risks is often unavoidably uncertain.

METHODS

This paper is part of a larger project on the social implications of flame retardant chemicals. This analysis draws on interviews with 116 respondents, including scientists, regulators and legislators, flame retardant industry representatives, supply chain representatives, fire safety experts, and activists from environmental and health social movement organizations.1 Potential respondents were identified through public flame retardant work, such as publications on flame retardant topics, references or quotations in the media, or the involvement of their advocacy organizations in flame retardant campaigns. Additional respondents were identified by snowball sampling from referrals in interviews. All respondents were recruited with emails and follow-up phone calls.

Interviews were conducted around the U.S. between June 2009 and December 2011 in person whenever possible, with 20 interviews conducted over the phone. Interviews were semi-structured, and lasted approximately 60 minutes. Questions touched on broad topic areas including: the respondent’s professional trajectory, their work on flame retardant chemicals, the relationship between activism and science, the chemical industry, biomonitoring and other exposure research, risk and hazard assessment, and the regulation of environmental hazards. Interviews were recorded and transcribed with all identifying information removed to protect confidentiality. Transcripts were analyzed in NVivo 8.0, a program for qualitative data entry, coding, sorting and retrieval. Data coding was done in an iterative fashion, with additional readings of the transcripts leading to additional codes as more was learned from the materials. This paper also draws on a variety of primary and secondary sources, including published scientific research papers, conference presentations, and observations of social movement organizations and state and federal regulatory agencies dealing with flame retardant regulation.

FINDINGS

The scientific research process involves a continuum of decisions and practices that we have divided into four general moments of uncertainty in areas of emerging or policy-relevant science: 1) choosing research questions or methods, 2) interpreting scientific results, 3) communicating results to multiple publics, and 4) applying results for policy-making. For each moment of uncertainty we identify ethical issues that emerge in response to uncertainty, with the understanding that formal ethical guidelines develop more slowly than do new forms of science. These moments are similar to our previous framework of “doing, interpreting, acting,” a cyclical process of knowledge-production in areas of controversial science (Brown et al. 2006). That model reflected challenges to the scientific, medical, and regulatory communities concerning contested environmental illnesses that involved differing opinions on environmental causation (Brown et al. 2006). We extend this framework to be more representative of the ethical dilemmas discussed by respondents in this current study by distinguishing between communicating findings and acting on findings for policy-making, two moments that would have fallen under “acting” in the previous framework. This distinction between communicating findings to multiple publics and using findings for policy-making is particularly important in policy-relevant and emerging research because participants and/or communities are quite different publics from policy-makers, and the public and media spheres differ substantially from the regulatory sphere. Further, acting with regard to policy may involve ethical concerns defined more by the formal roles of scientists. Communicating to participants and communities is laden with additional ethical concerns that affect people quite directly and individually. These moments may be simply occasions that require researchers to make a significant, even difficult, decision.
Or, they may be times when the uncertainty of results, the policy-relevance of the research, or the input from non-scientific stakeholders further challenges traditional notions of scientific autonomy and objectivity. While moments of uncertainty can be considered landmarks or stations along the pathway of a paradigm shift (Kuhn 1970), they may also exist without a paradigm shift, simply being critical points at which ethical concerns present themselves and must be addressed. At these moments, scientists must act in new ways, personally and collectively, and deliberate action regarding research ethics is required to move researchers beyond the ethical guidelines of formal codes of conduct.

Uncertainty 1: Choosing Research Questions and Methods

The first area of uncertainty that we identify in areas of rapidly expanding or policy-relevant research relates to the production of scientific knowledge, including choice of research questions, methods, or practices. This moment of uncertainty leads to ethical tensions that can remain unresolved if formal ethical guidelines lag behind the development of novel methods or do not adequately prepare researchers or practitioners to deal with the relevance of findings for non-scientific purposes. Ethical questions that can emerge regarding the production of policy-relevant research or the development of novel methods or scientific practices include: How should research questions be chosen? What methods are ethically appropriate? How should the ethics of newly developed methods be evaluated? What are the diverse roles of scientific organizations, professional associations, government agencies, and IRBs?

Ethical concerns have emerged regarding testing breast milk for chemicals, a well-established research method that can now be used to test for a larger number of chemicals and much lower levels of chemical exposure. Breast milk is often a preferred body fluid for biomonitoring for persistent organic pollutants, including many flame retardants, because these chemicals are fat-soluble and therefore accumulate in breast milk (Fenton et al. 2005). Also, collection of breast milk can be done by the woman herself and is considered by researchers to be less invasive than methods like drawing vials of blood. Researchers and the public are also interested in breast milk because of its important role in early childhood development.

Breast milk biomonitoring can be controversial because of concerns that the reporting-back of individual contamination levels may discourage breastfeeding (Morello-Frosch et al. 2009), thus potentially balancing identifiable benefits to science against uncertain benefits and harms to participants. Breastfeeding provides important health benefits compared to formula feeding, even when the milk contains chemicals (Hooper and She 2003; Jorissen 2007), but some academics and IRBs worry that a woman’s knowledge of her body burden could impact her decision to breastfeed. Yet, evidence supporting these fears is based on hypothetical situations described in survey research (Geraghty et al. 2008) or is anecdotal (Gross-Loh 2004). A recent study of women who shared their breast milk for a biomonitoring project found that while some women became more aware of chemical exposure and some made lifestyle changes, none chose to stop breastfeeding or to breastfeed for a shorter period of time because of their study results (Wu et al. 2009).

As an example of this ethical concern around breast milk biomonitoring research, one scientist’s IRB initially denied a research protocol that would have tested for chemicals in breast milk on the grounds that the scientific benefits of learning about breast milk contamination did not outweigh the potential risks to women learning about personal contamination. That researcher looked to existing sociological and public health research showing this fear to be unjustified, and eventually did receive approval through another IRB to conduct their research. The researcher’s trajectory points to both the ethical conflicts researchers can have with institutions when designing novel research programs, and to the absence of consistent institutional ethical guidelines in such areas of research. Because different IRBs have vastly contradictory stances on such issues, the ethical choices are by no means clear. Thus researchers must be attuned to the many moments at which choices come up.

Many of our respondents said that biomonitoring in breast milk was important, for both scientific and advocacy reasons. Scientists emphasized the importance of fetal and infant development, especially the potential impact of toxic chemicals on children’s development. For example, an environmental health scientist told us, “it’s very important scientifically because babies are developing, and so breast milk exposures have the potential to have effects beyond the effects on adults.” This scientist went on to recognize that breast milk biomonitoring can play an important advocacy role, by attracting public attention: “from an advocacy point of view, it’s just heartbreaking to think that you’re downloading your exposure to your child.” Advocates echoed this perspective, saying that breast milk “carries weight” or “has oomph to it.”

Beyond the importance of breast milk biomonitoring, scientists acknowledged the ethical concern that such information needed to be carefully communicated and provided in combination with information on the benefits of breastfeeding. A researcher commented:

I think whenever you analyze anything in breast milk, whether it’s flame retardants or any other chemical, you really need to be careful that you don’t send a message about the values of breastfeeding, because there are so many health benefits that has been shown associated with breastfeeding. You … want to make sure that it’s conveyed to the mother or the participant in general that we’re simply using breast milk as a measure of their body burden.

A specialist on risk assessment who had been involved in a breast milk biomonitoring project showed how researchers could take voluntary action to ensure that their research was ethically conducted: “we also had a link back to the breastfeeding community through an advisory committee that gave us feedback on how best to do this in a way that didn’t discourage breastfeeding.” This individual’s description shows how scientists engaged in emerging science develop their own protocols that conform to informal standards of what ethical research should be, even when formal ethical guidelines do not exist. In this case, there is a growing consensus among researchers who study breast milk that it is ethically important to provide pro-lactation information and resources to participants, thus taking an affirmative standpoint against a restrictive approach that would argue that it is unethical to conduct breast milk monitoring or to report results to participants.

These examples highlight the finding that scientists are aware of ethical questions raised by new technologies that detect chemicals in breast milk, especially when full knowledge on the health impacts of those chemicals is unavailable or uncertain. Additionally, stakeholders place different emphasis on different stages of the research process, for example, scientists prioritizing high-quality scientific practices over the impacts on participants. This also demonstrates how researchers may adopt novel and controversial methods, such as the testing of breast milk, before formal ethical guidelines are developed or uniformly applied by disciplinary governing bodies or IRBs, forcing researchers to confront the uncertainty of their own research as well as the impact of research findings on participants and their behaviors. Thus the examination of this moment of uncertainty is comparable to other areas of science where research methods and practices develop more quickly than do formal ethical practices.

Uncertainty 2: Interpreting Scientific Results

The second moment of scientific uncertainty we identify involves the interpretation of scientific findings. Researchers working with novel methods, in relatively new or still controversial areas of research, or areas of research with high levels of policy-relevance must deal with questions including: How should findings in one area of research be connected to findings in another area of research? How should findings be interpreted when the topic remains contested or inconclusive? How should researchers interpret the risk of a chemical they are studying when their research project evaluates only hazard or exposure but not both? Even when strong scientific guidance exists in the form of internationally recognized criteria documents (e.g., WHO 2008), some uncertainty can remain. For example, the U.S. Environmental Protection Agency has published criteria for evaluating low, moderate, or high hazards for many toxicity endpoints but not for endocrine disruption, because that field of science is seen as too undeveloped and uncertain (OPPT 2011).

Stakeholders expressed several types of concerns related to interpreting individual-level biomonitoring results. A public health scientist shared concerns about interpreting extreme values:

For environmental exposures you have a log-normal [population distribution], so you have a lot of people low, and then it goes up, so there are people on the extreme ends who are orders of magnitude higher exposure than the median… What [does it mean] if you’re that person who represents the 99th percentile exposure or body burden?

This quote highlights scientists’ ethical concerns that they are unable to accurately decipher the meaning of individual-level exposure levels for the individuals’ well-being, as well as epistemological unease that they may lack full knowledge of the significance of exposure levels. Others expressed concerns about the accuracy or meaningfulness of one-time testing. An environmental health activist said of biomonitoring, “you’re taking a snapshot of what was found at that moment of collection. So some of these chemicals… you may not be exposed to them or they may not be able to be measured at a given time.” Thus scientists expressed concerns that biomonitoring data could be easily misinterpreted as providing evidence that exposure did not occur, and thus the researchers risk providing inaccurate information to study participants.

Respondents also noted the difficulty in connecting individual-level results to past exposures or future health outcomes. One biomonitoring participant said, “I have phthalates [endocrine-disrupting chemicals used in plastics] in me. And do I know when I was exposed to that particular phthalate?… If I have a health effect that might be related to phthalate exposure, I won’t be able to prove that my exposure in a certain time led to a certain health outcome.” This is particularly the case in light of the numerous other individual-level factors that contribute to health outcomes. An environmental health organizer said, “we definitely try and use them [biomonitoring data] with a grain of salt… Scientifically, just because person A has twice as many, you know, PCBs as person B, doesn’t mean that person A is going to have a negative health outcome related to that, as opposed to person B.” This highlights the uncertainty involved with connecting biomonitoring data to subsequent health outcomes. Scientists often simultaneously acknowledged the uncertainty and defended the scientific nature of their work. One defended science as being inherently interpretative: “different scientists will look at the data and can come to different conclusions, and… they’re both practicing good science.” The fact that different scientists can reach different interpretations did not discount the validity of the research or their ability to conduct ethically-grounded research.

Many respondents noted that biomonitoring studies often look at a small number of chemicals, compared to the thousands to which people are routinely exposed. One scientist told us, “scientists study each of these individually but the reality is we’re not exposed to them individually. We’re exposed to them like a cocktail.” In each of these examples, formal professional guidelines do not exist to tell researchers how to ethically interpret individual-level findings, since the ability to conduct biomonitoring research is outpacing scientists’ ability to link levels in the human body to exposure sources or health outcomes.

Interpretation ethics also involves the ways that scientists publish their results. When analyzing scientific papers, we can contrast the constant “further research is needed” caveat with the occasional claim for policy action. A 1977 Blum and Ames study identified a flame retardant used in children’s pajamas as a mutagen, and concluded with a recommendation that the compound not be used. Only rarely do current-day epidemiologists and exposure scientists discuss safer substitution, chemicals reform, or other policy implications in their scientific publications. Some scholars do write up their research in a way that highlights the potential for concern even in the absence of human health data, for example by measuring flame retardants in house dust and noting that they are identified carcinogens (Stapleton et al., 2009). In short, the uncertainty around biomonitoring data and exposure research on emerging contaminants raises concerns about ethical interpretation with regards to population-level distributions, exposure sources, and predicted health outcomes.

Uncertainty 3: Communicating Findings to Multiple Publics

Uncertainties related to interpreting scientific data connect directly to the next moment of uncertainty, regarding communication of research findings to multiple publics, including individual participants in the research project, larger groups or communities interested in the topic, and the general public. General ethical questions that arise include: Who owns community- or individual-level data? If results are to be shared with participants, how should results be presented and at what point along the research-dissemination timeline? Is it better to present the findings in technical terms or to simplify results to make them more comprehensible? How should results be tailored to specific audiences?

Formal ethical guidelines for communicating biomonitoring results often leave significant decisions up to the researchers’ discretion. For example, the International Society for Environmental Epidemiology (ISEE) has issued a 19-page “Ethics Guidelines” for the profession, in which they state, “All research findings and information vital to public health should be communicated to stakeholders in a timely, clearly, comprehensive, understandable, and responsible manner…” (ISEE 2012, section 3.5.1, emphasis added). What counts as “vital” public health information is left up to the individual scientist. Interview respondents identified this uncertainty as central to biomonitoring research. Three themes emerged in interviews about the communication and interpretation of individual-level results, related to how much information should be provided to participants, what type of information should be provided, and IRB challenges in sharing information.

First, some stakeholders expressed the belief that individual participants should have as much information as the researchers about their individual-level results. For example, a public health scientist who has conducted biomonitoring projects remarked:

I like the idea of reporting to people because I feel like it respects… them as adults, as people who can think for themselves. We shouldn’t baby them and treat them as they can’t handle the information, because the argument is, ‘oh, it’s going to cause unnecessary stress, people are just going to get all flustered and upset, you know, and that’s bad.’ I just don’t think that’s true… I’ve asked to analyze things in their blood. It doesn’t seem right that I don’t tell them the results.

This person’s informal ethical stance echoes formal ethical guidance, such as the National Research Council report on biomonitoring, which states that “subjects should be told (or offered the chance to be told) whatever researchers know (or do not know)” (National Academy of Sciences 2006, 101).

A second theme that emerged in our interviews was concern over what types of information should be communicated. Researchers talked about how individual participants’ results compared to others in the study or those in other studies, including national averages from NHANES. They also said they would combine information on personal-level exposures with information on how to reduce exposures. A public health scientist described how they might do this with PBDEs used in electronics: “We could look at their data and tell them, like, well it’s high and maybe it’s … driven by deca [a type of PBDE] and that’s really more from electronics. And maybe there’s, maybe they’ve got like a home office that is loaded to the gills with electronics.” However, many respondents emphasized that they often had little to tell people about how to reduce exposure to flame retardants like PBDEs, because the chemicals are so ubiquitously used. A public health scientist described, “so it [PBDEs] is in just about everything. It’s in your home, it could be work, it could be your car.” Similarly, an activist said, “you could be… doing everything possible to make sure that you’re not putting those chemicals into your body… but still everybody carries a body burden of chemicals in their body.” These perspectives highlight the difficulty in ending exposure through personal consumption measures, and thus the uncertainty that scientists face when offering advice or explanations to their research participants about exposure.

Finally, several scientists observed that formal ethical guidelines, such as those outlined by university IRBs, were inadequate to address the full range of ethical concerns raised by biomonitoring studies. We already described the case of a researcher initially denied IRB approval for breast milk testing. A different researcher was told by the IRB that their project could only use “passive report-back,” in which participants were told in a letter that if they wanted to get their results they could phone the researcher. This led to absolutely no requests for information. In contrast, comparable studies using personal report-back find that 95% of participants choose to receive their results (Brody et al., 2007).

Another scientist had worked with an IRB that sometimes allowed report-back of individual-level results and other times didn’t allow it: “they [the IRB] are totally inconsistent… It’s actually a very clear message that they don’t really know what they want to do, and it depends on who you get, on what day, and in what mood they are in.” This researcher found that there are no formal, universally-applied guidelines, even within a single institution’s IRB, about how report-back should happen. Thus the ethics of biomonitoring results communication are still unclear for some researchers and their institutional partners. Indeed, the NIH Institute best suited to offer guidance on this topic, the National Institute of Environmental Health Sciences, does not provide models for sharing data.

Several additional ethical considerations emerge regarding communicating results to publics beyond individual study participants. Many respondents said that biomonitoring was a type of science that is very appealing for the general public, because it simplifies a complex exposure pathway for the public and provides scientific documentation of exposure. Two main ethical concerns related to broadly communicating biomonitoring results were identified by respondents. First, many respondents described how difficult it is to distill complex and uncertain science into ‘sound bites’ that can be used by the media and understood by the general public. For example, a regulatory scientist argued that it was difficult to convey to the public that exposure does not equal harm: “They just don’t understand that a tiny [dose]… doesn’t necessarily mean harm.” Another environmental health activist commented, “you get a level of a particular chemical, and that doesn’t tell you anything about the biological impacts of that. But that’s getting too complicated for the general public.” Other respondents were concerned that the complexity of the science was often pushed aside when findings were broadly communicated. Because communicating findings is often seen as part of the scientific role, the uncertainty involved in communication can be a source of ethical ambiguity for researchers who must present accurate data, even when the full implications of those results may be unclear. A regulator said of biomonitoring researchers, “they’re using exquisitely sensitive instruments, you know, they’re down there at the level of detection. You’re hundreds of thousands of times away from what we’re seeing in toxicity in animals, but they’re calling it a big concern because we’re measuring it in your body… Big deal… there’s no effect, you’re down to the noise.” These perspectives emphasize the importance of interpreting biomonitoring data in a scientific context. This uncertainty can inspire ethical dilemmas for the researchers involved in interpreting and communicating results when health effects are a potential outcome and when the findings may be policy-relevant.

Second, biomonitoring data is frequently communicated outwards by non-scientists, including the media and activists, and hence researchers face decision points about how to communicate data to those actors. Although many researchers are uncomfortable speaking to reporters, some with whom we spoke emphasized the need to reach out to the public or the media. As one epidemiologist commented, “I was really interested in that balance of helping get science out to the public, but also helping them understand what the weaknesses are, what the flaws are, why we’re not always a hundred percent sure about things.” This highlights the need for careful ethical reflection regarding uncertainty surrounding the communication of findings to the public.

This uncertainty is relevant for other forms of scientific and medical communication. In a society where increasing technology makes more scientific data available and communicable, lay and expert stakeholders alike must re-assess their social roles and consider rights and responsibilities about sharing data. For example, researchers engaged in community-based participatory research often share results with communities prior to academic publication (Minkler 2004; Minkler and Wallerstein 2008). The strengths of scientific engagement and communication affect not only individual participants, but broad publics and larger communities of people. When researchers present themselves as a type of public intellectual, they raise the stakes of their expertise to a more visible level.

Uncertainty 4: Applying Findings for Policy-Making

Finally, significant uncertainty exists about translating scientific findings into environmental and health policy. As we noted above, regulations require scientific input and policies are often justified in scientific terms, but simultaneously science and policy-making are seen as separate social spheres. Ethical questions related to this moment of uncertainty include: What is the appropriate involvement of researchers in applying their research findings to policy decisions? Should scientists be involved in public debates? How should uncertainty or lack of data inform health and environmental regulation?

The potential of biomonitoring research to contribute to regulation remains unclear and contested. Stakeholders expressed different positions about how biomonitoring data should inform regulation and policy-making on environmental and health issues. In several interviews, respondents used the activist slogan that “the trespass is the harm.” For example, one told us, “the first question you got to ask is… why are they [chemicals] there? You know, they don’t belong there, there’s something wrong. And basically, they are the canary in the coal mine.” Others argued that the presence of a chemical in the human body is evidence that the product is not keeping people safe. For example, an organizer noted that biomonitoring “points out a flaw in the system.” A risk assessment scholar noted that exposure to a chemical represents a “design failure” of the product in which the chemical is used.

Biomonitoring can also be used to identify regulatory priorities by identifying chemicals of concern and uncovering high exposures. A regulator argued that biomonitoring data on PBDE flame retardants justified swift, precautionary regulation: because PBDE exposure follows a log-normal distribution, there are presumably “millions of women” with very high exposures, “so it becomes extremely important that you know what this compound does and that, you know, you start to limit its use, because… you cannot avoid exposure.” For this regulator, biomonitoring data could document exposure and demonstrate that an emerging contaminant should be treated as a public health priority.

Many other respondents noted that biomonitoring data was useful, but was only one piece of the puzzle. For example, several industry respondents reiterated that exposure measures provided no information about chemical hazards, or that “risk is a function of hazard and exposure.” Others said that, because all people carry a complex body burden of thousands of chemicals, it is not realistic to ban a chemical just because it appears in human tissues. A consultant said that biomonitoring data is “just one piece. I mean you can’t… start restricting hundreds of chemicals just because they appear in the human body.” This echoes published commentaries by exposure scientists and industry representatives that exposure data must be combined with toxicological and dose-response data (Bahadori et al. 2007).

Some scientists also expressed discomfort with what they saw as liberties being taken with interpretations of research in the pursuit of environmental and health policy. One said of environmental organizations, “it’s a political battle that they are fighting, and in a political battle, they are not really clinging tightly to the science… They are not interested in science in the same way that I am.” An industry consultant commented, “NGOs are great at raising alarms, but they’re not great at taking the science and actually using it properly.” But respondents also argued that even uncertain science could and should be used for policy-making. Some scientists also engage in boundary-work by distinguishing between their production of scientific data and the consumption of the data by others. For example, one scientist said, “my results are out there for other people to draw whatever conclusion they’re going to. Now, maybe I’m not comfortable with the conclusions they’ve drawn, but that’s OK. That’s their conclusions, not mine.” The ethical implication is that the use of science by non-scientists may be flawed, but that is not the responsibility of the scientist because it is not part of their professional role.

Individuals working on flame retardants recognized and grappled with two unresolved ethical questions about the use of biomonitoring data to inform policy. First, should scientists push for policy based on results from their biomonitoring studies? For example, the ISEE Ethics Guidance document encourages scientists to provide “objective research” for policy-making, but states, “There is no consensus among ISEE members as to whether environmental epidemiologists have a duty to go beyond objectively communicating or to become policy advocates” (ISEE 2012, section 1.4). This issue raises further ethical dilemmas mentioned above about scientific objectivity, credibility, and involvement in the regulatory or legislative spheres. Second, should biomonitoring research projects be designed specifically with policy goals in mind? Advocacy biomonitoring projects are currently conducted with the goal of influencing public opinion and regulation, but our interviews show that many academic and government researchers are hesitant to see direct engagement with policy-making as part of their role as scientists. The ethical tensions involved in direct policy application are connected to those of serving as a “public intellectual” by communicating findings with multiple publics. This is relevant to many forms of sociology, e.g. a demographer advising on contraceptive policy, a family sociologist acting on domestic violence legislation, or a development sociologist considering whether to advise a government on local democratic participatory initiatives.

CONCLUSION

In this paper, we show how scientific uncertainty is multifaceted by identifying four moments of uncertainty that emerge in the experience of stakeholders working on novel forms of scientific research in policy-relevant fields. These moments are related to the production, interpretation, communication, and use of scientific knowledge, and we identify corresponding ethical questions and ambiguities that must be considered by individual researchers and research communities working in these fields. Emerging science and technology research – whether biological, natural, or social science – will always include moments of uncertainty, which lead to ethical questions unasked in the past and unanswered by existing formal ethical guidelines. Research ethics are guided in an overarching way by universal ethical principles, such as the justice, beneficence, and respect for persons described in the Belmont Report (HHS 1979). In practice, however, ethical research conduct involves nuanced and competing interpretations of these principles. This is especially the case when science is developing, contested, and uncertain, and clear guidelines for researchers may not exist about how to conduct, interpret, and act on research and research findings. In these moments of uncertainty, informal research practices and scientific decision-making become increasingly important, because there are not clear lines to demarcate acceptable or ideal practices.

Research ethics can have profound impacts on research practices. By restricting certain practices and encouraging others, ethics drive research methods and the interpretation and communication of results. Once those practices are institutionalized as professional ethics codes or best practices they can cement professional styles and play a key role in governing relationships among professionals and between professionals and their clients. Addressing new ethical issues can lead researchers to develop new subfields, such as the communication of environmental monitoring results. On a societal level, ethical codes also create and sustain public acceptance and trust. In the absence of formal ethical guidelines to address emerging problems, scientists and other parties develop informal ideas and practices about how to conduct ethical research and what to do with their findings. On occasion, previously informal practices can become institutionalized, as is the case with required report-back of biomonitoring data in the California state biomonitoring program.

As part of this process, some stakeholders move to incorporate scientific uncertainty into what counts as legitimate science. The boundary around legitimate science determines what evidence and whose voices are admissible into the regulatory process. This troubled relationship between scientific boundaries and certainty is relevant for other policy-relevant fields, particularly when action may be needed in spite of continued uncertainty. Our research demonstrates that not only are there ethical consequences of acknowledging scientific uncertainty; there are also ethical consequences of treating science as though certainty is the goal. This finding is broadly relevant. For example, institutional action on climate change requires researchers, policy-makers, and the public to make decisions based on a weight-of-evidence rather than complete scientific certainty. Or, NIH expert panels can recommend medical interventions based on a preponderance of positive trials, despite lack of complete certainty.

Such moments of uncertainty create the need for social science investigation of research ethics – their development, controversies, and practices. It is easier to find moments of uncertainty when we are in the midst of an evolving situation, because the nuances of the situation can be lost in retrospective accounts and the tendency to impose consistent storylines. This calls for more attention to real-time sociological analysis that examines unfolding moments of uncertainty as challenges that may lead to paradigm shifts. A paradigm shift involves not only new scientific understanding, but also accompanying shifts in the social relations through which scientists and others engage with that new understanding. By analyzing a larger number of unfolding scientific controversies, we will be better able to examine the extent to which moments of uncertainty offer productive pathways for new solutions. Flame retardant chemicals and biomonitoring research on these chemicals are particularly important examples of ethics and scientific uncertainty because these chemicals have been at the cutting edge of exposure research, and because the causes, levels, and consequences of personal exposure are often highly uncertain. Biomonitoring is often directly policy-relevant, and as such is of interest to a range of non-scientific stakeholders, including industry actors, government officials, environmental and health activists, and the public. Thus our case study offers important lessons about professional conduct under conditions of uncertainty, ethical research practice, democratization of scientific knowledge, and links between transparency and public trust. It also contributes to the STS literature on expertise and public participation in science (Collins and Evans 2002), and work on other forms of data ownership, such as patient access to medical records (Delbanco et al., 2010).

Because biomonitoring research invokes questions about whether and how to communicate results to study participants, it raises ethical questions related to health communication and behavior, but also ethical questions of human rights, including the right to personal information (Bingham, 1983). As the individuals with whom we spoke negotiate the most appropriate actions to take in their professional lives – be that communicating the results of a scientific study to individual participants or deciding how research should be used to inform policy – they are confronted by the fact that formal ethical guidelines may not clearly demarcate appropriate professional action in areas of new and uncertain science. In dealing with the coupling of scientific uncertainty and ethical uncertainty, IRBs have a role to play, to encourage researchers to be thoughtful in how they design their research and communicate findings. However, some IRBs have erred on the side of communicating less instead of more information, thereby keeping findings from those most affected by them. This is yet another illustration of how inaction (and not just action) related to uncertain science has ethical implications.

The moments of uncertainty we have identified in this paper are relevant to social scientists’ research on data ownership, power relationships between researchers and participants, and knowledge translation to the public. For example, sociologists studying any policy-relevant issue must grapple with questions about how and when to share research findings with the communities they are studying, and whether to present their work only to academic audiences or also to develop outreach materials for regulators, politicians, the media, and non-academic researchers. These findings are relevant to other areas of uncertain and emerging science, such as nanotechnology (Sandler 2009), genetics and DNA research (Bearman 2008; Prainsack and Gurwitz 2007), and brain imaging (Booth et al. 2010).

Scientists also find their actions constrained by deeply institutionalized standards of objectivity and neutrality that may seem inappropriate in research areas that are simultaneously uncertain and highly policy-relevant. Individuals engaged in work on these types of issues must therefore also rely on informal ethical practices. These practices can be as individual and informal as deliberate personal reflexivity in the ethical practice of research – the reflexive research ethics we have written about elsewhere (Brown and Cordner 2011) – or can be institutionalized through research collaboratives and Community Advisory Boards.

Public sociology has re-invigorated earlier discussions of sociologists’ responsibility to society in terms of information sharing and public participation in science. Further, we view the development of professional ethics as a central component of the overall development of professions, a major facet of sociological inquiry. Because these ethical considerations center on scientific practices, risk assessment, and uncertainty, they offer insight into new developments in the sociology of risk. Beyond our interest in emerging science, any social process or institution will undoubtedly face multiple moments of uncertainty rather than a single one, and this analysis provides an approach to examining those interesting, crucial, and formative moments.

Thus the interaction of ethics and uncertainty has broad applicability to many forms of scientific and technological innovation. Such innovation brings with it multiple phenomena: it expands markets; it opens up new scientific research areas, some more bench-oriented and others more public health-oriented; and it requires new regulatory frameworks. In the process, scientific and technological innovation elicits input from the public, social movements, and advocates, which often creates contestation. The end result is that these dynamics combine into a major public policy arena.

Acknowledgements

This research is supported by the National Science Foundation, grant SES 0924241, and was partly developed under Environmental Protection Agency’s STAR Fellowship Assistance, grant FP-917119. It has not been formally reviewed by the National Science Foundation or the Environmental Protection Agency, and the views expressed are solely those of the authors. We are grateful to David Ciplet, Anne Figert, Leah Greenblum, Elizabeth Hoover, Tania Jenkins, Mercedes Lyson, Stephanie Malin, Bindu Panikkar, Sara Shostak, and Tyson Smith, as well as three anonymous Reviewers and the Editor, for helpful comments on drafts of this article. Finally, we express our sincere thanks to all those we have interviewed as part of this research.

Footnotes

1

Although we classify individuals according to professional categories, we recognize that these categories are fluid and not clear-cut. Some activists have scientific backgrounds; one environmental health activist, for example, has a PhD in genetics.

REFERENCES

  1. Abelson Philip H. The Tris Controversy. Science. 1977;197(4299):113. doi: 10.1126/science.197.4299.113.
  2. Altman Rebecca G., Morello-Frosch Rachel, Brody Julia G., Rudel Ruthann, Brown Phil, Averick Mara. Pollution Comes Home and Gets Personal: Women’s Experience of Household Chemical Exposure. Journal of Health & Social Behavior. 2008;49(4):417–435. doi: 10.1177/002214650804900404.
  3. Bahadori Tina, Phillips Richard, Money Chris, Quackenboss James, Clewell Harvey, Bus James, Robison Steven, Humphris Colin, Parekh Ami, Osborn Kimberly, Kauffman Rebecca. Making Sense of Human Biomonitoring Data: Findings and Recommendations of a Workshop. Journal of Exposure Science and Environmental Epidemiology. 2007;17:308–313. doi: 10.1038/sj.jes.7500581.
  4. Bauer Susanne. Societal and Ethical Issues in Human Biomonitoring: A View from Science Studies. Environmental Health. 2008;7(Supplement 1):S10. doi: 10.1186/1476-069X-7-S1-S10.
  5. Bearman Peter. Introduction: Exploring Genetics and Social Structure. American Journal of Sociology. 2008;114(S1):v–x.
  6. Beck Ulrich. World Risk Society. Polity Press; Malden, MA: 1999.
  7. Beck Ulrich. Risk Society: Towards a New Modernity. Sage Publications; London: 1992.
  8. Beck Ulrich, Giddens Anthony, Lash Scott. Reflexive Modernization: Politics, Tradition and Aesthetics in the Modern Social Order. Stanford University Press; Stanford, CA: 1994.
  9. Betts Kellyn. Glut of Data on “new” Flame Retardant Documents its Presence all Over the World. Environmental Science & Technology. 2009;43(2):236–237.
  10. Betts Kellyn. PBDEs and the Environmental Intervention Time Lag. Environmental Science & Technology. 2004;38(20):386A–387A. doi: 10.1021/es040647+.
  11. Blee Kathleen, Currier Ashley. Ethics Beyond the IRB: An Introductory Essay. Qualitative Sociology. 2011;34(3):401–413.
  12. Bingham Eula. The ‘Right-to-Know’ Movement. American Journal of Public Health. 1983;73(11):1302. doi: 10.2105/ajph.73.11.1302.
  13. Blum Arlene. The Fire Retardant Dilemma. Science. 2007;318(5848):194b–195. doi: 10.1126/science.318.5848.194b.
  14. Blum Arlene, Ames Bruce N. Flame-Retardant Additives as Possible Cancer Hazards. Science. 1977;195(4273):17–23. doi: 10.1126/science.831254.
  15. Booth TC, Jackson A, Wardlaw JM, Taylor SA, Waldman AD. Incidental Findings Found in “Healthy” Volunteers during Imaging Performed for Research: Current Legal and Ethical Implications. British Journal of Radiology. 2010;83(990):456–465. doi: 10.1259/bjr/15877332.
  16. Bradbury Judith. The Policy Implications of Differing Concepts of Risk. Science, Technology, & Human Values. 1989;14(4):380–399.
  17. Brody Julia, Morello-Frosch Rachel, Brown Phil, Rudel Ruthann, Altman Rebecca G., Frye Margaret, Osimo Cheryl, Perez Carla, Seryak Liesel. Is it Safe? New Ethics for Reporting Personal Exposures to Environmental Chemicals. American Journal of Public Health. 2007;97:1547–1554. doi: 10.2105/AJPH.2006.094813.
  18. Brown Phil, Cordner Alissa. Lessons Learned from Flame Retardant Use and Regulation Could Enhance Future Control of Potentially Hazardous Chemicals. Health Affairs. 2011;30(5):1–9. doi: 10.1377/hlthaff.2010.1228.
  19. Brown Phil, McCormick Sabrina, Mayer Brian, Zavestoski Stephen, Morello-Frosch Rachel, Gasior Altman Rebecca, Senier Laura. ‘A Lab of Our Own’: Environmental Causation of Breast Cancer and Challenges to the Dominant Epidemiological Paradigm. Science, Technology, and Human Values. 2006;31(5):499–536.
  20. Brown Phil, Mikkelsen Edwin J. No Safe Place: Toxic Waste, Leukemia, and Community Action. University of California Press; Berkeley, CA: 1990.
  21. Burawoy Michael. For Public Sociology. American Sociological Review. 2005;70(1):4–28.
  22. Callon Michel, Millo Yuval, Muniesa Fabian, editors. Market Devices. Blackwell; Malden, MA: 2007.
  23. Centers for Disease Control and Prevention. Fourth National Report on Human Exposure to Environmental Chemicals. CDC; Atlanta, GA: 2009.
  24. Chevrier Jonathan, Harley Kim, Bradman Asa, Gharbi Myriam, Sjödin Andreas, Eskenazi Brenda. Polybrominated Diphenylether (PBDE) Flame Retardants and Thyroid Hormone during Pregnancy. Environmental Health Perspectives. 2010;118(10):1444–1449. doi: 10.1289/ehp.1001905.
  25. CIOMS. International Ethical Guidelines for Biomedical Research Involving Human Subjects. CIOMS and WHO; Geneva: 2002.
  26. Collins HM, Evans Robert. The Third Wave of Science Studies: Studies of Expertise and Experience. Social Studies of Science. 2002;32(2):235–296.
  27. Consumer Products Safety Commission. News from CPSC: CPSC Bans TRIS-Treated Children’s Garments. Release # 77-030. Office of Information and Public Affairs; Washington, D.C.: 1977.
  28. Cordner Alissa, Ciplet David, Brown Phil, Morello-Frosch Rachel. Reflexive Research Ethics for Environmental Health and Justice. Social Movement Studies. 2012;11:161–176. doi: 10.1080/14742837.2012.664898.
  29. Delbanco Tom, Walker Jan, Darer Jonathan D., Elmore Joann G., Feldman Henry J., Leveille Suzanne G., Ralston James D., Ross Stephen E., Vodicka Elisabeth, Weber Valerie D. Open Notes: Doctors and Patients Signing On. Annals of Internal Medicine. 2010;153(2):121–125. doi: 10.7326/0003-4819-153-2-201007200-00008.
  30. Eckley Noelle, Selin Henrik. All Talk, Little Action: Precaution and European Chemicals Regulation. Journal of European Public Policy. 2004;11(1):78–105.
  31. Egginton Joyce. The Poisoning of Michigan. Norton; New York: 2009 (orig. 1980).
  32. Erikson Kai. Everything in its Path: Destruction of Community in the Buffalo Creek Flood. Simon and Schuster; New York: 1976.
  33. Featherstone Katie, Atkinson Paul, Bharadwaj Aditya, Clarke Angus. Risky Relations: Family, Kinship and the New Genetics. Oxford; New York: 2006.
  34. Fenton SE, Condon M, Ettinger AS, LaKind JS, Mason A, McDiarmid M, Qian Z, Selevan SG. Collection and use of Exposure Data from Human Milk Biomonitoring in the United States. Journal of Toxicology and Environmental Health. 2005;68(20):1691–1712. doi: 10.1080/15287390500225708.
  35. Fink U, Hajduk F, Wei Y, Mori H. Flame Retardants. SRI Consulting, Specialty Chemicals; 2008.
  36. Freudenburg William, Pastor Susan. Public Responses to Technological Risks: Toward a Sociological Perspective. The Sociological Quarterly. 1992;33(3):389–412.
  37. Frickel Scott, Gross Neil. A General Theory of Scientific/Intellectual Movements. American Sociological Review. 2005;70:204–232.
  38. Frickel Scott, Moore Kelly, editors. The New Political Sociology of Science. University of Wisconsin Press; Madison, WI: 2006.
  39. Frickel Scott, Gibbon Sahra, Howard Jeff, Kempner Joanna, Ottinger Gwen, Hess David. Undone Science: Charting Social Movement and Civil Society Challenges to Research Agenda Setting. Science, Technology, & Human Values. 2010;35(4):444–476. doi: 10.1177/0162243909345836.
  40. Geraghty S, Khoury J, Morrow A, Lanphear B. Reporting Individual Test Results of Environmental Chemicals in Breastmilk: Potential for Premature Weaning. Breastfeeding Medicine. 2008;3(4):207–213. doi: 10.1089/bfm.2008.0120.
  41. Giddens Anthony. Risk and Responsibility. The Modern Law Review. 1999;62(1):1–10.
  42. Gieryn Thomas. Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review. 1983;48(6):781–795.
  43. Gross Matthias. The Unknown in Process: Dynamic Connections of Ignorance, Non-Knowledge and Related Concepts. Current Sociology. 2007;55:742–759.
  44. Gross-Loh C. Breastfeeding, Biomonitoring, and the Media. Mothering. 2004;122:61–65.
  45. Harley Kim, Marks Amy, Chevrier Jonathan, Bradman Asa, Sjödin Andreas, Eskenazi Brenda. PBDE Concentrations in Women’s Serum and Fecundability. Environmental Health Perspectives. 2010;118(5):699–704. doi: 10.1289/ehp.0901450.
  46. Herbstman Julie, Sjödin Andreas, Kurzon Matthew, Lederman Sally, Jones Richard, Rauh Virginia, Needham Larry, Tang Deliang, Niedzwiecki Megan, Wang Richard, Perera Frederica. Prenatal Exposure to PBDEs and Neurodevelopment. Environmental Health Perspectives. 2010;118(5):712–719. doi: 10.1289/ehp.0901340.
  47. Hess David. The Potentials and Limitations of Civil Society Research: Getting Undone Science done. Sociological Inquiry. 2009;79(3):306–327.
  48. HHS. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Department of Health & Human Services; Washington, DC: 1979.
  49. Hites Ronald A. Polybrominated Diphenyl Ethers in the Environment and in People: A Meta-Analysis of Concentrations. Environmental Science & Technology. 2004;38(4):945–956. doi: 10.1021/es035082g.
  50. Hooper Kim, She J. Lessons from the Polybrominated Diphenyl Ethers (PBDEs): Precautionary Principle, Primary Prevention, and the Value of Community-Based Body-Burden Monitoring using Breast Milk. Environmental Health Perspectives. 2003;111:109–114. doi: 10.1289/ehp.5438.
  51. Hooper Kim, McDonald Thomas. PBDEs: An Emerging Environmental Challenge and another Reason for Breast-Milk Monitoring Programs. Environmental Health Perspectives. 2000;108(5):387–392. doi: 10.1289/ehp.00108387.
  52. ISEE. Ethics Guidelines for Environmental Epidemiologists. International Society for Environmental Epidemiology; 2012.
  53. Jasanoff Sheila S. Contested Boundaries in Policy-Relevant Science. Social Studies of Science. 1987;17(2):195–230.
  54. Jorissen Joanne. Literature Review: Outcomes Associated with Postnatal Exposure to Polychlorinated Biphenyls Via Breast Milk. Advances in Neonatal Care. 2007;7(5):230–237. doi: 10.1097/01.ANC.0000296630.03798.ba.
  55. Kasperson Roger, Kasperson Jeanne. The Social Amplification and Attenuation of Risk. Annals of the American Academy of Political and Social Science. 1996;545(1):95–105.
  56. Kinchy Abby J. Anti-Genetic Engineering Activism and Scientized Politics in the Case of ‘contaminated’ Mexican Maize. Agriculture and Human Values. 2010;27:505–517.
  57. Kinchy Abby J., Kleinman Daniel L. Organizing Credibility: Discursive and Organizational Orthodoxy on the Borders of Ecology and Politics. Social Studies of Science. 2003;33(6):869–896.
  58. Kleinman Daniel. Impure Culture: University Biology and the World of Commerce. University of Wisconsin Press; Madison, WI: 2003.
  59. Kuhn Thomas S. The Structure of Scientific Revolutions. University of Chicago Press; Chicago: 1970.
  60. Latour Bruno. Science in Action: How to Follow Scientists and Engineers through Society. Harvard University Press; Cambridge, MA: 1987.
  61. Lerner Steve. Sacrifice Zones: The Front Lines of Toxic Chemical Exposure in the United States. MIT Press; Cambridge, MA: 2010.
  62. Levine Adeline. Love Canal: Science, Politics, and People. Lexington Books; Lexington, MA: 1982.
  63. MacGillivray Brian H., Alcock Ruth, Busby Jerry. Is Risk-Based Regulation Feasible? The Case of Polybrominated Diphenyl Ethers. Risk Analysis. 2011;31(2):266–281. doi: 10.1111/j.1539-6924.2010.01500.x.
  64. Meeker John, Stapleton Heather. House Dust Concentrations of Organophosphate Flame Retardants in Relation to Hormone Levels and Semen Quality Parameters. Environmental Health Perspectives. 2010;118(3):318–323. doi: 10.1289/ehp.0901332.
  65. Meironyte D, Noren K, Bergman A. Analysis of PBDEs in Swedish Human Milk, 1972-1997. Journal of Toxicology and Environmental Health. 1999;58(6):329–341. doi: 10.1080/009841099157197.
  66. Merton Robert. Sociology of Science. University of Chicago Press; Chicago: 1942.
  67. Messer Anne. Mini-Review: Polybrominated Diphenyl Ether (PBDE) Flame Retardants as Potential Autism Risk Factors. Physiology & Behavior. 2010;100:245–249. doi: 10.1016/j.physbeh.2010.01.011.
  68. Michaels David, Monforton Celeste. Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment. American Journal of Public Health. 2005;95(S1):S39–48. doi: 10.2105/AJPH.2004.043059.
  69. Minkler Meredith. Ethical Challenges for the ‘Outside’ Researcher in Community-Based Participatory Research. Health Education & Behavior. 2004;31:684–697. doi: 10.1177/1090198104269566.
  70. Minkler Meredith, Wallerstein N. Community-Based Participatory Research for Health: From Process to Outcome. Jossey-Bass; San Francisco: 2008.
  71. Moore Kelly. Disrupting Science: Social Movements, American Scientists, and the Politics of the Military, 1945-1975. Princeton University Press; Princeton, NJ: 2008.
  72. Moore Kelly. Organizing Integrity: American Science and the Creation of Public Interest Organizations, 1955-1975. The American Journal of Sociology. 1996;101(6):1592–1627. doi: 10.1086/230868.
  73. Morello-Frosch Rachel, Brody Julia, Brown Phil, Altman Rebecca, Rudel Ruthann, Perez Carla. Toxic Ignorance and Right-to-Know in Biomonitoring Results Communication: A Survey of Scientists and Study Participants. Environmental Health. 2009;8(1):6. doi: 10.1186/1476-069X-8-6.
  74. Morello-Frosch Rachel, Zavestoski Stephen, Brown Phil, Altman Rebecca G., McCormick Sabrina, Mayer Brian. Embodied Health Movements: Responses to a ‘Scientized’ World. In: Frickel Scott, Moore Kelly, editors. The New Political Sociology of Science: Institutions, Networks, and Power. University of Wisconsin Press; Madison, WI: 2006. pp. 244–271.
  75. National Academy of Sciences. Human Biomonitoring for Environmental Chemicals. National Research Council; Washington, DC: 2006.
  76. National Conference of State Legislatures. State Regulation of PBDEs. February 2011. Retrieved July 14, 2011 (http://www.ncsl.org/?tabid=22145).
  77. National Research Council. Improving Risk Communication, Report of the Committee on Risk Perception and Communication. National Academy Press; Washington, D.C.: 1989.
  78. Nyden Philip, Hossfeld Leslie, Nyden Gwendolyn. Public Sociology: Research, Action, and Change. Sage Press; Los Angeles: 2012.
  79. Office of Pollution Prevention & Toxics. Design for the Environment Program Alternatives Assessment Criteria for Hazard Evaluation. U.S. EPA; Washington, DC: 2011.
  80. Prainsack Barbara, Gurwitz David. ‘Private Fears in Public Places?’ Ethical and Regulatory Concerns Regarding Human Genomic Databases. Personalized Medicine. 2007;4(4):447–452. doi: 10.2217/17410541.4.4.447.
  81. Rabinow Paul, editor. Ethics: Subjectivity and Truth (Essential Works of Foucault, 1954-1984). New Press; New York: 1997.
  82. Reich Michael R. Environmental Politics and Science: The Case of PBB Contamination in Michigan. American Journal of Public Health. 1983;73(3):302–313. doi: 10.2105/ajph.73.3.302.
  83. Roze Elise, Meijer Lisethe, Bakker Attie, Van Braeckel Koenraad, Sauer Pieter, Bos Arend. Prenatal Exposure to Organohalogens, Including Brominated Flame Retardants, Influences Motor, Cognitive, and Behavioral Performance at School Age. Environmental Health Perspectives. 2009;117(12):1953–1958. doi: 10.1289/ehp.0901015.
  84. Rudel Ruthann A., Perovich Laura J. Endocrine Disrupting Chemicals in Indoor and Outdoor Air. Atmospheric Environment. 2009;43(1):170–181. doi: 10.1016/j.atmosenv.2008.09.025.
  85. Sandler Ronald. Nanotechnology: The Social and Ethical Issues. Woodrow Wilson International Center for Scholars; Washington, D.C.: 2009.
  86. Sandman Peter. Hazard Versus Outrage in the Public Perception of Risk. In: Covello VT, McCallum D, Pavlova M, editors. Effective Risk Communication: The Role and Responsibility of Government and Nongovernment Organizations. Plenum Press; New York: 1989. pp. 45–49.
  87. Sellnow Timothy, Ulmer Robert, Seeger Matthew, Littlefield Robert. Multiple Audiences for Risk Messages. Food Microbiology and Food Safety. 2009;1:33–49.
  88. Shwed Uri, Bearman Peter. The Temporal Structure of Scientific Consensus Formation. American Sociological Review. 2010;75(6):817–840. doi: 10.1177/0003122410388488.
  89. Slovic Paul. Perception of Risk. Science. 1987;236:280–285. doi: 10.1126/science.3563507.
  90. Smith-Doerr Laurel. Learning to Reflect Or Deflect? U.S. Policies and Graduate Programs’ Ethics Trainings for Life Scientists. In: Frickel Scott, Moore Kelly, editors. New Political Sociology of Science. University of Wisconsin Press; Madison, WI: 2006. pp. 405–431.
  91. Stapleton Heather, Klosterhaus Susan, Eagle Sarah, Fuh Jennifer, Meeker John, Blum Arlene, Webster Thomas. Detection of Organophosphate Flame Retardants in Furniture Foam and U.S. House Dust. Environmental Science & Technology. 2009;43(19):7490–7495. doi: 10.1021/es9014019.
  92. Tierney Kathleen. Toward a Critical Sociology of Risk. Sociological Forum. 1999;14(2):215–242.
  93. Timmermans Stefan, Oh Hyeyoung. The Continued Social Transformation of the Medical Profession. Journal of Health and Social Behavior. 2010;51(S):S94–S106. doi: 10.1177/0022146510383500.
  94. U.S. EPA. DecaBDE Phase-Out Initiative. U.S. EPA; Washington, DC: December 30, 2009. Retrieved January 6, 2010 (http://www.epa.gov/oppt/existingchemicals/pubs/actionplans/deccadbe.html).
  95. U.S. EPA. Scientific and Ethical Approaches for Observational Exposure Studies. National Exposure Research Laboratory; 2008. 600/R-08/062.
  96. U.S. EPA. Polybrominated Diphenyl Ethers (PBDEs) Project Plan. U.S. EPA; March 2006.
  97. Vandermoere Frederic. Hazard Perception, Risk Perception, and the Need for Decontamination by Residents Exposed to Soil Pollution: The Role of Sustainability and the Limits of Expert Knowledge. Risk Analysis. 2008;28(2):387–398. doi: 10.1111/j.1539-6924.2008.01025.x.
  98. WHO. IARC Code of Good Scientific Practice. International Agency for Research on Cancer; Geneva, Switzerland: 2008.
  99. WMA. Declaration of Helsinki. World Medical Association; Helsinki, Finland: 2008 (orig. 1964).
  100. Wu Nerissa, McClean Michael, Brown Phil, Aschengrau Ann, Webster Thomas. Participant Experiences in a Breastmilk Biomonitoring Study: A Qualitative Assessment. Environmental Health. 2009;8(4). doi: 10.1186/1476-069X-8-4.
  101. Wynne Brian. May the Sheep Safely Graze? A Reflexive View of the Expert. In: Risk, Environment and Modernity. Sage Publications; London: 1996.
