
Scientific integrity issues in Environmental Toxicology and Chemistry: Improving research reproducibility, credibility, and transparency

Christopher A Mebane 1, John P Sumpter 2, Anne Fairbrother 3, Thomas P Augspurger 4, Timothy J Canfield 5, William L Goodfellow 6, Patrick D Guiney 7, Anne LeHuray 8, Lorraine Maltby 9, David B Mayfield 10, Michael J McLaughlin 11, Lisa S Ortego 12, Tamar Schlekat 13, Richard P Scroggins 14, Tim A Verslycke 15

Abstract

High-profile reports of detrimental scientific practices leading to retractions in the scientific literature contribute to a lack of trust in scientific experts. Although the bulk of these have been in the literature of other disciplines, environmental toxicology and chemistry are not free from problems. While we believe that egregious misconduct such as fraud, fabrication of data, or plagiarism is rare, scientific integrity is much broader than the absence of misconduct. We are concerned here with the more commonly encountered and nuanced issues such as poor reliability and bias. We review a range of topics including conflicts of interest, competing interests, some particularly challenging situations, reproducibility, bias, and other attributes of ecotoxicological studies that enhance or detract from scientific credibility. Our vision of scientific integrity encourages a self-correcting culture that promotes scientific rigor; relevant, reproducible research; transparency in competing interests, methods, and results; and education.

Keywords: Reproducibility, Bias, Transparency, Research integrity, Scientific integrity

INTRODUCTION

Large segments of society are distrustful of scientific and other experts. Some have suggested that we are in a culture in which reality is defined by the observer, objective facts do not change people’s minds, and facts that conflict with one’s beliefs are regarded as justifiably questionable (Campbell and Friesen 2015; Nichols 2017; Vosoughi et al. 2018). Science and scientists have been central to these debates, and the boundaries of science, policy, and politics may be indistinct. In a social climate skeptical of science, the easy availability of numerous reports of dubious scientific practices gives fodder to skeptics. Because environmental regulations on the use of chemicals and waste management rely heavily on the disciplines of ecotoxicology and chemistry, the integrity of the science is of utmost importance. Here we discuss scientific integrity in the applied environmental sciences, with a focus on ecotoxicology and on how the role and culture of the Society of Environmental Toxicology and Chemistry (SETAC) may influence such issues.

Science has long endured questionable science practices and a skeptical public. Galileo’s criticisms of prevailing beliefs resulted in his being forced to publicly recant his seminal work. In contrast, purported science “discoveries” such as Piltdown Man, canals on Mars, cold fusion, archaeoraptor, homeopathic water with memory, arsenic-based life, and many others have not stood the test of time (Gardner 1989; Schiermeier 2012). By 1954, Huff and Geis (1954) had illustrated how the presentation of scientific data could be manipulated to be technically accurate yet completely misleading. Are things worse now? Recent articles in both the scientific literature and popular print and broadcast venues paint a bleak picture of the status of science. One does not have to search hard to find plenty of published concerns about the credibility of science. These include overstated and unreliable results (Ioannidis 2005; Harris and Sumpter 2015; Henderson and Thomson 2017), conflicts of interest (McGarity and Wagner 2008; Stokstad 2012; Boone et al. 2014; Oreskes et al. 2015; Tollefson 2015), profound bias (Atkinson and Macdonald 2010; Bes-Rastrollo et al. 2014; Suter and Cormier 2015a, 2015b), suppression of results to protect financial interests (Wadman 1997; Wise 1997), deliberate misinformation campaigns as a public relations strategy for financial or ideological aims (Baba et al. 2005; McGarity and Wagner 2008; Gleick and 252 coauthors 2010; Oreskes and Conway 2011), political interference with or suppression of results from government scientists (Hutchings 1997; Stedeford 2007; Ogden 2016), self-promotion and sabotage of rivals in hypercompetitive settings (Martinson et al. 2005; Edwards and Roy 2016; Ross 2017), publication bias, peer review and authorship games (Young et al. 2008; Fanelli 2012; Callaway 2015), selective reporting of data or adjusting the questions to fit the data (Fraser et al. 2018), overhyped institutional press releases that are incommensurate with the actual science behind them (Cope and Allison 2009; Sumner et al. 2014), dodgy journals (Bohannon 2013), and dodgy conferences (Van Noorden 2014).

Such published concerns reasonably raise doubts about science and scientists and could even lead some to conclude that the contemporary system of science is broken. In writing this article, we attempt to address some prominent science integrity concerns in the context of environmental toxicology and chemistry. In our view, there is ample room for improvement within our discipline, but the science is not broken, and some criticisms are overstated. We do not pretend to have solutions that will overturn insidious pressures on scientists and funders for impressive results, nor do we claim some moral high ground that makes us immune from such pressures ourselves, or that all of our own works are above reproach. Our recommendations are pragmatic, not dogmatic. Our goal is to nudge practices and pressures on scientists to advance the science, while maintaining and improving credibility through transparency, ongoing review, and self-correction.

Many of the prominent science integrity controversies have been in the high-stakes biomedical discipline, and in response that discipline probably has done more self-evaluation and taken more steps toward best practices than most other disciplines. Results of self-reported, anonymous surveys of scientists, mostly in the biomedical fields, have not been reassuring. In a 2002 survey of early and midcareer scientists, 0.3% admitted to falsification of data, 6% to a failure to present conflicting evidence, and 16% to changing of study design, methodology, or results in response to funder pressure (Martinson et al. 2005). A subsequent meta-analysis of surveys suggested problems were more common, with close to 2% of scientists admitting to having been involved in serious misconduct, and more than 70% reporting that they personally knew of colleagues who committed less severe detrimental research practices (Fanelli 2009). Overt misconduct can occur in ecotoxicology just as in any discipline (Marshall 1983; Keith 2015; Enserink 2017; http://retractiondatabase.org, search term “toxicology”) and, when exposed, is universally condemned and, in many countries, career ending. In contrast, the ambiguous, more nuanced issues of science integrity that all of us are likely to experience in our careers require thoughtful consideration, not condemnation. With regard to the latter, we draw on remedies attempted in other disciplines to examine similar issues in ecotoxicology, focusing on SETAC.

WHAT IS “SCIENCE” IN THE CONTEXT OF SCIENTIFIC INTEGRITY?

Before we can discuss integrity in ecotoxicology and related environmental science fields, we must first distinguish what is meant by “science” in this context. Broadly speaking, environmental science includes the disciplines of biology, ecology, chemistry, physics, geology, limnology, mineralogy, marine studies, and atmospheric studies, that is, the study of the natural world and its interconnections. The applications of environmental science extend to agriculture, fisheries management, forestry, natural resource conservation, and chemicals management, all of which have associated multibillion-dollar industries and vocal environmental advocacy groups. The subdiscipline of environmental toxicology or ecotoxicology, pursued by SETAC scientists, studies in great detail how the natural world is influenced by chemicals, both natural and synthetic, introduced by human endeavors that are largely in pursuit of the production of desired goods and services (food, clean water, plastic products, metals, etc.). Because exposure to chemicals can have negative and sometimes unexpected consequences for people and the environment, a body of regulation has developed over the past century to control the kinds and amounts of allowable chemical exposures. Such regulations necessarily are based on scientific concepts such as Paracelsus’ directive that “the dose makes the poison” and physicochemical properties that influence the transport and fate of substances. Because of the complexity, inexactitude, and uncertainty of ecotoxicology and associated sciences, rulemaking often is subject to challenge, leading to accusations either of putting profit over people and the environment or of imposing unreasonably restrictive and burdensome requirements. Scientists are called upon to inform disputes based on their knowledge of underlying principles, or they enter the conversation through self-initiated, in-depth literature review and commentary. Only by conscientiously adhering to fundamental principles of the scientific method can environmental scientists maintain their integrity and continue to play a valid role in environmental policy and management.

WHAT IS “SCIENTIFIC INTEGRITY”?

Impeccable honesty is a fundamental tenet of science. When we read a paper, we might not agree with the conclusions, the authors’ interpretations of its implications or importance, or many other things, but we have to be confident that the procedures described were indeed followed and that all relevant data were shown, not just those that fit the hypothesis. As Goodstein (1995) put it: “There are, to be sure, minor deceptions in virtually all scientific papers, as there are in all other aspects of human life. For example, scientific papers typically describe investigations as they logically should have been done rather than as they actually were done. False steps, blind alleys and outright mistakes are usually omitted once the results are in and the whole experiment can be seen in proper perspective.” Indeed, no one wants to read the chronology of a study. However, if, for example, the omissions include unfruitful statistical fishing trips or anomalous data that were assumed to be erroneous because they did not fit expectations, such little omissions may bias both the story and the body of literature.

Various professional and governmental organizations have established policies and definitions prescribing research integrity, responsible conduct of science, or scientific integrity. These may include broad statements of attributes, such as the 6 values that the US National Academy of Sciences (NAS) considered most influential in shaping the norms that constitute research practices and relationships and the integrity of science: objectivity, honesty, openness, accountability, fairness, and stewardship (NAS 2017). More specific “research integrity” guidelines define appropriate expectations of individual researchers and their institutions and may be highly procedural. Protecting the privacy, rights, and safety of human research participants and animal welfare with institutional review board clearance requirements is a common element of research integrity guidelines. Academic research integrity guidelines have been established individually or in aggregate by research funders and individual institutions (Goodstein 1995; NRC 2002; Steneck 2006; ARC 2007; Resnik and Shamoo 2011; NRC-CNRC). In most countries, research institutions are usually responsible for investigating potential breaches of research integrity by their scientists, although this can create difficult conflicts of interest for the institution (Glanz and Armendariz 2017). A prominent recent exception is China, which announced reforms that no longer allow institutions to handle their own misconduct investigations (Cyranoski 2018).

Whether research integrity guidelines should best be defined narrowly or broadly has been an area of controversy. As of 2015, 22 of the world’s top 40 research countries had national research conduct policies, all of which included fabrication, falsification, and plagiarism (FFP), with some going further. In this context, “fabrication” is making up data; “falsification” includes manipulating studies or changing or omitting data such that the record does not accurately reflect the actual research; and “plagiarism” includes the appropriation of another person’s ideas, methods, results, or words without giving appropriate credit (ORI 2018). The Research Councils of the United Kingdom have a lengthy list of misdeeds, including FFP, misrepresentation, breach of duty of care, and improper dealing with allegations of misconduct, with many subcategories (NAS 2017). In contrast, from the 1980s to 2000, the National Science Foundation (US) had defined serious science misconduct broadly to include “…fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting and reporting research” (Goodstein 1995). The controversial part was the catchall phrase “practices that seriously deviate from those commonly accepted…” To the stewards of public science funds, such a catchall phrase was preferable to an itemized list of all potential avenues of mischief, yet it raised the specter of penalizing scientists who strayed too far from orthodox thought (Goodstein 1995). In 2000, the definition of research misconduct warranting debarment was narrowed to just “fabrication, falsification, or plagiarism in proposing, performing, or reporting research,” with lesser offenses classified as questionable research practices. Other misconduct was defined as “forms of unacceptable behavior that are clearly not unique to the conduct of science, although they may occur in the laboratory or research environment.” Yet only FFP research misconduct findings were subject to reporting requirements to federal science funding agencies, with questionable science practices or other misconduct handled locally (Resnik et al. 2015; NAS 2017).

In many countries, there is an active debate about whether a legal definition is appropriate for something that is really an academic judgment rather than a legal one. Denmark, for example, recently narrowed its broad definition of research misconduct to FFP alone following high-profile cases in which scientists succeeded in having their academic misconduct findings overturned in the courts. Yet if research conduct policies are considered “academic” and carry no legal weight, institutions may have difficulty enforcing policies, such as when deliberate intent must be shown and the researcher claims an “honest mistake.” For instance, the US Office of Research Integrity found that a tenured professor had committed research misconduct by inappropriately altering data in 5 images from 3 papers. Yet when the university sought to terminate her, she fought back, contesting the university’s procedures, and the university ultimately paid her US$100 000 to leave (Stern 2017). In private research, it is not obvious which scientific integrity concepts have the force of law. In an example from the United States, testimony on egregious breaches of scientific integrity norms (including faking credentials and selective publication of only favorable results) was disallowed in a court dispute between 2 private companies because there was no federal law on scientific integrity (Krimsky 2003).

Science is a human endeavor and the “other misconduct” that scientists may commit is diverse and may be horrific, such as bullying and abuse of power; taking advantage of students or subordinates; sexual coercion, assault, or harassment; misuse of funds; sabotage; specious whistleblowing or retaliation against valid whistleblowers; and poisoning coworkers (e.g., Gibbons 2014; Ghorayshi 2016; Else 2018; Extance 2018; other examples on http://retractionwatch.com). The exclusion of such malfeasance from “research misconduct” has been questioned. For example, a researcher who failed to meet her study objectives after being sabotaged by a rival argued that she was further penalized by being instructed not to divulge the reason for her study failures to her funders (Enserink 2014). In contrast, institutions often do go beyond the minimum “FFP” definition in their policies (Resnik et al. 2015), which has led to objections of conflation of egregious misconduct such as fraud with failure to comply with administrative requirements that did not compromise data validity (Couzin-Frankel 2017).

The US National Academy of Sciences (NAS 2017) recently argued that the definition of research misconduct as fabrication, falsification, or plagiarism was too narrow. In particular, it argued that questionable research practices are more than just “questionable”; they are clear violations of the fundamental tenets of research and deserve the less ambiguous label “detrimental.” The consensus detrimental research practices were as follows:

  1. Detrimental authorship practices that may not be considered misconduct, such as honorary authorship, demanding authorship in return for access to previously collected data or materials, or denying authorship to those who deserve to be designated as authors. (Here we think it is important to distinguish pairing a data reuse with an invitation to collaborate and share authorship from demanding authorship as a condition of data access [Duke and Porter 2013].)

  2. Not retaining or making data, code, or other information or materials underlying research results available as specified in institutional or sponsor policies, or standard practices in the field.

  3. Neglectful or exploitative supervision in research.

  4. Misleading statistical analysis that falls short of falsification.

  5. Inadequate institutional policies, procedures, or capacity to foster research integrity and address research misconduct allegations, and deficient implementation of policies and procedures.

  6. Abusive or irresponsible publication practices by journal editors and peer reviewers (NAS 2017).

The term “scientific integrity” is sometimes used synonymously with research integrity. However, in recent usage, the term has also encompassed the insulation of science from political interference, manipulation, or suppression (Doremus 2007; Douglas 2014). The term “scientific integrity” has been used in government science policy in the United States. There, scientific integrity guidelines were developed in an overarching sense that includes research integrity at the individual and institutional levels but were also intended to protect federal scientists from political interference. Political officials were not to alter or suppress scientific findings, and transparency was encouraged in the preparation of government-supported scientific research (Obama 2009; Stein and Eilperin 2010). The scientific integrity guidelines in the United States were followed by derivative policies intended to give substance to the transparency provisions, requiring open access to federally funded research articles and, more importantly, requiring archiving and public availability of the underlying raw data (Holdren 2013). These broad policies become more specific and procedural in government science agencies and have expanded into codes of scholarly and scientific conduct, such as a list of 19 principles for the US Department of the Interior (USDOI 2014).

We expect that the vast majority of scientists consider themselves to act with scientific integrity, self-defined in terms of honesty, transparency, and objectivity: sticking to the research question and avoiding bias in data interpretation (e.g., Shaw and Satalkar 2018). Yet most scientists will encounter ethically ambiguous situations. For instance, some may feel that they struggle to advance science against a rising tide of administrative requirements accompanied by declining support for science and increasing competition for funding. When does cutting through bureaucratic institutional requirements cross the line from commendable efficiency to violating research integrity rules? Using grant or project funds for unrelated purchases or conference travel? Should minor misbehaviors, such as posting one’s article on a website after signing a publication and copyright transfer agreement with the publisher agreeing not to do so, still be considered misbehaviors when done by many? When does cleaning data become cooking data, for example, when anomalous values are suppressed? There are many ethically ambiguous situations in which scientists may consider that doing the “right thing” (compliance with all rules) might need to be balanced with doing the “good thing,” especially when the welfare of others such as students or subordinates is involved (Johnson and Ecklund 2016).

To us, scientific integrity can be simplified to a culture of personal integrity plus a few profession-specific provisions of transparency and reproducibility. At their roots, these norms are those children are hopefully acculturated to in primary school: Tell the truth, and tell the whole truth (no data sanitizing or selective reporting, and report all conflicts); tell both sides of the story (avoid bias); do your own work (no plagiarism); read the book, not just the back cover, before writing your report (properly research and cite primary sources); show your work for full credit (transparency); practice makes perfect (rigor); share (publish your work and data in peer-reviewed outlets for collective learning); and listen (with humility and collegiality to the observations and suggestions of others). Finally, the golden rule, “do unto others as you would have them do unto you,” should resonate throughout the professional interactions of environmental scientists, especially in peer reviewing and data sharing. When encountering an inevitable science dispute, keep criticisms objective, constructive, and focused on the work and not the worker; do peer reviews of your rivals’ work as you would hope to receive reviews of your own; reward and recognize good behavior in science; and so on.

THE INTERESTED SCIENTIST: CONFLICTS OF INTEREST, COMPETING INTERESTS, AND BIAS

Although we would like to believe that outright fraud or deliberate campaigns to manipulate science are rare in the environmental sciences, at some points in their careers almost every practicing scientist must grapple with questions of conflicting or competing interests and must guard against bias in approaches and interpretation.

The term “conflicts of interest” is commonly defined narrowly, in terms of financial conflicts. One definition is “a set of conditions in which professional judgment concerning a primary interest tends to be or could be perceived to be unduly influenced by a secondary interest (such as financial gain).” More simply, a conflict of interest is any financial arrangement that compromises, has the capacity to compromise, or has the appearance of compromising trust (Krimsky 2003, 2007). The term “competing interests” is often used where nonfinancial factors compete with objectivity, such as allegiances, personal friendships or dislikes, career advancement, having taken public stances on an issue, or political, academic, ideological, or religious affiliations (PLoS Medicine Editors 2008; Nature Editors 2018b). Bias in study design or data interpretation may arise from either conflicts or competing interests and can be either overt or unrecognized by the scientist (Suter and Cormier 2015b).

Generally, the concern over conflicting or competing interests in science is that secondary interests such as financial gain or maintaining professional relationships compromise the primary interest of upholding scientific norms such as reporting data accurately and completely, interpreting data appropriately, and acknowledging value judgments or interpretive assumptions (Elliott 2014). Conflict of interest policies may be better developed in the biomedical fields than in the applied environmental sciences because the former often involve human participants and because of the strong financial ties between academia and the pharmaceutical industry (Tollefson 2015). For instance, if a research team is reporting on the efficacy of a medical device or a drug and they or their employers hold a patent or stand to gain financially from a positive report, then they clearly have a financial conflict of interest (Figure 1). Examples of financial conflicts of interest encountered in everyday life include the physician who recommends to the patient a medical procedure that would conveniently (and lucratively) be conducted in a private surgery center in which the physician is a part owner, or the financial advisor who receives commissions when clients are steered to financial products offered through their employer. Such arrangements do not mean that the patient’s care will suffer or that bad financial advice will be proffered, but these self-interests compete with the interests of those they serve (Cain et al. 2005).

Figure 1. Conflicts of interest in science arise when secondary interests such as financial gain or maintaining professional relationships compromise the primary interest of upholding scientific norms such as the objective design, conduct, and interpretation of studies and the open sharing of scientific discoveries to advance our collective learning (Reprinted with permission. © Benita Epstein.)

The mere existence of a potential conflict of interest should not alone throw results in doubt when it is disclosed and acknowledged appropriately. However, although most articles in the environmental sciences routinely disclose funding sources that could be perceived as potential conflicts of interest, major omissions have occurred (Oreskes et al. 2015; Ruff 2015; Tollefson 2015; Krimsky and Gillam 2018; McClellan 2018). For instance, the findings of a study on risks of contamination from natural gas extraction from hydraulic fracturing of bedrock were undermined when it came out (apparently unbeknownst to the university) that the research supervisor was being paid 3 times his university salary by serving as an advisor to an oil and gas company invested in the practice. The failure to disclose this financial relationship in the publication brought the study’s objectivity and credibility into question, independent of its substance (Stokstad 2012). Authors and journals have been criticized for gaming ethical financial disclosure requirements, such as by overly narrow disclosures or disclosing a conflict in the cover letter to the editor accompanying the manuscript (which is usually hidden from the reviewers and readers) but not including it in the actual article (Marcus and Oransky 2016).

It should be noted that the severe conflicts of interest that some academic biomedical researchers have created for themselves by setting up business interests to directly and personally profit from their research outcomes (Krimsky 2003) are probably much less of an issue in the environmental sciences. Dual affiliations and the resultant potential for divided loyalties among university researchers have certainly come to light in the environmental sciences, such as when a scientist maintains a public-facing, disinterested researcher identity but has privately set up spin-off, personal business interests (Stokstad 2012; Fellner 2018). While we are not aware of any systematic review, we think these situations are far less pervasive in the environmental sciences than in biomedicine. Rather, in ecotoxicology and environmental chemistry, the more common (and insidious) concern is for authors and institutions to be self-aware of the potential for funding bias through unconscious internalization of the interests of their research sponsors. The informative value of conflict of interest or funding disclosures varies. The shortest (and least informative) statement we have seen was that “the usual disclaimers apply” (Descamps 2008), while the detailed disclosures in the biomedical literature can go on for pages (Baethge 2013; ICMJE 2016). Funding sources can also be obscured by channeling funding through intermediaries, as with a critical review of cancer risks from talcum powder funded through a law firm involved in toxic tort litigation (Muscat and Huncharek 2008). Requirements for highly detailed disclosures risk diminishing their importance to that of the “fine print” cautions in commerce that are seldom read. Much like computer software user terms and conditions that have to be clicked past, or the ubiquitous consumer product safety stickers that may be written more to avoid product liability claims than for practical safety, detailed conflict of interest disclosures may reach a point of diminishing returns. There is some evidence that overreliance on conflict disclosure is ineffective or can even give scientists moral license to be biased (Cain et al. 2005). Our view is that true financial conflicts of interest should be avoided, not just disclosed. Yet for most scientists in ecotoxicology and the environmental sciences who sought and received funding in order to pursue studies, simple, unambiguous statements of the funding sources should generally be sufficient.

Nonfinancial factors may also compete with scientific objectivity. Factors or values such as these are usually termed “competing interests,” reserving “conflicts of interest” for financial conflicts (PLoS Medicine Editors 2008; Nature Editors 2018b). In our observation, competing interests are rarely mentioned in environmental science publications. Rather, they are often discussed behind the scenes, such as in correspondence between an editor and potential reviewers, along the lines of “Yes, I would be happy to review this article and believe I can be objective; however, you should know that I used to be a labmate of the PI and we collaborated on an article 3 years ago.” Marty et al. (2010) give such an example of a disclosure of competing interests based on personal relationships. Whether and how competing interests or values that shape the assumptions and perspectives of scientists should be formally disclosed is an area of rich debate in the philosophy of science literature (PLoS Medicine Editors 2008; Douglas 2015; Elliott 2016).

We reiterate our belief that the existence of potential conflicts or competing interests is a ubiquitous part of the environmental science landscape and does not indicate poor science. Most scientists strive to present unbiased data and to interpret their data evenhandedly. However, the varied experiences of scientists can influence their perspectives in ways that they may not recognize themselves. Transparency in disclosure reminds the reader to consider perspectives and alternate interpretations when judging the merits of a study, synthesis paper, or risk assessment.

Bias

Many of the published concerns in the environmental science literature come down to cognitive bias. Science is not value free, and personal bias in interpreting science is often related to differing worldviews (Lackey 2001; Douglas 2015; Nuzzo 2015; Elliott 2016). For instance, the collapse of major fisheries that ostensibly had been scientifically managed for sustainable yields helped inspire the precautionary principle. This philosophy sought more cautious management and the reversal of the burden of proof for sustainable exploitation of natural resources (Peterman and M’Gonigle 1992). Those with precautionary principle or risk assessment worldviews may interpret the same set of facts very differently. The precautionary principle adherent may emphasize absence of conclusive evidence of safety, and the risk assessment adherent may emphasize absence of conclusive evidence of harm (Fairbrother and Bennett 1999). In such settings, values and biases are interwoven. Even self-disciplined scientists who seek openness and objectivity carry some biases from experiences and acculturation (here meaning how working in different environmental organizations can lead scientists to modify their perception and thinking). Recognizing sources of bias does not imply ill intent, for just the process of acculturation to a particular place of employment can bias perceptions and inclinations (Figure 2) (Suter and Cormier 2015a, 2015b; Brain et al. 2016).

Figure 2. Confirmation bias is the tendency to seek and interpret evidence in a way that confirms preexisting beliefs and gives less consideration to alternative hypotheses (Reprinted with permission. © Benita Epstein).

Professional societies such as SETAC can serve as a form of acculturation; some of the authors of this essay have been active members of SETAC for much longer than they have been employed by any single employer. What becomes particularly difficult to self-regulate is the convergence of cognitive bias, the human tendency to seek to please one’s patron, and the interests of one’s employer or client. For instance, studies funded by drug or medical device makers tend to find positive effects that favor the company funding the research (Lexchin et al. 2003; Smith 2006), and the funding effect for studies of chemical toxicity may lean toward findings of no adverse effects (Krimsky 2003, 2013; Bero et al. 2016). However, concordance between a funder’s self-interest and research findings does not alone indicate bias. Alternatively, the industry-funded researchers could have deeper knowledge of a drug or chemical than nonprofit-funded academic researchers with less extensive experience, the industry-funded work could have been more thoroughly vetted on the basis of prior internal research, or the industry-funded scientists might have been better able to obtain the resources and skills to carry out well-focused and rigorous research (Krimsky 2013; Macleod 2014). It is doubtful that these influences can be completely separated. To us, disclosure, transparency, and balanced external reviews are presently the best pragmatic approach to managing cognitive biases.

Tit-for-tat, adversarial claims of bias in the scientific literature are unlikely to advance the science. Conflicting perspectives can become personalized and intractable. How is one to know which is more credible? Neither? Both? Food nutrition researchers pointed out examples of selective data interpretation and publication bias in obesity research in relation to sweetened beverage (soft drink) consumption and in the health benefits of breast feeding. They termed this distortion of information to further what may be perceived to be righteous ends “white hat bias” (Cope and Allison 2009). However, the researchers’ own conflicted financial backing from the soft drink industry and from manufacturers of baby formula contributed to countercriticisms of funding bias (Harris and Patrick 2011; Bes-Rastrollo et al. 2014; Mandrioli et al. 2016). Unresolved in the claims and counterclaims of bias and financial conflicts of interest was which advice was most credible.

In environmental toxicology as well, controversies over the best interpretation of sometimes ambiguous facts can become entrenched and focused on the people who hold differing views as much as on the evidence behind the different views. Examples include deeply held and personalized disagreements over risks of atrazine to amphibians (Hayes 2004; Solomon et al. 2008; Kintisch 2010; Raloff 2010; Rohr and McCoy 2010; Benderly 2014), sufficiently safe levels of Se for fish and birds (Skorupa et al. 2004; Renner 2005), and a dispute that was maintained for more than 20 y about whether an oil spill resulted in indirect harm to salmon (Burton and Ward 2012). These intractable, mutual-bias criticisms make it very difficult for nonspecialist readers to make informed judgments of which is the more credible science.

Some particularly challenging situations in ecotoxicology

Some situations that seem particularly challenging for researchers and institutions seeking to maintain scientific credibility warrant mention. Elliott (2014) argued that scientific findings that are ambiguous, require a good deal of interpretation, or are difficult to establish in an obvious and straightforward manner are prone to bias, particularly when strong incentives exist to influence research findings in ways that damage the credibility of research. In environmental toxicology, risk assessments and critical reviews fit that description and can be vulnerable to bias. Suter and Cormier (2015a) identified sources of bias in ecological risk assessment, including personal bias, regulatory capture, advocacy assessment, biased stakeholder and peer-review processes, preference for standard studies, inappropriate standards of proof, misinterpretation, and ambiguity. These challenges may lead to differences of opinion over how conclusions should be drawn to support decision making; although such differences are prone to bias, at their root lies the need to draw conclusions in the face of uncertainty.

Costs of large-scale projects to remediate environments such as sediments contaminated by urban and industrial sources, aged industrial facilities, or large mining operations can be enormous, running to the hundreds of millions of dollars or much more (Gustavson et al. 2007; McKinley 2016). In “polluter-pays” schemes, the potential financial liability associated with such projects could imperil the ongoing viability of companies, which in turn would harm the livelihoods of employees, among other social disruptions. In such a setting, when the science is ambiguous, the scientists working on behalf of those who may have to incur the costs of cleanup might understandably be more cautious about the potential for misguided remediation following Type I error (e.g., falsely discovering environmental degradation) than about Type II error (failing to discover degradation when in fact it is occurring). Conversely, the regulatory scientists entrusted to provide scientific advice to protect environmental quality might be obliged to err on the side of precaution and be more accepting of the risk of Type I error, especially when it is “other people’s” money at stake.
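
To make this tradeoff concrete, the following is a minimal simulation sketch, using our own hypothetical numbers rather than data from any study discussed here, of how the choice of significance level trades Type I error (falsely “discovering” degradation) against Type II error (missing a real decline) in a simple reference-site versus test-site comparison:

```python
# Hypothetical illustration of the Type I / Type II error tradeoff in a
# reference-vs-test-site comparison; all numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_sims = 20, 5000             # samples per site; simulation runs
ref_mean, sd = 100.0, 15.0       # e.g., invertebrate abundance at reference
true_decline = 10.0              # a real but modest decline at the test site

for alpha in (0.01, 0.05, 0.20):
    false_alarms = misses = 0
    for _ in range(n_sims):
        ref = rng.normal(ref_mean, sd, n)
        clean = rng.normal(ref_mean, sd, n)                    # no degradation
        degraded = rng.normal(ref_mean - true_decline, sd, n)  # real decline
        # One-sided t tests for a decline relative to the reference site
        if stats.ttest_ind(clean, ref, alternative="less").pvalue < alpha:
            false_alarms += 1    # Type I: "found" degradation that is not there
        if stats.ttest_ind(degraded, ref, alternative="less").pvalue >= alpha:
            misses += 1          # Type II: failed to detect real degradation
    print(f"alpha={alpha:.2f}: Type I rate ~{false_alarms / n_sims:.2f}, "
          f"Type II rate ~{misses / n_sims:.2f}")
```

With these illustrative numbers, a stringent alpha of 0.01 produces few false discoveries of degradation but misses the true decline most of the time, whereas a permissive alpha of 0.20 reverses the balance. Which error is worse is precisely the value judgment, not a purely scientific one, that the regulated and regulating scientists described above may weigh differently.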

While science ethicists and the NAS (NAS 1992; Krimsky 2005; Boden and Ozonoff 2008; Elliott 2014) have emphasized the risks of industry funding bias, these risks are not unique to industry’s funding of science. For example, many countries have provisions for natural resource damage assessment and restoration (NRDAR) to compensate the public for lost opportunities following shipwrecks, oil spills, releases of industrial chemicals, and so on (Flamini et al. 2004; Descamps 2008; Boehm and Ginn 2013; Goldsmith et al. 2014). These assessments rely on science to some degree to establish linkages from the release to harm to the environment. In turn, trustees of natural resources rely on science advisors to assess the extent and scale of injuries (adverse effects) and the monies needed to restore the lost services. In large incidents, the responsible parties will inevitably retain their own science advisors. Complex situations are resolved by either negotiation or adversarial litigation (Flamini et al. 2004; Goldsmith et al. 2014). This adversarial environment creates strong incentives for plaintiff/trustee science advisors to maximize the magnitude and spatial extent of effects to the environment and to downplay uncertainties or the influence of other potential, noncompensable stressors, and vice versa for those scientists retained to help defend against claims. Maintaining objectivity and advancing science in such a work environment would require extraordinary self-discipline by the individual scientists, an institutional environment emphasizing science credibility, and an openness to external, disinterested review (Wagner 2005; Boden and Ozonoff 2008; Elliott 2014).

In at least some jurisdictions, monies from NRDARs must go to restoring the damaged public natural resources (beyond paying for salaries, consulting fees, and expenses to support claims) and cannot be used to enrich those pursuing the cases. Toxic torts, by comparison, pursue damages on behalf of private individuals or groups who consider themselves to have been harmed by exposures to toxic chemicals. Toxic tort cases are adversarial proceedings in which the lawyers are expected to advocate only for their client, and expert witnesses are paid to present testimony supporting just one side. These torts may be highly lucrative for the plaintiff attorneys who select the science testimony. For example, in the Vioxx litigation the share for plaintiff lawyers was about US$1.5 billion (32%) of the US$4.85 billion settlement (McClellan 2008), and in successful asbestos litigation the average share of payouts going to the victims was only 37% (Elliott 1988). The lures and risks of such immense payouts in toxic torts create strong incentives for biased science. Critical reviews or product defense studies conducted for toxic tort litigation should, at best, be regarded with skepticism.

Defending science and engineering work in order to protect enterprises that reflect large investments and years of devoted effort is understandable but becomes dangerous when objectivity is compromised. Case studies such as the Vioxx case, in which the maker of the drug downplayed increased risks of mortality from a successful product in which it was deeply vested (Curfman et al. 2005; McClellan 2008), and the cross-claims of blame between the engineering consultants and the mine operator in the aftermath of the Mount Polley (British Columbia, Canada) mine tailings dam failure (Topf 2016; Amnesty 2017) remind us that objective science (including recognizing and disclosing uncertainty, and encouraging additional science to narrow that uncertainty) is good business.

Academic–industry collaborations

The role of industry funding and concerns about perceived conflicts of interest in academic–industry collaborations have been addressed in the literature and are a common element of institutional research integrity policies (Resnik and Shamoo 2011; Elliott 2014). Often through philanthropic foundations, industry may contribute to basic science education and research to strengthen regional universities and further the science literacy of the potential workforce and society. Industry may also support applied ecotoxicology and other environmental science research to inform specific scientific questions that affect its business interests. When industrial and academic research interests become at least partially congruent, academic scientists may actively seek out such interest and support for their projects and graduate students. Pragmatically, academia–industry collaborations are necessary because public funding alone may be insufficient to support graduate research or to address important questions relevant to industry and society. In the United States, about 40% of national research and development is funded by the private sector (NAS 2017), and public funding for university research on the effects of chemicals in the environment has consistently declined since 2000 (Bernhardt et al. 2017; Burton et al. 2017), which implies that without industry–academia collaborations, there would be much less substantive university research. The need for sufficient funding to support training and research can trump concerns over the color of money, as captured in a university president’s quip, “the only problem with tainted research funding is there t’aint enough of it” (Krimsky 2003).

Benefits of collaboration run both ways, with expertise from the academic and public sectors helping industry find solutions to lessen or avoid contributing to environmental problems (Hopkin 2006). In a recent tripartisan commentary promoting collaborative research among academia, business, and government in environmental toxicology and chemistry, Chapman et al. (2018) argued that collaborative connections across sectors provide scientific structural integrity and generally foster scientific integrity, transparency, and environmental relevance through balancing perspectives. Collaborations may enhance the broad acceptability of research findings over those put forth from noncollaborative research. However, collaborative research is not without challenges. Foremost of these is a perception of bias, and known collaborators may be subject to ad hominem criticisms, ridicule, and marginalization. Strong attention to objectivity, transparency, and sometimes a thick skin are needed for successful collaborative research (Chapman et al. 2018). More generally, Edwards (2016) lays out several principles for successful, durable industry–academia collaborations, including establishing clear quality criteria and making them public, mandating data sharing, subjecting work to independent oversight before public release, and enshrining public ownership of all research outputs. Further, effective collaboration between industry and academic scientists requires industry to provide expertise as well as funds. Collaboration with industry scientists engenders a shared desire to succeed and creates a sense of ownership of a project (Edwards 2016). The interchange of science among academic, industry, and government scientists is deeply rooted in SETAC culture, and the favorable views of the authors toward working across sectors are undoubtedly influenced by our history with SETAC. However, industry support to academics or others in support of applied environmental questions may come with inherent conflicts of interest, and critics may consider scientists as collaborators in the pejorative sense of the word (Hopkin 2006). This setting requires vigilance from both industrial research sponsors and recipients to avoid unconscious bias.

While readers might presume that individuals or institutions with strong incentives will influence research findings to be consistent with their financial interests, it is important not to judge a study solely by its funder, nor to presume the sponsor’s preferred outcome. For example, an energy company sponsored a study to see if it could develop a scientific case for relief from costly requirements for meeting dissolved oxygen criteria in a river downstream of its hydroelectric dam. Instead, the testing showed that the existing criteria could impair hatching salmon (Geist et al. 2006). The company scientists easily could have buried the results, which could have been discounted as being from novel techniques. Their path of least resistance would have been to leave the study in the file drawer, rather than going to the trouble of defending novel science and publishing it in the open literature. In the long view, a reputation for science credibility may be more valuable to companies than short-term project benefits.

Other examples include scientists from mining and metals trade groups publishing studies showing that existing USEPA criteria for Zn and other metals could be underprotective of aquatic species or entire communities (Brix et al. 2011; DeForest and Van Genderen 2012). Conversely, a university quantitative ecologist accepted support from an environmental advocacy group (through university channels) to model the potential population-level effects of elevated Se from mining on local native trout populations (Van Kirk and Hill 2007). Because the advocacy group had been a persistent opponent of the mining operations, officials from the influential mining company apparently presumed that the academics’ work would be biased to favor the advocacy group’s positions, and they questioned the researchers’ probity (Blumenstyk 2007). In fact, the Se concentrations projected by these academics to cause detrimental population-level effects were higher than concentrations previously derived by industry-funded consultants, who themselves had been on the receiving end of bias implications because they were industry funded (Skorupa et al. 2004; Van Kirk and Hill 2007). Unfortunately, these favorable collaboration examples are countered by examples in which studies were funded as part of deliberate strategies to shape the science to fit business interests. This “tobacco strategy” has been alleged for various substances including asbestos, benzene, chromium, lead, and vinyl chloride (Krimsky 2003; Sass et al. 2005; Cranor 2008; Michaels 2008; Oreskes and Conway 2011; Anderson 2017).

In keeping with the adage to be careful judging a book by its cover or a wine by its label, judging science by its funder or by the presumed interests or leanings of the scientists can lead to mistaken and unfair perceptions. Brain et al. (2016) pointed out that the career path of environmental scientists is often ambiguous, and whether scientists end up in careers with industry, academia, or government has more to do with chance and the timing of opportunities than with a particular desire to work in 1 sector or another. Such is often the case with academic and government scientists who work with industry to jointly fund or investigate a science question of mutual interest (Hopkin 2006). The convergence of scientific interests with financial interests can lead to a good marriage, so long as the parties are principled and forthright with each other. While there may be a perception that research contracts are highly restrictive, in our experience these agreements establish expectations of academic freedom. “Interested science” should be viewed with open-minded skepticism, and studies with immense financial implications warrant a higher level of scrutiny than others (van Kolfschooten 2002; Krumholz et al. 2007; Suter and Cormier 2015b). It does not necessarily follow that interested science is wrong or tainted. Ensuring transparency and complete data reporting is one tangible step researchers can take to improve the credibility of, and perceptions toward, industry–academia collaborations.

A scientific society founded on the principles of balancing competing interests

Scientific societies have important roles in promoting scientific integrity and ethical conduct, such as establishing codes of ethics that include disclosure of conflicts of interest, being a focal point for developing and communicating discipline-specific standards to foster research integrity, and providing educational material (AAAS 2000; NAS 2017).

We think that SETAC is notable for its directed and sustained efforts to balance competing perspectives in its deliberative processes and other activities. The founding principles of SETAC set out a tripartisan structure with regulatory, industrial, and academic scientists (Bui et al. 2004; Menzie and Smith 2018). As a result, SETAC now has well-developed norms for balancing interests, inclusiveness of differing viewpoints, and neutrality in reporting. These norms have enabled SETAC to be regarded as a source of consensus-based science, with successful partnership or advisory roles in United Nations programs and conventions such as the United Nations Environment Programme’s (UNEP) Global Mercury Partnership, the Stockholm Convention on Persistent Organic Pollutants, and the UNEP–SETAC Life Cycle Initiative for reducing hazardous waste, as well as in informing national-level legislation (Mozur 2012; Augspurger 2014). In contrast to multisector, nonprofit organizations such as the Health and Environmental Sciences Institute (hesiglobal.org), which brings scientists together from academia, government, industry, and nongovernmental advocacy organizations (NGOs) to conduct original research in the public domain, SETAC is more a forum for dialogue and the promotion of best practices within the discipline.

The intended balanced representation of industry, government, and academia is not always achievable, for there are also guidelines for gender equity and geographic representation, and of course, people have to be willing to volunteer. Further, the tripartisan emphasis underrepresents scientists from environmental advocacy groups or other NGOs. These groups are influential for shaping public debate, policy, and law on environmental issues, but their low participation in the Society suggests that they may not be attracted to or feel welcomed by a “hard” scientific society such as SETAC, and meeting costs may be a barrier. Despite these imperfections, the norms of seeking to balance potentially conflicting interests and to provide a safe forum to express differing scientific viewpoints are deeply ingrained in the Society’s culture and activities.

PROMOTING SCIENTIFIC INTEGRITY IN ECOTOXICOLOGY

While “scientific integrity” is ultimately a subjective judgment that cannot easily be reduced to review checklists, there are some general attributes to uphold in ecotoxicology and related sciences. These include relevance, rigor, reproducibility, objectivity, and transparency.

Environmental relevance

By definition, environmental chemistry and ecotoxicology are concerned with how chemicals, both natural and synthetic, threaten or otherwise influence the natural world (Johnson et al. 2017). Because of pragmatic and ethical constraints, research in this domain is often done in laboratory environments, testing cultured laboratory organisms or cell lines or other in vitro surrogates for organisms. However, such research invariably still has some intended relevance to conditions that occur in the environment. We have seen articles in the ecotoxicology literature discussing novel research based on undertested taxa, underappreciated endpoints, unexpected multiple-stressor effects, or unanticipated indirect effects via untested commensal microbes. An article may open with an introduction on the ecological importance of the novel work; the work is then reported; and the discussion closes by arguing the ecological importance of the work, how it should change thinking in the field, and its management implications. Yet to obtain the desired experimental effects, exposure concentrations may have been orders of magnitude higher than those typical in the real world, or exposure routes, chemical forms, or dilution media may be unlike those that the organisms could encounter in nature (Johnson and Sumpter 2016; Mebane and Meyer 2016; Weltje and Sumpter 2017). When authors present such studies with a narrative on the ecological importance of their topic, this may be a form of misrepresentation.

Environmental relevance and regulatory relevance may not always be one and the same. Still, for studies of potential ecological effects of chemicals, investigators often hope that their research will inform future regulatory assessments of risk and safety. In practice, environmental regulators may pass over the results of academic studies in favor of research sponsored by industry. Steps academic ecotoxicologists could take to improve the utility of their research for informing policy include developing an understanding of environmental regulatory frameworks; using existing chemical assessments to inform new studies; conducting and reporting studies with sufficient rigor, quality assurance, and detail to enable regulatory use; and placing academic studies in a regulatory context (Ågerstrand et al. 2017).

Rigor

Funders, journals, and institutions reward novelty, such as the short-lived discovery of a bacterium that grows with As instead of P (Alberts 2012). Highly selective journals with article acceptance rates of 10% or less preferentially publish findings that are sensational or at least surprising. These incentives are influential because universities and research institutes often hire and promote scientists on the basis of their record of acquiring grant money and on their number of publications weighted by the impact factors of the journals in which they appear (Parker et al. 2016). With finite career opportunities and high network connectivity, the marginal return for being in the top tier of publications may be orders of magnitude higher than for an otherwise respectable publication record (Smaldino and McElreath 2016). The editorial quest for novelty has led to the publication of questionable articles in elite journals, such as one positing that caterpillars were the result of accidental sex between insects and worms (Borrell 2009). Top-tier journals also tend to have higher retraction rates than midtier journals, suggesting that rigor has sometimes been compromised in the competition for paradigm-shifting results (Nature Editors 2014).

In ecotoxicology, Harris et al. (2014) describe 12 basic principles of sound ecotoxicology that should apply to most environmental toxicity studies. These principles range from carefully considering essential aspects of experimental design through to accurately defining the exposure, adequate replication, unbiased analysis and reporting of the results, and repeating experiments that yielded surprising or ambiguous responses. There are ample opportunities for improvement. For example, Harris and Sumpter (2015) asked a very basic question of a sample of studies published in 2013 in 3 leading ecotoxicological publications: Was the concentration of the test chemical actually measured? Of the studies reviewed from Environmental Toxicology and Chemistry, 20% failed this basic aspect of experimental credibility, as did 33% and 41% of ecotoxicology studies published in Aquatic Toxicology and Environmental Science and Technology, respectively (Harris and Sumpter 2015).

While Harris et al. (2014) emphasized laboratory-based studies, field-based environmental effects studies trade the artificiality and questionable relevance of some laboratory-based toxicity testing for different, messy, real-world challenges. Closely related to the 12 principles described by Harris et al. (2014), we suggest 9 considerations that are important to most field-based ecotoxicological studies or environmental effects monitoring.

  1. Develop a thorough understanding of the issues to ask informed questions before embarking on a new study. The available literature should be thoroughly vetted to inform the need for experimentation or field studies.

  2. A thorough understanding of the questions being posed is an essential prerequisite for designing a robust, reliable, reproducible study. Incomplete understanding of the questions leads to vague and often misguided studies that undermine the scientific process (Suter et al. 2002; Lindenmayer and Likens 2010).

  3. Identify and reliably measure sensitive indicators (Melvin et al. 2009).

  4. Careful attention to appropriate reference conditions is needed so that actual effects are not masked by variability or confounding factors introduced by differences between the reference and test site environments (Arciszewski and Munkittrick 2015; Mebane et al. 2015). For example, beaches on rocky headlands and protected bays will naturally have very different benthic communities that would confound comparisons of pollution effects. Similarly, flowing rivers and impounded reservoirs have very different communities, and study designs that attempt to detect pollution effects on communities across such disparate habitats may have very low discriminatory power (Buys et al. 2015). By failing to account for natural variability, adverse pollutant effects could be obscured (Wiens and Parker 1995; Parker and Wiens 2005).

  5. Gradient sampling: Studying a number of locations that vary in the degree of the factor under investigation, such as chemical pollution, may improve the ability to accurately estimate the relationship between exposure to that factor and its effect, if such a relationship exists (see the first sketch following this list).

  6. Time and patience: Just as experimental exposures need to be of appropriate duration for effects of interest to be manifested, environmental monitoring needs to be maintained long enough to pick up true trends, if present, or to convincingly argue that trends are not present (Lindenmayer and Likens 2010).

  7. Effect sizes: Specific definitions of what effects are considered negligible or of concern are helpful (Power et al. 1995; Melvin et al. 2009; Munkittrick et al. 2009).

  8. Avoid power failures: Use a statistical approach appropriate to the question, considering where the statistical burden of proof lies. For instance, p > 0.05 in testing for trends or differences between locations does not by itself demonstrate the absence of a trend or effect (Dixon and Pechmann 2005; Mudge et al. 2012); see the second sketch following this list.

  9. Transparency: Reporting sufficiently detailed methods and raw data for others to reproduce the analyses, or to further examine the data using alternative analyses, is a key attribute of credible studies and a common shortcoming (Duke and Porter 2013; Schäfer et al. 2013; McNutt et al. 2016).
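To make these considerations concrete, 2 brief sketches follow, using hypothetical numbers throughout. The first illustrates the gradient sampling of consideration 5 with a minimal sketch in Python, assuming 12 sites along a pollution gradient and an invented linear decline in abundance; nothing here is drawn from a real data set.

```python
import numpy as np

# Hypothetical gradient design: 12 sites spanning a pollution gradient
# instead of a 2-group reference-vs-exposed comparison. All values are
# invented for illustration.
rng = np.random.default_rng(3)
exposure = np.linspace(0.5, 50, 12)                       # e.g., metal concentration, ug/L
abundance = 100 - 1.2 * exposure + rng.normal(0, 8, 12)   # assumed decline plus field noise

# Fit a simple linear exposure-response relationship.
slope, intercept = np.polyfit(exposure, abundance, 1)
predicted = intercept + slope * exposure
ss_res = np.sum((abundance - predicted) ** 2)
ss_tot = np.sum((abundance - abundance.mean()) ** 2)
print(f"Estimated slope: {slope:.2f} organisms per ug/L (r^2 = {1 - ss_res/ss_tot:.2f})")
# The gradient design yields an estimated exposure-response slope with
# a goodness of fit, rather than a bare significant/not-significant
# verdict from comparing 2 groups.
```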
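The second sketch addresses the power failures of consideration 8, assuming a hypothetical 2-group reference-versus-exposed comparison analyzed with a two-sample t-test; the replicate counts and the Cohen's d effect size are illustrative assumptions, and the calculation uses the TTestIndPower class from the statsmodels Python library.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assume n = 4 replicates per site and that only a large standardized
# effect (Cohen's d = 1.0) is considered biologically important.
power = analysis.solve_power(effect_size=1.0, nobs1=4, alpha=0.05,
                             ratio=1.0, alternative='two-sided')
print(f"Power to detect d = 1.0 with n = 4 per group: {power:.2f}")
# Roughly 0.2, meaning about 4 of 5 true effects of this size would be
# missed; p > 0.05 from such a design says little about absence of effects.

# Solve instead for the replication needed to reach conventional 80% power.
n_needed = analysis.solve_power(effect_size=1.0, power=0.8, alpha=0.05,
                                ratio=1.0, alternative='two-sided')
print(f"Replicates per group needed for 80% power: {n_needed:.0f}")
```

A design this underpowered cannot support a claim of "no effect"; either the replication must increase or a negative result should be reported as inconclusive.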

Reproducibility

Reproducibility is one indicator of reliable research, and the inability of researchers to reproduce influential studies of others, or their own, has garnered enough attention to be called a "reproducibility crisis" (Baker 2016a; Henderson and Thomson 2017; Munafò et al. 2017). However, not all studies are easily reproduced. Environmental data are often messy, and field studies are more often observational than experimental. Large-scale, ecologically realistic studies such as long-term, experimental lake studies are difficult to do even once, and hopefully no one wishes for mishaps such as tailings dam failures or oil spills to study (Wiens and Parker 1995; Schindler 1998; Parker and Wiens 2005). Such studies require a logical system for causal inference to separate cause and effect from serendipitous correlations (Norton et al. 2002; Suter et al. 2002).

Even rigorous laboratory studies may be difficult to replicate because of the highly variable nature of biological systems and unanticipated responses to unknown factors. Demands for reproducibility may favor industrial science over academic science. Industry often works within strict good laboratory practice (GLP) rules and with well-studied species tested through standardized protocols (Elliott 2016). Academic science is often framed around education; grants and graduate student researchers are usually encouraged to go after something new and novel, protocols may be developed as they go, and quality control may be uneven (Baker 2016b). Obstacles to adopting formalized quality management systems such as GLP in small research settings may include costs, lack of resources, lack of mandate, independent cultures, and high turnover (Figure 3). Adherence to GLP does not, however, mean that a study is well designed. It means only that the study is well documented, that quality is assured, and that the study design protocol was expressly followed. If the study design was not appropriately relevant and rigorous for the study questions, the well-documented and reproducible results are unlikely to be meaningful. Nevertheless, even if regulatory GLP compliance is not required, small academic research facilities can benefit from embracing core components of GLPs, such as defining responsibilities; maintenance and sanitation of common lab spaces, equipment, and materials; well-defined experimental protocols; quality control testing, including positive and negative controls; and data reviews, audits, and archiving (Moermond et al. 2016; Bornstein-Forst 2017).

Figure 3. Large environmental chemistry and toxicology laboratories that use standard methods to produce results that may be submitted to regulatory agencies usually have a well-established quality management structure. Quality management in academic research laboratories focused on novel methods may be more ad hoc, especially if the research work force is dominated by transient scientists, such as students or those on short-term postgraduate appointments (Credit: S Harris, sciencecartoonsplus.com).

Better experimental protocols that are easier to follow are one tangible way to improve the reproducibility and transferability of both novel and standard experimental methods (Figure 4). Multimedia experimental protocols could make techniques much easier to explain and teach than conventional, densely worded, printed protocols do. The Journal of Visualized Experiments (JoVE) is an innovative, peer-reviewed science methods journal in which articles are a unique blend of the conventional printed article and professionally produced videography. Ecotoxicology methods articles have begun to be published in this format (van Iersel et al. 2014; Calfee et al. 2016). The field would benefit from broader use of such visualization techniques to document new methods and to improve education and training on techniques that need to be highly standardized to be repeatable. At a minimum, with the availability of electronic data repositories and supplemental information in journals, there is no reason why detailed methods, including video demonstrations, cannot be published.

Figure 4. The brief methods descriptions in journal articles are seldom sufficient to be reproducible by others. Step-by-step video documentation of experimental protocols can be published as video articles, uploaded to online repositories, or published as supplemental information. Video protocols are underutilized in environmental toxicology (Credit: S Harris, sciencecartoonsplus.com).

Reproducing a statistical summary or model run reported in a scientific publication when the underlying data and code are provided and explained is one thing. Reproducing an actual complex experiment is hard and is rarely attempted, unless perhaps the results are novel and have high regulatory or societal impact. Even under the best of circumstances, such as when the original investigators have the resources and motivation to repeat an experiment in the same lab, with organisms from the same culture, using as close to identical methods as they can manage, and including positive controls, the investigators may be unable to produce the same result twice (Mebane et al. 2008; Owen et al. 2010). Positive controls (testing a substance with well-characterized effects) are not always used in toxicity testing programs, but their routine use can help investigators understand variability in test results (Glass 2018). Nosek and Errington (2017) caution that if investigator #2 reports that the results of study #1 could not be reproduced, that by itself does not indicate which is more credible: result #1, result #2, neither, or both. Further, much of the "reproducibility" debate in the natural sciences is focused on cell biology or human behavior (psychology) experiments, which may be more tractable for reproducibility studies than messy environmental observational or experimental studies. Especially with complex biological testing such as multigeneration tests, a green-thumb husbandry factor may bring art as well as science to environmental chemistry and toxicology (Figure 4). Subtle methods differences, strain differences, or stochastic events can be so puzzling that investigators are left thinking demons must have snuck into their study and interfered with one treatment but not others (Hurlbert 1984). (We presume that Hurlbert's [1984] suggestions of exorcism or human sacrifice for troubleshooting suspected demonic intrusions might run afoul of some contemporary institutional review board policies.)

Still, reproducibility is a core tenet of science, and successful reproduction adds confidence in the credibility of novel findings. Divergent but individually credible results may further advance the science by illuminating important aspects that were missed in the initial study (Owen et al. 2010). If, for instance, an investigator were to find a novel, major adverse effect of a class of chemicals on a previously untested taxonomic group, then other equally diligent and skilled investigators should be able to produce similar effects in other research settings, even if the test conditions were only similar. For example, a stand-alone paper from the 1970s that reported a snail's anomalous sensitivity to Pb was skeptically regarded. More than 30 y later, open-minded skepticism led to follow-on studies from a new generation of scientists that not only affirmed the anomalous early report of sensitivity but also led to important advances in comparative physiology and the underlying mechanisms of toxicity (Brix et al. 2012). Similarly, early reports that freshwater mussels and other mollusks were unusually sensitive to ammonia were not widely persuasive. After repeated studies across multiple laboratories and species showed similar findings, the issue gained traction with standardized method development, interlaboratory round-robin testing, and attention from environmental managers (Farris and Hassel 2006; USEPA 2013).

Individual investigators may not always have the opportunities for self-replication, but best practices call for repeating what one can (Harris and Sumpter 2015). In field studies, multiple measures of exposure, multiple years of field data, and so on give credence to findings. We recognize that all science has practical resource limits and we are not going so far as to argue that novel findings from small studies should never be published. Rather, the appropriate conclusion from such studies is along the lines of “if these findings turn out to be repeatable, they could be an important development.” In our view, novel, major findings that are supported only by a one-off study are best regarded as tentative.

Transparency

Transparency in reporting research, including all the relevant underlying data that were relied upon in the paper, has become a critical element of integrity in science. Science’s claim to self-correction and overall reliability is based on the ability of researchers to replicate the results of published studies (Nosek et al. 2015). Studies cannot be replicated or even reconstructed if scientists will not share additional data, information, or materials from published studies, and we believe that upholding such ethical norms is every scientist’s responsibility. The embrace of the principle of transparent reporting has been uneven across disciplines, and the field of ecotoxicology has certainly not distinguished itself as a leader in this regard (Meyer and Francisco 2013; Schäfer et al. 2013; Womack 2015; McNutt et al. 2016; Parker et al. 2016).

Researchers in ecotoxicology and environmental chemistry have long presented only highly reduced data summaries. The only "data" included in some publications are crowded figures and tables of statistical outputs, such as F-values, effects concentration point estimates (EC50, EC10, etc.), or no- and lowest-observed effects concentrations (NOECs, LOECs). These derived values are not data. Such data-poor publications were understandable before the early 2000s, when strict page and word limits precluded authors from "wasting" space on data tables. With the provision for electronic supplemental material beginning in the 2000s, and dedicated data repositories becoming widely available at low or no cost to authors in the 2010s, these reasons for opaque publication are no longer justified. Researchers who choose not to transparently report the actual data underlying their scientific findings may have other reasons for doing so. They may be concerned about others scooping them on their own data (McNutt 2016), although counterintuitively, publishing data may actually help establish priority and reduce scooping concerns (Laine 2017). Other, less charitable reasons why researchers might resist publishing data include that they have not devoted the time needed to organize their data in a coherent fashion that is interpretable by others, their reported results might not be reconstructable from the underlying data, they are not keen to facilitate alternative statistical analyses or interpretations of their data, they wish to publish unfalsifiable findings, or there is simply less there than they led readers to believe (Smith and Roberts 2016).

Data sharing may still be regarded more as an imposition from science funders to be complied with than as a universal principle embraced by those who conduct and publish scientific research (Nelson 2009; Burwell et al. 2013; Holdren 2013; Nosek et al. 2015; European Commission 2016; Collins and Verdier 2017). There are many pragmatic obstacles to effective data sharing, such as the expertise, extra work, and costs required of researchers to organize, serve, and preserve their data in a comprehensible manner; privacy and anonymity concerns for environmental data collected from private property or about human subjects; and the balancing of intellectual property concerns. Some environmental science research is intended to be confidential, such as private-sector economic geology, agricultural chemical product development, and innumerable other corporate research efforts intended to develop products and recoup research and development investments. However, in our view, researchers on such ventures cannot have it both ways, publishing some outcomes in the peer-reviewed literature but withholding the supporting data as private. A recent corporate initiative to make available traditionally protected crop safety information is noteworthy in this regard (Bayer 2018).

Most environmental science journals have policies that encourage and facilitate data sharing. SETAC journals are probably typical in requiring a statement by the authors about whether and how the data underlying their analyses are available, with an admonition that authors should share upon request. A passable statement may be something as feeble as “Contact the Corresponding Author for data availability.”

The strongest data disclosure policy for journals publishing in the environmental sciences is probably that developed for the Public Library of Science (PLOS) family of journals: "PLoS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception" (PLOS 2014). Exceptions are limited to privacy or vulnerability concerns, such as data on human research subjects that could not be fully anonymized; locations of archeological, fossil, or endangered species that could be exploited or damaged; or safety and security considerations. Penalties for authors who fail to comply include rejection or, if they decline to provide data for an already published article, the editors could flag the article with a cautionary correction or even retract it (PLOS 2014). Whether PLOS's stand requiring authors to make available all data underlying their findings will lead other journals to stiffen their resolve, or whether the comparatively lax policies of competing journals will undermine PLOS and other open-science advocates, remains to be seen (Nosek et al. 2015; Davis 2016).

Implicit in such requirements is the assumption that common understandings of what constitutes "raw data" will be contextual. For instance, in toxicity tests, the counts or measurements at each time interval and all associated chemical and physical measurements are usually considered raw data. Metrics computed from species counts, average treatment concentrations, averages of replicates, and the like are not raw data; they are derived values. Some "data points," such as a streamflow measurement or a chemical concentration in a medium, are themselves derived values, and the true raw data behind such a data point include survey data, unprocessed sensor readings, spectral outputs, and the like. Unless the study involves methods comparisons or forensic data audits, usually the researcher just wants the resultant derived values at a level of detail sufficient to reconstruct and further analyze the original results.

While the notion that investigators should preserve and share underlying data is simple, the reality of doing so is much more complicated and challenging. To us, data preservation and sharing are priorities to strongly encourage, for without data, the credibility of science cannot be evaluated. Some research has shown that the willingness and ability of authors to share data decline significantly with time and that having a weak data availability policy is only marginally better than having no policy at all (Vines et al. 2014). Computer servers get replaced, directories flushed, offices moved, files dumped, and investigators move on, retire, and eventually die.

Rather than mandates, one simple incentive for improving openness in reporting has been for journals to award prominent open data "badges" to articles verified as being supported by available, correct, usable, and complete data. By showing an open data badge on the issue table of contents and article web page, and by including a "verified open data" statement in the bibliographic indexing metadata, articles without such an endorsement may come to be seen as incomplete. Over time, this might shift the norm toward open preservation and sharing. In at least one journal, this approach appeared to markedly improve the sharing and preservation of data through linked, independent repositories (Kidwell et al. 2016; Munafò et al. 2017).

Critical reviews and literature syntheses

In ecotoxicology, the published literature can be roughly broken down into 2 categories: original research articles and review articles. The original research article usually is based upon field observations, laboratory experiments, modeling, or blended approaches. Generalizing original articles through reviews and syntheses is a critical part of the ecotoxicology and most environmental science literature. Critical reviews, risk assessments, and environmental quality standards are based on syntheses of the literature, not on individual studies. Synthesis articles have scientific integrity problems rather distinct from those of original research articles. Decisions must be made on how studies are located and results categorized, along with a host of data reduction, data standardization, and analysis decisions. These decisions and associated biases may be deliberate and clearly explained, or the analyst may not even recognize that they have made a decision (Roberts et al. 2006; Suter and Cormier 2015a). In some cases, we suspect, analysts have obscured their decisions. Systematic review methodology is now also being used for some chemical assessments, in which case data synthesis may be highly structured, with clearly defined criteria for data inclusion and search strategies (Hobbs et al. 2005; Van Der Kraak et al. 2014; Whaley et al. 2016). Other situations may follow the wending path of the present article: discussions among the authors ("Have you seen so-and-so?") and readings that led to other relevant material through forward and backward citing, along with some specific subject searches. This path led to much relevant and thoughtful material across many disciplines, but it was hardly systematic or reproducible.

Literature searches from different sources can yield very different results. For example, using a 2007 original research article on population modeling of Se toxicity to trout (Van Kirk and Hill 2007), 4 leading bibliographic indexing services were searched for articles citing that study. Web of Science (WoS), Elsevier's Scopus, Digital Science's Dimensions, and Google Scholar found 7, 10, 15, and 22 citing publications, respectively. Scopus found all articles found by WoS, plus articles WoS missed in Human and Ecological Risk Assessment and Integrated Environmental Assessment and Management. Google Scholar found all articles found by Scopus and WoS, plus articles in Ecotoxicology Modeling and Water Resources Research, 3 government reports, 2 books, a thesis, a conference proceeding, a duplicate, and 2 ambiguous citations. It follows from this 3-fold difference in valid citations that a critical review of published literature on a topic, or a regulatory assessment, could miss relevant science if the assessors relied too heavily on a single search provider.

This simple example was from the "current era" of science, which began by 1996 or so, depending on which bibliographic indexing service scholars use: The websites for WoS and Scopus report that their indexing databases are reliable from 1971 and 1996 forward, respectively. Relying exclusively on bibliographic index searching may therefore omit important, relevant older research.

Thus, we have the indexing bias problem in meta-analyses and assessments (studies that are not indexed will not be retrieved), and the related problem of reviewing a secondary source but citing the original. We have seen assessments that omitted seminal research published before the current digital era, which may reflect indexing bias. Ecotoxicology syntheses often rely on variations of species-sensitivity distributions, which may devote more attention to the statistical characteristics of the data sets, data extrapolations, transformations, and normalizations than to where the data came from in the first place. We have seen micrograms and milligrams mixed up, and rankings that mistakenly commingled endpoints such as time to death in hours with effects concentrations. Some of these issues are undoubtedly related to the online availability of well-curated databases such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) Aquatic Toxicity (EAT) database or the US Environmental Protection Agency's ECOTOX database. These compiled databases are valuable resources, but reliance on secondary compilations deprives the original authors of credit via citations. At least for publicly funded science, citations may be a way for authors to demonstrate the value of their work to the scientific community, and thus build the case for further funding. Further, reliance on secondary sources is a good way to introduce or repeat inaccuracies (Rekdal 2014). We echo previous calls for better training and rigor in conducting and reporting secondary analyses of the ecotoxicology and related literature. Practices from other fields, such as the Cochrane systematic review approach and guidelines for the ethical reuse of data, could be adapted to ecotoxicology practice (Roberts et al. 2006; Duke and Porter 2013; Suter and Cormier 2015a).

Objectivity

Beyond the topics discussed, some explicit steps to ensure objectivity bear consideration from initial planning through the interpretation of results. Even very careful scientists are not immune from cognitive biases that affect objectivity. These include hypothesis dependency (alternatives never considered will not be evaluated), hypothesis tenacity (sticking with an initial hypothesis despite evidence to the contrary), and anchoring, in which beliefs are excessively influenced by initial perceptions (Norton et al. 2003).

There are steps scientists can consciously take to avoid these pitfalls. Strive to be self-skeptical of possible personal predilections toward expected causative factors or outcomes. Consider the opposite: maybe it is correct. For topics with competing schools of thought, consider how a scientist from a different school might interpret the data, or how one from a different sector might analyze the problem (Suter and Cormier 2015a). If considerations such as these are sincere and not made just to anticipate and refute criticisms, they can help to ensure the objectivity and balance of interpretations.

Environmental chemistry

We discuss environmental chemistry separately because it has different scientific integrity challenges than does the biological side of ecotoxicology. Unlike the situation on the biological side, where serious questions have been aired about the reproducibility of some published research (Scott 2012, 2018; Sumpter et al. 2014; Sumpter et al. 2016), analytical environmental chemistry does not appear to suffer from such problems to the same extent. The likely reason is that quality assurance mechanisms are routinely incorporated into analytical projects that involve measuring environmental concentrations of chemicals. These mechanisms include the use of high-quality standards, which are widely available; the use of high-specification instruments; and adherence to general guidelines issued by national and international institutions. That combination enables recovery rates to be determined, preferably at different concentration ranges; intra- and interday precision to be assessed; detection limits to be quantified; and matrix effects (interference from other substances) to be investigated. These quality assurance procedures are adopted routinely, are checked by reviewers of analytical papers, and help ensure that quality is maintained.

Unlike method development, however, the reporting and interpretation of environmental chemistry data have common pitfalls, particularly in analyses of large data sets or compiled databases and in citation practices. For example, metadata specifying fundamental details may be missing or misunderstood, such as whether concentrations of metals or other elements in water are from filtered or unfiltered samples, or whether they reflect the total mass of the element or only 1 speciation state (Sprague et al. 2017). Reported aquatic metals concentrations declined from milligram-per-liter levels in reports from the 1980s to microgram-per-liter or submicrogram-per-liter levels by the late 1990s. This remarkable, widespread decline was due not to better pollution controls or global geochemical change but to improved recognition and control of ubiquitous contamination in field and laboratory sampling and analysis methods. There are ongoing debates over the most appropriate sampling and analysis methods for inorganic water quality constituents, particularly for environments that are expensive and difficult to sample representatively, such as large rivers (Horowitz 2013). Such sampling biases and analytical method differences may be substantial enough to confound analyses.

Organic environmental chemistry data sets have similar pitfalls that can confound subsequent reviews and secondary analyses. For example, Kolpin et al. (2002a) published a summary of a survey of pharmaceuticals, hormones, and other organic contaminants from 139 streams. This highly influential paper showed that some organic contaminants were widespread in streams, and it contributed to heightened concern and research interest in their potential health and environmental risks. The paper is presently the most highly cited paper ever published in the journal Environmental Science & Technology (1st of 46 011 papers published, with 5104 citations in the Scopus database as of 16 August 2018). However, the reported concentrations of at least 1 of the 95 chemicals, 17α-ethinyl estradiol (EE2), were questioned because the median and maximum concentrations of 73 and 831 ng/L, respectively, were about 10 to 1000 times higher than those from other surveys or analyses (Ericson et al. 2002; Hannah et al. 2009). Kolpin et al. (2002b) responded that, upon further inspection, they had discovered that their maximum reported EE2 value of 831 ng/L was indeed incorrect owing to analytical interferences. They further explained that they had defined "median" in a peculiar way, as the median of detected values in streams, not in its usual meaning as a central tendency of all values. Because EE2 was undetectable in 94% of the streams sampled, the median of detected values was skewed far above the median of all streams (<5 ng/L). However, despite the discovery of the mistakes, no correction was issued for the original publication. The Kolpin et al. (2002b) acknowledgment of the mistaken values was buried among the other 5103 citing papers, and their subtle, peculiar definition of a "median" was likely overlooked by most readers. As of August 2018, at least 50 citing papers were identified that re-reported and perpetuated the exorbitant and mistaken 831 ng/L maximum EE2 value for US streams.
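The arithmetic behind this pitfall is easy to demonstrate. The following is a minimal sketch in Python using entirely synthetic, illustrative values that mirror the 94% non-detect scenario described above; it is not a reanalysis of the Kolpin et al. data.

```python
import numpy as np

# Hypothetical survey of 100 streams: the analyte is quantifiable in
# only 6 of them (a 94% non-detect frequency). All values are invented
# for illustration; the detection limit is assumed to be 5 ng/L.
rng = np.random.default_rng(1)
detects = rng.uniform(20, 150, size=6)   # 6 quantifiable results, ng/L
non_detects = np.full(94, 2.5)           # non-detects substituted at 1/2 the detection limit

all_samples = np.concatenate([detects, non_detects])

print(f"Median of detected values only: {np.median(detects):6.1f} ng/L")
print(f"Median of all samples:          {np.median(all_samples):6.1f} ng/L")
# When detection frequency is low, the "median of detects" can sit an
# order of magnitude or more above the median of all samples, so
# reporting it simply as "the median" is misleading.
```

Any reader who checks the detection frequency alongside the summary statistic would immediately see the discrepancy, which is one more argument for publishing the underlying data.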

Thus, it is easy for authors to misinterpret values from the literature or to perpetuate erroneous ones. The problem of citing unreliable maximum values would be avoided if authors simply cited extreme percentile statistics (e.g., the 10th to 90th, 5th to 95th, or 1st to 99th percentiles) instead of ranges (Weltje and Sumpter 2017). Whereas a single extreme value defines the range, extreme percentiles are more representative of severe conditions that organisms may actually encounter, are more stable, and are far less vulnerable to mistaken values. For instance, Santore et al. (2018, their Figure 9) elegantly summarized about 29 000 paired measurements of dissolved and total aluminum (Al) in fresh water from aggregated data sources. Logically, the dissolved fraction of trace metals in water can be no greater than the total. In practice, results do not always come out that way, especially when the 2 values are close. Factors such as differences in sample digestion, differences between instruments, or slight differences in technique may introduce subtle analytical biases that make the dissolved fraction appear greater than the whole (Paul et al. 2016). In the Santore et al. (2018) comparison, at least 150 of the 29 000 Al pairs showed a dissolved fraction greater than the total. While such logically impossible values should usually simply indicate that close to 100% of the total Al was present in dissolved form, some are obviously impossible, with the dissolved fraction 2 to 3 orders of magnitude higher than the total concentration. Should an imprudent analyst uncritically report the ranges of dissolved and total Al, they would report nonsense results. Simply backing off to the 99th or 95th percentile for a large data set such as this one would still reflect the extremes of environmental conditions but be somewhat insulated from dubious single values.
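The robustness of percentiles over ranges is equally simple to demonstrate. Below is a minimal sketch with a synthetic data set invented for illustration; the handful of planted outliers stands in for the kinds of transcription, unit, or analytical errors described above.

```python
import numpy as np

# 10 000 synthetic dissolved Al results (ug/L), lognormally distributed,
# contaminated with 5 impossible outliers. All values are illustrative.
rng = np.random.default_rng(7)
dissolved = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)
dissolved[:5] = [2.1e5, 8.8e5, 3.3e6, 5.0e5, 1.2e6]  # erroneous extremes

print(f"Maximum:         {dissolved.max():10.0f} ug/L  (defined entirely by one bad value)")
print(f"99th percentile: {np.percentile(dissolved, 99):10.0f} ug/L")
print(f"95th percentile: {np.percentile(dissolved, 95):10.0f} ug/L")
# The upper percentiles still characterize severe conditions, but the
# 5 dubious values that completely define the range barely move them.
```

With 10 000 values, even the 99th percentile is determined by rank order rather than by any single observation, which is why it resists the planted errors that entirely determine the maximum.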

The counterpart to avoiding pitfalls when working with big data is avoiding pitfalls when working with small data sets. Small data sets may be the best that can be acquired from studies in extreme environments where sampling is difficult, dangerous, or very expensive, or where conditions are ephemeral or rare. Dismissing high-quality data that may have been collected under heroic or unrepeatable circumstances on pedantic statistical grounds would be foolhardy. Small data sets can be important, so long as one is cautious about the inferences drawn from them. Efforts to "improve" small data sets through sophisticated statistical techniques should be resisted (Murtaugh 2007).

ADVOCACY

Science is the enterprise for answering questions and making predictions about how the universe works. Science can inform issues, but it can never answer "should" questions. For example, science cannot tell societies whether they should restrict chemical uses and releases, whether natural preserves should be set aside from human exploitation, or whether biodiversity should be protected. These are among the myriad value judgments that societies must make, and while science can support societies in making these choices through predictions founded upon a body of knowledge, there are never "scientifically correct" answers to questions of human values, morals, and ethics (Snyder and Hooper-Bui 2018). Scientists are humans, and like all people, they hold ethical and moral values that drive assumptions that may not be explicitly stated, if they are even recognized. For example, the notion of "environmental protection" in the environmental toxicology field is rooted in societal norms, statutes, and international agreements with goals of minimizing harm (a human concept) from activities such as the extraction, manufacture, use, and disposal of chemical products. Scientists in the field develop informed opinions toward the "should" questions related to their experiences, which leads to the question of whether and how scientists should advocate on "should" questions.

The underpinnings of science are that researchers have no vested interest in the results of their observations, that they objectively record and analyze these results, and that they fairly report the outcomes in the peer-reviewed literature. Advocacy can compromise these underpinnings, at the cost of scientists’ credibility (Fenn and Milton 1997). Scientists tend to be passionate about their science, which has led to controversy over the role that scientists should play in related public policy debates. While we think most scientists would agree that advocacy for science having a role in environmental policy debates is appropriate, there is likely much less agreement about whether it is appropriate for scientists to advocate for particular outcomes in policy debates. If the policy debate turns on questions of science central to a scientist’s particular area of study, probably no one is better positioned than that scientist to lay out the evidence for or against a particular course of action. If the scientist is regarded as a neutral and informed voice, their advice may be valued by all sides in a policy dispute (Sedlak 2016). However, if the scientist’s experience or analyses leads them to the strong conviction that one policy direction is more correct and should be adopted, then they are no longer a neutral broker and have become an advocate.

When questions of science are central to adversarial adjudicated proceedings, the protagonists controlling the proceedings are often lawyers. The lawyers are expected to advocate for their client’s interest, not for objective science. The lawyers retain consulting scientists as expert witnesses to support their side of the case. The lawyers will presumably seek out scientists whose research findings and views will increase their chance of winning. In the close, intense working environment of a team preparing for a complex, science-based legal strategy, it is easy for scientists to get caught up in the enthusiasm of a “team spirit” with a loss of detached impartiality and objectivity. Scientists who begin to function as “hired guns” focused on team wins are no longer scientists but advocates (Christensen and Klauda 1988).

Policy advocacy is potentially problematic because it may compromise the use of research findings in policy and management deliberations if the information is not viewed as credible by all sides (Scott et al. 2007). In some situations, advocacy is beyond reproach, such as when a university scientist uncovered a Pb-poisoned community water system. Simply reporting the findings to the responsible officials likely would have been ineffective if the ineptitude or indifference of those same officials contributed to the situation in the first place (Sedlak 2016). However, not all situations are so clear cut, and reasonable people who share similar motivations and skills, and who agree that researchers should do the right thing, may not agree on what that is. Deliberations on major environmental issues are complex, and science may be only one element of the deliberations. Developing and providing technical and scientific information to inform policy deliberations in an objective and relevant way is a formidable challenge (Nelson and Vucetich 2009; Meyer et al. 2010).

Institutional constraints aside, how scientists balance these competing issues and choose when or whether to engage in advocacy is a deeply personal and situational choice (Meyer et al. 2010; Sedlak 2016). However, just as science journals discourage commingling original research results and commentary, scientists should keep science and advocacy distinct in their publications and speaking. In particular, we argue that scientists should be watchful for stealth policy advocacy. Stealth advocacy is the use of value-laden language in scientific writing that assumes a policy preference (Lackey 2007; Pielke 2007). Rather than openly disclosing assumed values or policy preferences, biases may be unconsciously (or deliberately) cloaked through normative science, that is, science developed, presented, or interpreted on the basis of an assumed, usually unstated, preference for a particular policy or class of policy choices. This covert advocacy may be reflected in word choices, and such advocacy is not always apparent even to the advocate. For instance, value-laden words such as "stressors," "impacted," "degraded," "improved," "good," and "poor" may be used to describe habitats or other environmental features. Less value-laden alternatives are "factors," "exposed," "altered," "changed," "increased," or "decreased." The use of normative science is potentially insidious because the tacit preference for a particular policy is not perceptibly normative to policy makers or even to many scientists (Lackey 2007).

Criticisms of normative science can be taken too far because, read literally, the entire discipline of conservation biology could be considered too normative. Similarly, the mission statement of SETAC, "to support the development of principles and practices for protection, enhancement and management of sustainable environmental quality and ecosystem integrity," could be too normative for some. Science is inherently somewhat normative, with topics and study questions influenced by treaties and laws. Areas of study or techniques once considered appropriate subjects of scientific inquiry, such as craniometry, eugenics, or experimentation on human subjects without informed consent, are no longer considered within the norms of ethical science. Within environmental toxicology, pressure to reduce the use of animal testing might be an example of normative science.

Our point is not to argue for or against scientists engaging in overt policy advocacy, which is a personal decision, but to argue for clarity and transparency. Just as original results, opinions, judgments, and speculation should not be blended in a scientific paper, science and advocacy need some separation (Scott and Rachlow 2011). Covert advocacy is a form of bias. Environmental scientists should clearly differentiate between research findings and policy advocacy based upon those findings.

WEAPONIZING SCIENTIFIC INTEGRITY AND TRANSPARENCY

We recognize that “scientific integrity” discussions can easily be diminished to going down the path carved by “sound science” strategic initiatives, which often boiled down to campaigns to call “my science good science and your science junk” (McGarity 2003; Doremus 2007; Oreskes and Conway 2011; Kapustka 2016). The goal may be to recast policy, ideological, or economic disputes as scientific doubts or conflicts. In countries with a tort-based, adversarial legal system for resolving injuries or damages, science-based information becomes just another tool for dueling experts, who often have primary responsibility for advocating for the interests of their client (Wagner 2005). Research integrity policies or requirements for data transparency can be used as weapons to bury public university or government scientists with vexatious, intrusive, and costly demands for records such as raw laboratory notebooks, instrument calibration records, emails between coauthors, working drafts, and peer comments and responses. Such demands can be effective tools for interfering with the work of public-sector scientists, including academics in public institutions (Folta 2015; Halpern and Mann 2015; Kloor 2015; Kollipara 2015; Lewandowsky and Bishop 2016) or academics in private institutions who receive research support from public sources (Hey and Chalmers 2010; Shrader-Frechette 2012). For example, Deborah Swackhamer, an environmental chemist at the University of Minnesota, USA, was targeted under state open records laws with legal demands for raw unpublished data, class notes, purchase records, telephone records, and more from a 15-y period. Ironically, the identities of those seeking the information were themselves shielded from disclosure (Halpern 2015). Some scientists have learned to use transparency laws against their peers in the highly competitive arena of grant funding. Through freedom of information demands for competing grant proposals, scientists have been able to obtain details on competitors’ new research direction, preliminary results, and cost structure. For those targeted scientists, such information gathering may be seen as research espionage under the rubric of transparency (Carey and Woodward 2017).

The sunshine laws enacted in many jurisdictions were intended to illuminate the business of government officials; it is doubtful their crafters intended them to sweep up university professors. Nevertheless, some see scientists as fair targets of such tactics, given that inspections of their erstwhile private communications have uncovered peer-review misconduct, undisclosed conflicts of interest, or bias (e.g., Russell et al. 2010; Fellner 2018; Krimsky and Gillam 2018). Privately funded research is generally shielded from such practices (Wagner and Michaels 2004; Brain et al. 2016). However, researchers at private institutions may also be subject to baseless litigation intended to intimidate scientists and deter others by inflicting long and costly legal processes, disruption, and threats of personal financial liability. Such harassing lawsuits have been employed often enough to earn a name: "SLAPP suits," for strategic lawsuits against public participation (Johnson 2007; Nature Medicine Editors 2017; Robbins 2017). Although legal, such strategies represent detrimental practices cloaked in the vernacular of transparent science (Wagner and Michaels 2004; Johnson 2007; McGarity and Wagner 2008; Levy and Johns 2016).

EDUCATION FOR A CULTURE OF SCIENCE INTEGRITY

It is one thing to realize that there is a problem, but quite another to find effective solutions to that problem. For high scientific integrity to be the norm, the culture of science has to systematically embrace exemplary practices and discourage bad behavior (Benderly 2010). However, sometimes the system and its incentives are part of the problem. Many of the reliability concerns and detrimental behaviors we have discussed are related to the perverse incentives under which scientists may operate: grants and publications. In the case of grants, the more, and the bigger, the better, as far as institutions are concerned. In the case of publications, their number often seems to matter much more than their quality, probably because assessing the quality of a scientific article is not easy; there is no established, widely accepted way of doing so. The "status" of the journal in which an article is published, most often taken to be the impact factor of that journal, is also weighted heavily. Hence scientists strive to get their papers published in journals with high impact factors and may act unethically to do so (Nature Editors 2014). These incentives, particularly those concerning publications ("the more the merrier"), probably contribute to many of the lapses in integrity in ecotoxicology. This problem became particularly severe in China, with its high volume of scientific output and its system of tying substantial cash bonuses or other overt rewards to the impact factors of the journals in which researchers' articles were published, a system that contributed to ingenious and widespread misconduct (Hvistendahl 2013). In response, China recently initiated sweeping reforms with strong disincentives for academic misconduct: Institutions will no longer investigate themselves, funding will be withheld from institutions at which misconduct occurred, researchers will be deterred from publishing findings in journals that are deemed to be of poor academic quality and set up merely for profit, and scientists found to have committed willful misconduct will face severe penalties (Cyranoski 2018; Nature Editors 2018a). How the education for and enforcement of these initiatives develop is likely to influence research communities and funders elsewhere. Likewise, in ecotoxicology, moving to more ethical practices in which integrity is central to any endeavor will not be easy to accomplish, nor will it be achieved quickly.

Education of ecotoxicologists, both young and old, is a key way forward toward a better culture of integrity in our discipline. That education can be delivered in a variety of ways, the 2 most obvious and practical being 1) the publication of journal articles in which integrity and ethics are discussed and 2) courses run by scientific societies such as SETAC. The present article is a very direct attempt at highlighting integrity issues in our field, in the hope that making ecotoxicologists aware of unethical practices will lead them to act more ethically. Other, less direct attempts have involved the publication of papers suggesting how to improve the quality of ecotoxicology research, from the planning stage (Harris et al. 2014) to the publication stage (e.g., Moermond et al. 2016; Hanson et al. 2017). However, it seems unlikely that published papers alone will have a significant influence on the quality of ecotoxicology research, because few scientists will be aware of them, and even fewer will read them carefully and subsequently act on the advice in them.

Although there has been some public discussion about what training and skills the ecotoxicologists of the future will require (Harris et al. 2017), this crucial aspect of producing better ecotoxicologists, capable of doing better, and hence more useful, research has rarely been addressed. Yet there are undoubtedly things that could be done to better educate the ecotoxicologists of the future. A radical proposal would be to require aspiring ecotoxicologists to pass examinations before they are allowed to practice ecotoxicology, either as researchers or as regulators. Many professions insist that their practitioners pass examinations before they are allowed to practice: Doctors, dentists, accountants, and lawyers are obvious examples. This strategy helps ensure that practitioners are adequately trained. As a first step toward the goal of ensuring that all ecotoxicologists are appropriately trained, specific courses could be introduced and attendance could become mandatory. Courses on topics such as experimental design, statistical analysis, data presentation, and how to write a scientific paper could be designed easily. In fact, many research organizations and some industries already run "in house" courses on these topics, which may include short, memorable messages such as David Glass's series of animations on common pitfalls in experimental design (Glass 2018). Munafò et al.'s (2017) "manifesto for reproducible science" is a framework that could readily be adapted to a course on integrity in ecotoxicology research. Indeed, as the issue of integrity (or, more accurately, the lack of it) has gained prominence in the last few years, some organizations have responded by running training courses for their young scientists on integrity and ethical behavior in research. SETAC could offer such training courses and already does so to a limited extent. Another possibility would be for consultancy companies to develop and run these training courses for clients, which could be universities, research organizations, or industrial companies. Consultancy companies that specialize exclusively in providing training could be established; this has happened already in many other professions.

In summary, identifying that there are problems with the way ecotoxicologists are currently trained about integrity issues in their discipline is only the first step. Better education of ecotoxicologists (both those starting their careers and those already well established) is needed, and it will need to be provided in a range of formats to maximize its chances of succeeding. The environment cannot be protected by poor-quality ecotoxicology.

PROMOTING SCIENTIFIC INTEGRITY IN ENVIRONMENTAL TOXICOLOGY

Scientific integrity is anchored in high-quality environmental research characterized by rigor, relevance, reproducibility, and objectivity. Our review suggests several conclusions, tangible actions, and less tangible directions that professional societies such as SETAC could take to encourage scientists, their supporting institutions, and science journals to maintain and improve a culture of science integrity. Scientific integrity is reinforced through full transparency, exemplified by full disclosure of potential conflicting and competing interests that could contribute to bias, and by making all data and observations readily accessible, as specified in the following list.

  1. Scientific integrity in ecotoxicology and the environmental sciences cannot be ensured by impeccable policies or checklists. It is an attitude to be embraced, maintained, and enforced through the support, guidance, and approval of one's peers within a community of practice.

  2. Reliability, rigor, relevance, and reproducibility of science are more important than novel advances, if those advances neglect these “four Rs.”

  3. Increased attention to a culture of quality management, training, and transparency could improve confidence in published findings.

  4. Studies that are not supported by primary data released through data repositories or detailed supporting information are not fully credible.

  5. Journal publishers and editors could strongly encourage the complete presentation of supporting data, with prominent labeling on the journal and article front matter indicating whether data are available. They should caution authors at the outset that the inability to produce data upon request could be cause for retraction.

  6. One practical step investigators can take toward improving reproducibility of experiments would be to produce detailed video illustration of their methods.

  7. As a community, scientists should be aware of and disclose potential conflicting or competing interests that could contribute to, or be perceived as, bias and should not tolerate extreme conflicts or bias.

  8. Judging science by its funder should be discouraged; rather, open-minded skepticism is appropriate when the funder has a stake in the outcome of a study.

  9. Scientists, like all people, have moral and ethical assumptions, based upon their values. These should not be intermixed with their interpretations and reporting of science. If scientists’ values lead them to cross the lines from analysis to advocacy, they need to be particularly careful about distinguishing between science, values, assumptions, and opinion.

  10. Professional societies such as SETAC have an important role in fostering respectful evidence-based dialogue, in meetings and correspondence on published works.

  11. Professional societies such as SETAC could support a standing training seminar on principles of scientific integrity, the transparent conduct of science, and best practices for peer review in conjunction with its annual meetings.

  12. Professional societies such as SETAC have a valuable role in facilitating balanced, expert reviews of controversial science topics, such as has been done with their Pellston Workshops® series and resulting publications.


Acknowledgment

We thank the 4 anonymous peer reviewers and the 2 institutional reviewers for their constructive criticisms and insights. This commentary and these perspectives followed discussions by the Society of Environmental Toxicology and Chemistry (SETAC) ad hoc subcommittee on Scientific Integrity (Fairbrother 2016). Author roles: CA Mebane wrote most of the paper, with contributions by JP Sumpter and A Fairbrother. All other authors participated in discussions, reviewed and edited the drafts, supported the recommendations, and approved the publication.

Disclaimer

The views expressed in this article are those of the authors and do not necessarily represent the views or policies of SETAC, the US Environmental Protection Agency, or the US Fish and Wildlife Service. Affiliations are given for identification, but do not necessarily imply endorsement. Author affiliations reflect a variety of academic, business, and governmental employment, and all authors have competing interests from our previous experiences and work environments, but hopefully these were somewhat balanced out. No specific funding was provided for the writing of this manuscript and no financial conflicts of interest were declared by any author.

Footnotes

Data Accessibility

No data sets are associated with this policy analysis.

SUPPLEMENTAL DATA

The supplemental data file includes the review comments, responses, and original manuscript.

Contributor Information

Christopher A Mebane, US Geological Survey, Boise, Idaho, USA.

John P Sumpter, Brunel University London, United Kingdom.

Anne Fairbrother, Exponent, Bellevue, Washington, USA.

Thomas P Augspurger, US Fish and Wildlife Service, Raleigh, North Carolina, USA.

Timothy J Canfield, US Environmental Protection Agency, Ada, Oklahoma, USA.

William L Goodfellow, Exponent, Alexandria, Virginia, USA.

Patrick D Guiney, University of Wisconsin, Madison, Wisconsin, USA.

Anne LeHuray, Chemical Management Associates, Alexandria, Virginia, USA.

Lorraine Maltby, University of Sheffield, Sheffield, United Kingdom.

David B Mayfield, Gradient, Seattle, Washington, USA.

Michael J McLaughlin, University of Adelaide, Adelaide, South Australia, Australia.

Lisa S Ortego, Bayer CropScience, Research Triangle Park, North Carolina, USA.

Tamar Schlekat, Society of Environmental Toxicology and Chemistry, Pensacola, Florida, USA.

Richard P Scroggins, Environment and Climate Change Canada, Ottawa, Ontario, Canada.

Tim A Verslycke, Gradient, Cambridge, Massachusetts, USA.

REFERENCES

  1. [AAAS] American Association for the Advancement of Science. 2000. The role and activities of scientific societies in promoting research integrity. In: A report of a conference; 2000 Apr 10–11; Washington, DC. 24 p. https://www.aaas.org/sites/default/files/content_files/The%20Role%20and%20Activities%20of%20Scientific%20Societies%20in%20Promoting%20Research%20Integrity.pdf
  2. Ågerstrand M, Sobek A, Lilja K, Linderoth M, Wendt-Rasch L, Wernersson AS, Rudén C. 2017. An academic researcher’s guide to increased impact on regulatory assessment of chemicals. Environ Sci: Processes Impacts 19(5): 644–655.
  3. Alberts B. 2012. Editor’s note [A bacterium that can grow using arsenic instead of phosphorus]. Science 334(6034): 1149.
  4. [Amnesty] Amnesty International. 2017. Mount Polley litigation summary. Ottawa (ON). [accessed 2018 Dec 1]. https://www.amnesty.ca/sites/amnesty/files/Mount%20Polley%20summary%20of%20litigations.pdf
  5. Anderson K. 2017. August 15. Trust falls — Are we in a new phase of corporate research? Scholarly Kitchen [blog]. Oakbrook Terrace (IL): Society for Scholarly Publishing. https://scholarlykitchen.sspnet.org/2017/08/15/trust-falls-new-phase-corporate-research/
  6. [ARC] Australian Research Council. 2007. Australian code for the responsible conduct of research. http://www.arc.gov.au/research-integrity
  7. Arciszewski TJ, Munkittrick KR. 2015. Development of an adaptive monitoring framework for long-term programs: An example using indicators of fish health. Integr Environ Assess Manag 11(4): 701–718. doi:10.1002/ieam.1636
  8. Atkinson RL, Macdonald I. 2010. White hat bias: The need for authors to have the spin stop with them. Int J Obes 34(1): 83. doi:10.1038/ijo.2009.269
  9. Augspurger T. 2014. November 6. SETAC hosts successful TSCA [Toxic Substances Control Act] risk assessment science seminar for congressional staff. SETAC Globe 15(11). https://globearchive.setac.org/2014/november/tsca-risk-assessment.html
  10. Baba A, Cook DM, McGarity TO, Bero LA. 2005. Legislating “sound science”: The role of the tobacco industry. Am J Public Health 95(S1): S20–S27. doi:10.2105/AJPH.2004.050963
  11. Baethge C. 2013. The effect of a conflict of interest disclosure form using closed questions on the number of positive conflicts of interest declared – A controlled study. PeerJ 1: e128. doi:10.7717/peerj.128
  12. Baker M. 2016a. May 25 [corrected 2016 Jul 28]. 1,500 scientists lift the lid on reproducibility. Nature (News). doi:10.1038/533452a
  13. Baker M. 2016b. How quality control could save your science. Nature 529: 456–458. doi:10.1038/529456a
  14. Bayer. 2018. Transparency in crop science. Bayer AG. [accessed 2018 Sep 26]. https://cropscience-transparency.bayer.com
  15. Benderly BL. 2010. November 5. Ain’t misbehavin’. Science (Careers). doi:10.1126/science.caredit.a1000107
  16. Benderly BL. 2014. February 12. Defending oneself from “product defense.” Science (Careers). doi:10.1126/science.caredit.a1400037
  17. Bernhardt ES, Rosi EJ, Gessner MO. 2017. Synthetic chemicals as agents of global change. Front Ecol Environ 15(2): 84–90. doi:10.1002/fee.1450
  18. Bero L, Anglemyer A, Vesterinen H, Krauth D. 2016. The relationship between study sponsorship, risks of bias, and research outcomes in atrazine exposure studies conducted in non-human animals: Systematic review and meta-analysis. Environ Int 92–93: 597–604. doi:10.1016/j.envint.2015.10.011
  19. Bes-Rastrollo M, Schulze MB, Ruiz-Canela M, Martinez-Gonzalez MA. 2014. Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: A systematic review of systematic reviews. PLoS Med 10(12): e1001578. doi:10.1371/journal.pmed.1001578
  20. Blumenstyk G. 2007. When research criticizes an industry. Chron High Educ 54(4): A21.
  21. Boden LI, Ozonoff D. 2008. Litigation-generated science: Why should we care? Environ Health Perspect 116(1): 117–122.
  22. Boehm PD, Ginn TC. 2013. The science of natural resource damage assessments. Environ Claims J 25(3): 185–225. doi:10.1080/10406026.2013.785910
  23. Bohannon J. 2013. Who’s afraid of peer review? Science 342(6154): 60–65. doi:10.1126/science.342.6154.60
  24. Boone MD, Bishop CA, Boswell LA, Brodman RD, Burger J, Davidson C, Gochfeld M, Hoverman JT, Neuman-Lee LA, Relyea RA et al. 2014. Pesticide regulation amid the influence of industry. BioScience 64(10): 917–922. doi:10.1093/biosci/biu138
  25. Bornstein-Forst SM. 2017. Establishing good laboratory practice at small colleges and universities. J Microbiol Biol Educ 18(1): 18.1.10. doi:10.1128/jmbe.v18i1.1222
  26. Borrell B. 2009. August 24. National Academy as National Enquirer? PNAS publishes theory that caterpillars originated from interspecies sex. Scientific American (The Sciences). https://www.scientificamerican.com/article/national-academy-as-national-enquirer/
  27. Brain R, Stavely J, Ortego L. 2016. In response: Resolving the perception of bias in a discipline founded on objectivity—A perspective from industry. Environ Toxicol Chem 35(5): 1070–1072. doi:10.1002/etc.3357
  28. Brix KV, DeForest DK, Adams WJ. 2011. The sensitivity of aquatic insects to divalent metals: A comparative analysis of laboratory and field data. Sci Total Environ 409(20): 4187–4197. doi:10.1016/j.scitotenv.2011.06.061
  29. Brix KV, Esbaugh AJ, Munley KM, Grosell M. 2012. Investigations into the mechanism of lead toxicity to the freshwater pulmonate snail, Lymnaea stagnalis. Aquat Toxicol 106–107: 147–156. doi:10.1016/j.aquatox.2011.11.007
  30. Bui C, Parrish R, Meredith M, Giddings J, editors. 2004. A history of SETAC Part 1: The beginning. Pensacola (FL): SETAC. 36 p.
  31. Burton GA, Di Giulio R, Costello D, Rohr JR. 2017. Slipping through the cracks: Why is the U.S. Environmental Protection Agency not funding extramural research on chemicals in our environment? Environ Sci Technol 51(2): 755–756. doi:10.1021/acs.est.6b05877
  32. Burton GA, Ward CH. 2012. Editorial preface [letters to the editor on the Exxon Valdez oil spill and pink salmon]. Environ Toxicol Chem 31(3): 469. doi:10.1002/etc.1741
  33. Burwell SM, VanRoekel S, Park T, Mancini DJ. 2013. Open data policy-managing information as an asset. Washington (DC): US Office of Management and Budget. M-13-13. [accessed 2018 Sep 15]. https://obamawhitehouse.archives.gov/sites/default/files/omb/memoranda/2013/m-13-13.pdf
  34. Buys DJ, Stojak AR, Stiteler W, Baker TF. 2015. Ecological risk assessment for residual coal fly ash at Watts Bar Reservoir, Tennessee: Limited alteration of riverine-reservoir benthic invertebrate community following dredging of ash-contaminated sediment. Integr Environ Assess Manag 11(1): 43–55. doi:10.1002/ieam.1577
  35. Cain DM, Loewenstein G, Moore DA. 2005. The dirt on coming clean: Perverse effects of disclosing conflicts of interest. J Legal Studies 34(1): 1–25. doi:10.1086/426699
  36. Calfee RD, Puglis HJ, Little EE, Brumbaugh WG, Mebane CA. 2016. Quantifying fish swimming behavior in response to acute exposure of aqueous copper using computer assisted video and digital image analysis. J Vis Exp (108): e53477. doi:10.3791/53477
  37. Callaway E. 2015. August 18. Faked peer reviews prompt 64 retractions. Nature (News). doi:10.1038/nature.2015.18202
  38. Campbell T, Friesen J. 2015. March 3. Why people “fly from facts”: Research shows the appeal of untestable beliefs and how they lead to a polarized society. Scientific American (Behavior & Society). https://www.scientificamerican.com/article/why-people-fly-from-facts/
  39. Carey TL, Woodward A. 2017. September 2. These scientists got to see their competitors’ research through public records requests. BuzzFeed News (Science). https://www.buzzfeednews.com/article/teresalcarey/when-scientists-foia
  40. Chapman PM, Brain RA, Belden JB, Forbes VE, Mebane CA, Hoke RA, Ankley GT, Solomon KR. 2018. Collaborative research among academia, business, and government. Integr Environ Assess Manag 14(1): 152–154. doi:10.1002/ieam.1975
  41. Christensen SW, Klauda RJ. 1988. Two scientists in the courtroom: What they didn’t teach us in graduate school. Monogr Am Fish Soc 4: 307–315.
  42. Collins SL, Verdier JM. 2017. The coming era of open data. BioScience 67(3): 191–192. doi:10.1093/biosci/bix023
  43. Cope MB, Allison DB. 2009. White hat bias: Examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting. Int J Obes 34(1): 84–88.
  44. Couzin-Frankel J. 2017. February 22. Firing of veteran NIH scientist prompts protests over publication ban. Science (News). doi:10.1126/science.aal0808
  45. Cranor CF. 2008. The tobacco strategy entrenched. Science 321(5894): 1296–1297. doi:10.1126/science.1162339
  46. Curfman GD, Morrissey S, Drazen JM. 2005. Expression of concern: Bombardier et al., “Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis,” N Engl J Med 2000;343:1520–8. N Engl J Med 353(26): 2813–2814. doi:10.1056/NEJMe058314
  47. Cyranoski D. 2018. China introduces sweeping reforms to crack down on academic misconduct. Nature 558: 171. doi:10.1038/d41586-018-05359-8
  48. Davis P. 2016. August 23. Scientific Reports on track to become largest journal in the world. Scholarly Kitchen [blog]. Oakbrook Terrace (IL): Society for Scholarly Publishing. https://scholarlykitchen.sspnet.org/2016/08/23/scientific-reports-on-track-to-become-largest-journal-in-the-world/
  49. DeForest DK, Van Genderen EJ. 2012. Application of U.S. EPA guidelines in a bioavailability-based assessment of ambient water quality criteria for zinc in freshwater. Environ Toxicol Chem 31(6): 1264–1272. doi:10.1002/etc.1810
  50. Descamps H. 2008. Natural resource damage assessment (NRDA) under the European Directive on Environmental Liability: A comparative legal point of view. Océanis 32(3/4): 439–46.
  51. Dixon PM, Pechmann JHK. 2005. A statistical test to show negligible trend. Ecology 86(7): 1751–1756. doi:10.1890/04-1343
  52. Doremus H. 2007. Scientific and political integrity in environmental policy. Texas Law J 86: 1601–1653.
  53. Douglas H. 2015. Politics and science: Untangling values, ideologies, and reasons. Ann Am Acad Pol Soc Sci 658: 296–306. doi:10.1177/0002716214557237
  54. Douglas HE. 2014. Scientific integrity in a politicized world. In: Schroeder-Heister P, Heinzmann G, Hodges W, Bour PE, editors. Logic, methodology and philosophy of science. Proceedings of the Fourteenth International Congress of Logic, Methodology and Philosophy of Science; 2011 Jul 19–26; Nancy, France. London (UK): College Publ. p 254–268.
  55. Duke CS, Porter JH. 2013. The ethics of data sharing and reuse in biology. BioScience 63(6): 483–489. doi:10.1525/bio.2013.63.6.10
  56. Edwards A. 2016. Reproducibility: Team up with industry. Nature 531: 299–301. doi:10.1038/531299a
  57. Edwards MA, Roy S. 2016. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ Eng Sci 34(1): 51–61. doi:10.1089/ees.2016.0223
  58. Elliott ED. 1988. The future of toxic torts: Of chemophobia, risk as a compensable injury and hybrid compensation systems. Houston Law Rev 25: 781–799.
  59. Elliott KC. 2014. Financial conflicts of interest and criteria for research credibility. Erkenntnis 79(5): 917–937. doi:10.1007/s10670-013-9536-2
  60. Elliott KC. 2016. Standardized study designs, value judgments, and financial conflicts of interest in research. Perspect Sci 24(5): 529–551. doi:10.1162/POSC_a_00222
  61. Else H. 2018. Does science have a bullying problem? Nature 563: 616–618.
  62. Enserink M. 2014. Sabotaged scientist sues Yale and her lab chief. Science 343(6175): 1065–1066. doi:10.1126/science.343.6175.1065
  63. Enserink M. 2017. March 21. A groundbreaking study on the dangers of “microplastics” may be unraveling. Science (News). doi:10.1126/science.aal0939
  64. Ericson JF, Laenge R, Sullivan DE. 2002. Comment on “Pharmaceuticals, hormones, and other organic wastewater contaminants in U.S. streams, 1999–2000: A national reconnaissance.” Environ Sci Technol 36(18): 4005–4006. doi:10.1021/es0200903
  65. European Commission. 2016. October 28. Open access to scientific information | Digital Single Market. Brussels (BE). https://ec.europa.eu/digital-single-market/en/open-access-scientific-information
  66. Extance A. 2018. November 5. Chemistry graduate student admits poisoning colleague with carcinogen. Chemistry World (News). https://www.chemistryworld.com/news/chemistry-graduate-student-admits-poisoning-co-worker/3009717.article
  67. Fairbrother A. 2016. February 18. New subcommittee on scientific integrity. SETAC Globe 17(2). https://globearchive.setac.org/2016/february/integrity.html
  68. Fairbrother A, Bennett RS. 1999. Ecological risk assessment and the precautionary principle. Hum Ecol Risk Assess 5(5): 943–949. doi:10.1080/10807039991289220
  69. Fanelli D. 2009. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 4(5): e5738. doi:10.1371/journal.pone.0005738
  70. Fanelli D. 2012. Negative results are disappearing from most disciplines and countries. Scientometrics 90(3): 891–904. doi:10.1007/s11192-011-0494-7
  71. Farris JL, Van Hassel JH, editors. 2006. Freshwater bivalve ecotoxicology. Boca Raton (FL): SETAC and CRC Pr. 408 p.
  72. Fellner C. 2018. June 18. Toxic secrets: Professor “bragged about burying bad science” on 3M chemicals. Sydney Morning Herald (Investigation). https://www.smh.com.au/lifestyle/health-and-wellness/toxic-secrets-professor-bragged-about-burying-bad-science-on-3m-chemicals-20180615-p4zlsc.html
  73. Fenn DB, Milton NM. 1997. Advocacy and the scientist. Fisheries 22(11): 4.
  74. Flamini F, Hanley N, Shaw WD. 2004. Settlement versus litigation in environmental damage cases. Working paper. Glasgow (UK): Business School - Economics, Univ Glasgow. http://EconPapers.repec.org/RePEc:gla:glaewp:2003_4
  75. Folta KM. 2015. August 16. Transparency weaponized against scientists. Science 2.0. http://www.science20.com/kevin_folta/transparency_weaponized_against_scientists-156873
  76. Fraser H, Parker TH, Nakagawa S, Barnett A, Fidler F. 2018. July 16. Questionable research practices in ecology and evolution. PLoS One 13(7): e0200303. doi:10.1371/journal.pone.0200303
  77. Gardner M. 1989. Water with memory? The dilution affair. Skeptical Inquirer 13: 132–141.
  78. Geist DR, Abernethy CS, Hand KD, Cullinan VI, Chandler JA, Groves PA. 2006. Survival, development, and growth of fall Chinook salmon embryos, alevin, and fry exposed to a variable thermal and dissolved oxygen regime. Trans Am Fish Soc 135(6): 1462–1477. doi:10.1577/T05-294.1
  79. Ghorayshi A. 2016. June 29. “He thinks he’s untouchable”: Sexual harassment case exposes renowned Ebola scientist. BuzzFeed News (Science). https://www.buzzfeed.com/azeenghorayshi/michael-katze-investigation
  80. Gibbons A. 2014. July 16. Sexual harassment is common in scientific fieldwork. Science (News). https://www.sciencemag.org/news/2014/07/sexual-harassment-common-scientific-fieldwork
  81. Glanz J, Armendariz A. 2017. March 8. Years of ethics charges, but star cancer researcher gets a pass. New York Times (Science). https://www.nytimes.com/2017/03/08/science/cancer-carlo-croce.html
  82. Glass D. 2018. Experimental design for biologists: 1. System validation [video]. [accessed 2018 Nov 11]. https://www.youtube.com/watch?v=qK9fXYDs--8. 4:06 minutes.
  83. Gleick PH, Adams RM, Amasino RM, Anders E, Anderson DJ, Anderson WW, Anselin LE, Arroyo MK, Asfaw B, Ayala FJ et al. 2010. Climate change and the integrity of science. Science 328(5979): 689–690. doi:10.1126/science.328.5979.689
  84. Goldsmith BJ, Waikem TK, Franey T. 2014. Environmental damage liability regimes concerning oil spills – A global review and comparison. Int Oil Spill Conf Proc 2014(1): 2172–2192. doi:10.7901/2169-3358-2014.1.2172
  85. Goodstein D. 1995. Conduct and misconduct in science. Ann N Y Acad Sci 775(1): 31–38. doi:10.1111/j.1749-6632.1996.tb23124.x
  86. Gustavson KE, Barnthouse LW, Brierley CL, Clark EH II, Ward CH. 2007. Superfund and mining megasites. Environ Sci Technol 41(8): 2667–2672. doi:10.1021/es0725091
  87. Halpern M. 2015. Freedom to bully: How laws intended to free information are used to harass researchers. Cambridge (MA): Union of Concerned Scientists. 20 p. https://www.ucsusa.org/center-science-and-democracy/protecting-scientists-harassment/freedom-bully-how-laws
  88. Halpern M, Mann M. 2015. Transparency versus harassment. Science 348(6234): 479. doi:10.1126/science.aac4245
  89. Hannah R, D’Aco VJ, Anderson PD, Buzby ME, Caldwell DJ, Cunningham VL, Ericson JF, Johnson AC, Parke NJ, Samuelian JH et al. 2009. Exposure assessment of 17α-ethinylestradiol in surface waters of the United States and Europe. Environ Toxicol Chem 28(12): 2725–2732. doi:10.1897/08-622.1
  90. Hanson ML, Wolff BA, Green JW, Kivi M, Panter GH, Warne MSJ, Ågerstrand M, Sumpter JP. 2017. How we can make ecotoxicology more valuable to environmental protection. Sci Total Environ 578: 228–235. doi:10.1016/j.scitotenv.2016.07.160
  91. Harris CA, Scott AP, Johnson AC, Panter GH, Sheahan D, Roberts M, Sumpter JP. 2014. Principles of sound ecotoxicology. Environ Sci Technol 48(6): 3100–3111. doi:10.1021/es4047507
  92. Harris CA, Sumpter JP. 2015. Could the quality of published ecotoxicological research be better? Environ Sci Technol 49(16): 9495–9496. doi:10.1021/acs.est.5b01465
  93. Harris D, Patrick M. 2011. June 21. Is “big food’s” big money influencing the science of nutrition? New York (NY): ABC News. http://abcnews.go.com/US/big-food-money-accused-influencing-science/story?id=13845186
  94. Harris MJ, Huggett DB, Staveley JP, Sumpter JP. 2017. What training and skills will the ecotoxicologists of the future require? Integr Environ Assess Manag 13(4): 580–584. doi:10.1002/ieam.1877
  95. Hayes TB. 2004. There is no denying this: Defusing the confusion about atrazine. BioScience 54(12): 1138–1149. doi:10.1641/0006-3568(2004)054[1138:TINDTD]2.0.CO;2
  96. Henderson D, Thomson K. 2017. January 19. Why should scientific results be reproducible? [video]. Boston (MA): WGBH, NOVA. http://www.pbs.org/wgbh/nova/next/body/reproducibility-explainer/. 15 min.
  97. Hey E, Chalmers I. 2010. Mis-investigating alleged research misconduct can cause widespread, unpredictable damage. J R Soc Med 103(4): 133–138. doi:10.1258/jrsm.2010.09k045
  98. Hobbs DA, Warne MSJ, Markich SJ. 2005. Evaluation of criteria used to assess the quality of aquatic toxicity data. Integr Environ Assess Manag 1(3): 174–180. doi:10.1897/2004-003R.1
  99. Holdren JP. 2013. February 22. Increasing access to the results of federally funded scientific research. Memorandum for the heads of executive departments and agencies. Washington (DC): Executive Office of the President, Office of Science and Technology Policy. 6 p.
  100. Hopkin M. 2006. Ecology: Caught between shores. Nature 440(7081): 144–145. doi:10.1038/440144a
  101. Horowitz AJ. 2013. A review of selected inorganic surface water quality-monitoring practices: Are we really measuring what we think, and if so, are we doing it right? Environ Sci Technol 47(6): 2471–2486. doi:10.1021/es304058q
  102. Huff D, Geis I. 1954. How to lie with statistics. New York (NY): WW Norton (1993 reissue). 142 p.
  103. Hurlbert SH. 1984. Pseudoreplication and the design of ecological field experiments. Ecol Monogr 54(2): 187–211. doi:10.2307/1942661
  104. Hutchings JA. 1997. Is scientific inquiry incompatible with government information control? Can J Fish Aquat Sci 54(5): 1198–1210. doi:10.1139/f97-051
  105. Hvistendahl M. 2013. China’s publication bazaar. Science 342(6162): 1035–1039. doi:10.1126/science.342.6162.1035
  106. [ICMJE] International Committee of Medical Journal Editors. 2016. ICMJE form for disclosure of potential conflicts of interest. 3 p. http://icmje.org/conflicts-of-interest/
  107. Ioannidis JPA. 2005. Why most published research findings are false. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
  108. Johnson AC, Donnachie RL, Sumpter JP, Jürgens MD, Moeckel C, Pereira MG. 2017. An alternative approach to risk rank chemicals on the threat they pose to the aquatic environment. Sci Total Environ 599–600: 1372–1381. doi:10.1016/j.scitotenv.2017.05.039
  109. Johnson AC, Sumpter JP. 2016. Are we going about chemical risk assessment for the aquatic environment the wrong way? Environ Toxicol Chem 35(7): 1609–1616. doi:10.1002/etc.3441
  110. Johnson BL. 2007. Intimidation of scientists: A HERA experience. Hum Ecol Risk Assess 13(3): 475–477. doi:10.1080/10807030701340870
  111. Johnson DR, Ecklund EH. 2016. Ethical ambiguity in science. Sci Eng Ethics 22(4): 989–1005. doi:10.1007/s11948-015-9682-9
  112. Kapustka L. 2016. Words matter. Integr Environ Assess Manag 12(3): 592–593. doi:10.1002/ieam.1767
  113. Keith R. 2015. October 9. 13th retraction issued for Jesús Ángel Lemus. Retraction Watch. http://retractionwatch.com/2015/10/09/13th-retraction-issued-for-jesus-angel-lemus/
  114. Kidwell MC, Lazarević LB, Baranski E, Hardwicke TE, Piechowski S, Falkenberg L-S, Kennett C, Slowik A, Sonnleitner C, Hess-Holden C et al. 2016. Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biol 14(5): e1002456. doi:10.1371/journal.pbio.1002456
  115. Kintisch E. 2010. August 19. “I told ya, you can’t stop the rage,” UC endocrinologist Hayes writes to Syngenta. Science (News). http://news.sciencemag.org/2010/08/i-told-ya-you-cant-stop-rage-uc-endocrinologist-hayes-writes-syngenta
  116. Kloor K. 2015. February 11. Agricultural researchers rattled by demands for documents from group opposed to GM foods. Science (News). doi:10.1126/science.aaa7846
  117. Kollipara P. 2015. February 13. Open records laws becoming vehicle for harassing academic researchers, report warns. Science (News). doi:10.1126/science.aaa7856
  118. Kolpin DW, Furlong ET, Meyer MT, Thurman EM, Zaugg SD, Barber LB, Buxton HT. 2002a. Pharmaceuticals, hormones, and other organic wastewater contaminants in U.S. streams, 1999–2000: A national reconnaissance. Environ Sci Technol 36(6): 1202–1211. doi:10.1021/es011055j
  119. Kolpin DW, Furlong ET, Meyer MT, Thurman EM, Zaugg SD, Barber LB, Buxton HT. 2002b. Response to comment on “Pharmaceuticals, hormones, and other organic wastewater contaminants in U.S. streams, 1999–2000: A national reconnaissance.” Environ Sci Technol 36(18): 4007–4008. doi:10.1021/es020136s
  120. Krimsky S. 2003. Science in the private interest: Has the lure of profits corrupted biomedical research? Lanham (MD): Rowman and Littlefield. 265 p.
  121. Krimsky S. 2005. The funding effect in science and its implications for the judiciary. J Law Policy 13(1): 43–68.
  122. Krimsky S. 2007. When conflict-of-interest is a factor in scientific misconduct. Med Law 26(3): 447–463.
  123. Krimsky S. 2013. Do financial conflicts of interest bias research? An inquiry into the “funding effect” hypothesis. Sci Technol Hum Values 38(4): 566–587. doi:10.1177/0162243912456271
  124. Krimsky S, Gillam C. 2018. Roundup litigation discovery documents: Implications for public health and journal ethics. J Public Health Policy 39(3): 318–326. doi:10.1057/s41271-018-0134-z
  125. Krumholz HM, Ross JS, Presler AH, Egilman DS. 2007. What have we learnt from Vioxx? BMJ 334(7585): 120–123. doi:10.1136/bmj.39024.487720.68
  126. Lackey RT. 2001. Values, policy, and ecosystem health. BioScience 51(6): 437–443. doi:10.1641/0006-3568(2001)051[0437:VPAEH]2.0.CO;2
  127. Lackey RT. 2007. Science, scientists, and policy advocacy. Conserv Biol 21(1): 12–17. doi:10.1111/j.1523-1739.2006.00639.x
  128. Laine H. 2017. Afraid of scooping: Case study on researcher strategies against fear of scooping in the context of open science. Data Sci J 16: 29. doi:10.5334/dsj-2017-029
  129. Levy KE, Johns DM. 2016. When open data is a Trojan Horse: The weaponization of transparency in science and governance. Big Data & Society 3(1). doi:10.1177/2053951715621568
  130. Lewandowsky S, Bishop D. 2016. Research integrity: Don’t let transparency damage science. Nature 529: 459–461. doi:10.1038/529459a
  131. Lexchin J, Bero LA, Djulbegovic B, Clark O. 2003. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 326(7400): 1167–1170. doi:10.1136/bmj.326.7400.1167
  132. Lindenmayer DB, Likens GE. 2010. The science and application of ecological monitoring. Biol Conserv 143(6): 1317–1328. doi:10.1016/j.biocon.2010.02.013
  133. Macleod M. 2014. Some salt with your statin, Professor? PLoS Biol 12(1): e1001768. doi:10.1371/journal.pbio.1001768
  134. Mandrioli D, Kearns CE, Bero LA. 2016. Relationship between research outcomes and risk of bias, study sponsorship, and author financial conflicts of interest in reviews of the effects of artificially sweetened beverages on weight outcomes: A systematic review of reviews. PLoS One 11(9): e0162198. doi:10.1371/journal.pone.0162198
  135. Marcus A, Oransky I. 2016. January 21. CRISPR controversy reveals how badly journals handle conflicts of interest. Boston (MA): STAT (The Watchdogs). https://www.statnews.com/2016/01/21/crispr-conflicts-of-interest/
  136. Marshall E. 1983. The murky world of toxicity testing. Science 220(4602): 1130–1132. doi:10.1126/science.6857237
  137. Martinson BC, Anderson MS, de Vries R. 2005. Scientists behaving badly. Nature 435(7043): 737–738. doi:10.1038/435737a
  138. Marty GD, Saksida SM, Quinn TJ. 2010. Relationship of farm salmon, sea lice, and wild salmon populations. Proc Natl Acad Sci 107(52): 22599–22604. doi:10.1073/pnas.1009573108
  139. McClellan F. 2008. The Vioxx litigation: A critical look at trial tactics, the tort system, and the roles of lawyers in mass tort litigation. DePaul Law Rev 57: 509–538.
  140. McClellan RO. 2018. Expression of concern. Crit Rev Toxicol. doi:10.1080/10408444.2018.1522786
  141. McGarity TO. 2003. Our science is sound science and their science is junk science: Science-based strategies for avoiding accountability and responsibility for risk-producing products and activities. Kansas Law Rev 897–937.
  142. McGarity TO, Wagner WE. 2008. Bending science: How special interests corrupt public health research. Cambridge (MA): Harvard Univ Pr. 400 p.
  143. McKinley J. 2016. September 8. G.E. spent years cleaning up the Hudson. Was it enough? New York Times. https://www.nytimes.com/2016/09/09/nyregion/general-electric-pcbs-hudson-river.html
  144. McNutt M. 2016. #IAmAResearchParasite. Science 351(6277): 1005. doi:10.1126/science.aaf4701
  145. McNutt M, Lehnert K, Hanson B, Nosek BA, Ellison AM, King JL. 2016. Liberating field science samples and data. Science 351(6277): 1024–1026. doi:10.1126/science.aad7048
  146. Mebane CA, Eakins RJ, Fraser BG, Adams WJ. 2015. Recovery of a mining-damaged stream ecosystem. Elementa 3(1): 000042. doi:10.12952/journal.elementa.000042
  147. Mebane CA, Hennessy DP, Dillon FS. 2008. Developing acute-to-chronic toxicity ratios for lead, cadmium, and zinc using rainbow trout, a mayfly, and a midge. Water Air Soil Pollut 188(1–4): 41–66. doi:10.1007/s11270-007-9524-8
  148. Mebane CA, Meyer JS. 2016. Environmental toxicology without chemistry and publications without discourse: Linked impediments to better science. Environ Toxicol Chem 35(6): 1335–1336. doi:10.1002/etc.3418
  149. Melvin SD, Munkittrick KR, Bosker T, MacLatchy DL. 2009. Detectable effect size and bioassay power of mummichog (Fundulus heteroclitus) and fathead minnow (Pimephales promelas) adult reproductive tests. Environ Toxicol Chem 28(11): 2416–2425. doi:10.1897/08-601.1
  150. Menzie C, Smith R. 2018. August 9. Scientific integrity must rise above partisanship. SETAC Globe 19(8). https://globe.setac.org/scientific-integrity-must-rise-above-partisanship/
  151. Meyer JL, Frumhoff PC, Hamburg SP, de la Rosa C. 2010. Above the din but in the fray: Environmental scientists as effective advocates. Front Ecol Environ 8(6): 299–305. doi:10.1890/090143
  152. Meyer JN, Francisco AB. 2013. A call for fuller reporting of toxicity test data. Integr Environ Assess Manag 9(2): 347–348. doi:10.1002/ieam.1406
  153. Michaels D. 2008. Doubt is their product: How industry’s assault on science threatens your health. Oxford (UK): Oxford Univ Pr. 384 p.
  154. Moermond CTA, Kase R, Korkaric M, Ågerstrand M. 2016. CRED: Criteria for reporting and evaluating ecotoxicity data. Environ Toxicol Chem 35(5): 1297–1309. doi:10.1002/etc.3259
  155. Mozur MC. 2012. A global professional society. Integr Environ Assess Manag 8(1): 1. doi:10.1002/ieam.1270
  156. Mudge JF, Barrett TJ, Munkittrick KR, Houlahan JE. 2012. Negative consequences of using α = 0.05 for environmental monitoring decisions: A case study from a decade of Canada’s Environmental Effects Monitoring Program. Environ Sci Technol 46(17): 9249–9255. doi:10.1021/es301320n
  157. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers E-J, Ware JJ, Ioannidis JPA. 2017. A manifesto for reproducible science. Nat Hum Behav 1: 0021. doi:10.1038/s41562-016-0021
  158. Munkittrick KR, Arens CJ, Lowell RB, Kaminski GP. 2009. A review of potential methods for determining critical effect size for designing environmental monitoring programs. Environ Toxicol Chem 28(7): 1361–1371. doi:10.1897/08-376.1
  159. Murtaugh PA. 2007. Simplicity and complexity in ecological data analysis. Ecology 88(1): 56–62. doi:10.1890/0012-9658(2007)88[56:SACIED]2.0.CO;2
  160. Muscat JE, Huncharek MS. 2008. Perineal talc use and ovarian cancer: A critical review. Eur J Cancer Prev 17(2): 139–146. doi:10.1097/CEJ.0b013e32811080ef
  161. [NAS] National Academy of Sciences. 1992. Responsible science: Ensuring the integrity of the research process. Washington (DC): Natl Acad Pr. 224 p. doi:10.17226/1864
  162. [NAS] National Academy of Sciences. 2017. Fostering integrity in research. Washington (DC): Natl Acad Pr. 284 p. doi:10.17226/21896
  163. Nature Editors. 2014. September 17. Why high-profile journals have more retractions. Nature (News). doi:10.1038/nature.2014.15951
  164. Nature Editors. 2018a. China sets a strong example on how to address scientific fraud. Nature 558: 162. doi:10.1038/d41586-018-05417-1
  165. Nature Editors. 2018b. Nature journals tighten rules on non-financial conflicts. Nature 554: 6. doi:10.1038/d41586-018-01420-8
  166. Nature Medicine Editors. 2017. Take science off the stand. Nat Med 23(3): 265. doi:10.1038/nm.4303
  167. Nelson B. 2009. Data sharing: Empty archives. Nature 461: 160–163. doi:10.1038/461160a
  168. Nelson MP, Vucetich JA. 2009. On advocacy by environmental scientists: What, whether, why, and how. Conserv Biol 23(5): 1090–1101. doi:10.1111/j.1523-1739.2009.01250.x
  169. Nichols T. 2017. The death of expertise: The campaign against established knowledge and why it matters. Oxford (UK): Oxford Univ Pr. 252 p.
  170. Norton SB, Cormier SM, Suter GW II. 2002. The easiest person to fool. Environ Toxicol Chem 21(6): 1099–1100. doi:10.1002/etc.5620210601
  171. Norton SB, Rao L, Suter GW II, Cormier SM. 2003. Minimizing cognitive errors in site-specific causal assessments. Hum Ecol Risk Assess 9(1): 213–229. doi:10.1080/713609860
  172. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G et al. 2015. Promoting an open research culture. Science 348(6242): 1422–1425. doi:10.1126/science.aab2374
  173. Nosek BA, Errington TM. 2017. Making sense of replications. eLife 6: e23383. doi:10.7554/eLife.23383
  174. [NRC] National Research Council (US). 2002. Integrity in scientific research: Creating an environment that promotes responsible conduct. Washington (DC): Natl Acad Pr.
  175. [NRC-CNRC] National Research Council Canada. 2013. NRC’s research integrity policy. Ottawa (ON). 28 p. http://www.nrc-cnrc.gc.ca/eng/about/policies/research_integrity/
  176. Nuzzo R. 2015. October 8. How scientists fool themselves – And how they can stop. Nature 526: 182–185. doi:10.1038/526182a
  177. Obama B. 2009. Scientific integrity (Presidential documents, Memorandum of March 9, 2009). Fed Regist 74(46): 10671–10672.
  178. Ogden LE. 2016. Nine years of censorship. Nature 533: 26–28. doi:10.1038/533026a
  179. Oreskes N, Carlat D, Mann ME, Thacker PD, vom Saal FS. 2015. Viewpoint: Why disclosure matters. Environ Sci Technol 49(13): 7527–7528. doi:10.1021/acs.est.5b02726
  180. Oreskes N, Conway EM. 2011. Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. London (UK): Bloomsbury Pr. 355 p.
  181. [ORI] The Office of Research Integrity. 2018. Definition of research integrity. [accessed 2018 Jan 28]. https://ori.hhs.gov/definition-misconduct
  182. Owen SF, Huggett DB, Hutchinson TH, Hetheridge MJ, McCormack P, Kinter LB, Ericson JF, Constantine LA, Sumpter JP. 2010. The value of repeating studies and multiple controls: Replicated 28-day growth studies of rainbow trout exposed to clofibric acid. Environ Toxicol Chem 29(12): 2831–2839. doi:10.1002/etc.351
  183. Parker KR, Wiens JA. 2005. Assessing recovery following environmental accidents: Environmental variation, ecological assumptions, and strategies. Ecol Appl 15(6): 2037–2051. doi:10.1890/04-1723
  184. Parker TH, Forstmeier W, Koricheva J, Fidler F, Hadfield JD, Chee YE, Kelly CD, Gurevitch J, Nakagawa S. 2016. Transparency in ecology and evolution: Real problems, real solutions. Trends Ecol Evol 31(9): 711–719. doi:10.1016/j.tree.2016.07.002
  185. Paul AP, Garbarino JR, Olsen LD, Rosen MR, Mebane CA, Struzeski TM. 2016. Potential sources of analytical bias and error in selected trace element data-quality analyses. Reston (VA): US Geological Survey. Scientific Investigations Report 2016-5135. 68 p. http://pubs.er.usgs.gov/publication/sir20165135
  186. Peterman RM, M’Gonigle M. 1992. Statistical power analysis and the precautionary principle. Mar Pollut Bull 24(5): 231–234. doi:10.1016/0025-326X(92)90559-O
  187. Pielke RA Jr. 2007. The honest broker. New York (NY): Cambridge Univ Pr. 188 p.
  188. PLOS. 2014. Data availability. San Francisco (CA). [accessed 2017 Mar 2]. http://journals.plos.org/plosone/s/data-availability
  189. PLoS Medicine Editors. 2008. Making sense of non-financial competing interests. PLoS Med 5(9): e199. doi:10.1371/journal.pmed.0050199
  190. Power M, Power G, Dixon DG. 1995. Detection and decision-making in environmental effects monitoring. Environ Manage 19(5): 629–639. doi:10.1007/BF02471945
  191. Raloff J. 2010. May 6. Atrazine paper’s challenge: Who’s responsible for accuracy? Science News (Science & the Public). https://www.sciencenews.org/blog/science-public/atrazine-paper%E2%80%99s-challenge-who%E2%80%99s-responsible-accuracy
  192. Rekdal OB. 2014. Academic urban legends. Soc Stud Sci 44(4): 638–654. doi:10.1177/0306312714535679
  193. Renner R. 2005. Proposed selenium standard under attack. Environ Sci Technol 39(6): 125A–126A. doi:10.1021/es053215n
  194. Resnik DB, Neal T, Raymond A, Kissling GE. 2015. Research misconduct definitions adopted by U.S. research institutions. Account Res 22(1): 14–21. doi:10.1080/08989621.2014.891943
  195. Resnik DB, Shamoo AE. 2011. The Singapore statement on research integrity. Account Res 18(2): 71–75. doi:10.1080/08989621.2011.557296
  196. Robbins R. 2017. January 10. A supplement maker tried to silence this Harvard doctor — And put academic freedom on trial. Boston (MA): STAT. [accessed 2017 Mar 6]. https://www.statnews.com/2017/01/10/supplement-harvard-pieter-cohen/
  197. Roberts PD, Stewart GB, Pullin AS. 2006. Are review articles a reliable source of evidence to support conservation and environmental management? A comparison with medicine. Biol Conserv 132(4): 409–423. doi:10.1016/j.biocon.2006.04.034
  198. Rohr JR, McCoy KA. 2010. Preserving environmental health and scientific credibility: A practical guide to reducing conflicts of interest. Conserv Lett 3(3): 143–150. doi:10.1111/j.1755-263X.2010.00114.x
  199. Ross T. 2017. January 28. Crime in the cancer lab. New York Times (Opinion). https://www.nytimes.com/2017/01/28/opinion/sunday/a-crime-in-the-cancer-lab.html
  200. Ruff K. 2015. Scientific journals and conflict of interest disclosure: What progress has been made? Environ Health 14(1): 1–8. doi:10.1186/s12940-015-0035-6
  201. Russell M, Boulton G, Clarke P, Eyton D, Norton J. 2010. The independent climate change email review. 160 p. http://www.cce-review.org/
  202. Santore RC, Ryan AC, Kroglund F, Teien HC, Rodriguez PH, Stubblefield WA, Cardwell AS, Adams WJ, Nordheim E. 2018. Development and application of a Biotic Ligand Model for predicting the toxicity of dissolved and precipitated aluminum. Environ Toxicol Chem 37(1): 70–79. doi:10.1002/etc.4020
  203. Sass JB, Castleman B, Wallinga D. 2005. Vinyl chloride: A case study of data suppression and misrepresentation. Environ Health Perspect 113(7): 809–812.
  204. Schäfer RB, Bundschuh M, Focks A, von der Ohe PC. 2013. To the Editor [authors should make all their raw data accessible]. Environ Toxicol Chem 32(4): 734–735. doi:10.1002/etc.2140
  205. Schiermeier Q. 2012. July 9. Arsenic-loving bacterium needs phosphorus after all. Nature (News). doi:10.1038/nature.2012.10971
  206. Schindler DW. 1998. Replication versus realism: The need for ecosystem-scale experiments. Ecosystems 1(4): 323–334. doi:10.1007/s100219900026
  207. Scott AP. 2012. Do mollusks use vertebrate sex steroids as reproductive hormones? Part I: Critical appraisal of the evidence for the presence, biosynthesis and uptake of steroids. Steroids 77(13): 1450–1468. doi:10.1016/j.steroids.2012.08.009
  208. Scott AP. 2018. Is there any value in measuring vertebrate steroids in invertebrates? Gen Comp Endocrinol 265: 77–82. doi:10.1016/j.ygcen.2018.04.005
  209. Scott JM, Rachlow JL. 2011. Refocusing the debate about advocacy. Conserv Biol 25(1): 1–3. doi:10.1111/j.1523-1739.2010.01629.x
  210. Scott JM, Rachlow JL, Lackey RT, Pidgorna AB, Aycrigg JL, Feldman GR, Svancara LK, Rupp DA, Stanish DI, Steinhorst RK. 2007. Policy advocacy in science: Prevalence, perspectives, and implications for conservation biologists. Conserv Biol 21(1): 29–35. doi:10.1111/j.1523-1739.2006.00641.x
  211. Sedlak D. 2016. Crossing the imaginary line. Environ Sci Technol 50(18): 9803–9804. doi:10.1021/acs.est.6b04432
  212. Shaw D, Satalkar P. 2018. Researchers’ interpretations of research integrity: A qualitative study. Account Res: 1–15. doi:10.1080/08989621.2017.1413940
  213. Shrader-Frechette K. 2012. Research integrity and conflicts of interest: The case of unethical research-misconduct charges filed by Edward Calabrese. Account Res 19(4): 220–242. doi:10.1080/08989621.2012.700882
  214. Skorupa JP, Presser TS, Hamilton SJ, Lemly AD, Sample BE. 2004. EPA’s draft tissue-based selenium criterion: A technical review. Washington (DC): US Fish and Wildlife Service, US Geological Survey, US Forest Service, CH2M Hill. 35 p. https://wwwrcamnl.wr.usgs.gov/Selenium/library.htm
  215. Smaldino PE, McElreath R. 2016. The natural selection of bad science. Royal Soc Open Sci 3(9). doi:10.1098/rsos.160384
  216. Smith R. 2006. Conflicts of interest: How money clouds objectivity. J Royal Soc Med 99(6): 292–297.
  217. Smith R, Roberts I. 2016. Time for sharing data to become routine: The seven excuses for not doing so are all invalid [version 1; referees: 2 approved, 1 approved with reservations]. F1000Research 5: 781. doi:10.12688/f1000research.8422.1
  218. Snyder BF, Hooper-Bui LM. 2018. A teachable moment: The relevance of ethics and the limits of science. BioScience. doi:10.1093/biosci/bix165
  219. Solomon KR, Carr JA, Du Preez LH, Giesy JP, Kendall RJ, Smith EE, Van Der Kraak GJ. 2008. Effects of atrazine on fish, amphibians, and aquatic reptiles: A critical review. Crit Rev Toxicol 38(9): 721–772. doi:10.1080/10408440802116496
  220. Sprague LA, Oelsner GP, Argue DM. 2017. Challenges with secondary use of multi-source water-quality data in the United States. Water Res 110: 252–261. doi:10.1016/j.watres.2016.12.024
  221. Stedeford T. 2007. Prior restraint and censorship: Acknowledged occupational hazards for government scientists. William and Mary Environ Law Policy Rev 31(3): 725–745.
  222. Stein R, Eilperin J. 2010. December 10. Obama administration issues guidelines designed to ensure “scientific integrity.” Washington Post (Politics). http://www.washingtonpost.com/wp-dyn/content/article/2010/12/17/AR2010121705774.html?noredirect=on
  223. Steneck NH. 2006. Fostering integrity in research: Definitions, current knowledge, and future directions. Sci Eng Ethics 12(1): 53–74. doi:10.1007/PL00022268
  224. Stern V. 2017. October 2. Updated: Why would a university pay a scientist found guilty of misconduct to leave? Science (News). doi:10.1126/science.aan7182
  225. Stokstad E. 2012. July 24. Fracking report criticized for apparent conflict of interest. Science (News). http://www.sciencemag.org/news/2012/07/fracking-report-criticized-apparent-conflict-interest
  226. Sumner P, Vivian-Griffiths S, Boivin J, Williams A, Venetis CA, Davies A, Ogden J, Whelan L, Hughes B, Dalton B, Boy F, Chambers CD. 2014. The association between exaggeration in health related science news and academic press releases: Retrospective observational study. BMJ 349: g7015. doi:10.1136/bmj.g7015
  227. Sumpter JP, Donnachie RL, Johnson AC. 2014. The apparently very variable potency of the anti-depressant fluoxetine. Aquat Toxicol 151: 57–60. doi:10.1016/j.aquatox.2013.12.010
  228. Sumpter JP, Scott AP, Katsiadaki I. 2016. Comments on Niemuth, N.J. and Klaper, R.D. 2015. Emerging wastewater contaminant metformin causes intersex and reduced fecundity in fish; Chemosphere 135, 38–45. Chemosphere 165: 566–569. doi:10.1016/j.chemosphere.2016.08.049
  229. Suter GW II, Cormier SM. 2015a. Bias in the development of health and ecological assessments and potential solutions. Hum Ecol Risk Assess 22(1): 99–115. doi:10.1080/10807039.2015.1056062
  230. Suter GW II, Cormier SM. 2015b. The problem of biased data and potential solutions for health and environmental assessments. Hum Ecol Risk Assess 21(7): 1736–1752. doi:10.1080/10807039.2014.974499
  231. Suter GW II, Norton SB, Cormier SM. 2002. A method for inferring the causes of observed impairments in aquatic ecosystems. Environ Toxicol Chem 21(6): 1101–1111. doi:10.1002/etc.5620210602
  232. Tollefson J. 2015. Earth science wrestles with conflict-of-interest policies. Nature 522: 403–404. doi:10.1038/522403a
  233. Topf A. 2016. July 10. Imperial Metals sues engineering companies over Mount Polley disaster. Mining.com. [accessed 2017 Feb 7]. http://www.mining.com/imperial-metals-sues-engineering-companies-mount-polley-disaster/
  234. US Department of Interior. 2014. Code of scientific and scholarly conduct [poster]. [accessed 2017 Feb 26]. https://www.doi.gov/scientificintegrity/upload/DOI-Code-of-Scientific-and-Scholarly-Conduct-Poster-December-2014.pdf
  235. [USEPA] US Environmental Protection Agency. 2013. Aquatic life ambient water quality criteria for ammonia – Freshwater 2013. Washington (DC). EPA 822-R-13-001. 255 p. http://water.epa.gov/scitech/swguidance/standards/criteria/aqlife/ammonia/index.cfm
  236. Van Der Kraak GJ, Hosmer AJ, Hanson ML, Kloas W, Solomon KR. 2014. Effects of atrazine in fish, amphibians, and reptiles: An analysis based on quantitative weight of evidence. Crit Rev Toxicol 44(sup5): 1–66. doi:10.3109/10408444.2014.967836
  237. van Iersel S, Swart EM, Nakadera Y, van Straalen NM, Koene JM. 2014. Effect of male accessory gland products on egg laying in gastropod molluscs. J Vis Exp 88: e51698. doi:10.3791/51698
  238. Van Kirk RW, Hill SL. 2007. Demographic model predicts trout population response to selenium based on individual-level toxicity. Ecol Model 206(3–4): 407–420. doi:10.1016/j.ecolmodel.2007.04.003
  239. van Kolfschooten F. 2002. Conflicts of interest: Can you believe what you read? Nature 416(6879): 360–363.
  240. Van Noorden R. 2014. February 24. Publishers withdraw more than 120 gibberish papers. Nature (News). doi:10.1038/nature.2014.14763
  241. Vines TH, Albert AYK, Andrew RL, Débarre F, Bock DG, Franklin MT, Gilbert KJ, Moore J-S, Renaut S, Rennison DJ. 2014. The availability of research data declines rapidly with article age. Curr Biol 24(1): 94–97.
  242. Vosoughi S, Roy D, Aral S. 2018. The spread of true and false news online. Science 359(6380): 1146–1151. doi:10.1126/science.aap9559
  243. Wadman M. 1997. $100m payout after drug data withheld. Nature 388(6644): 703.
  244. Wagner WE. 2005. The perils of relying on interested parties to evaluate scientific quality. Am J Public Health 95(S1): S99–S106. doi:10.2105/AJPH.2004.044792
  245. Wagner WE, Michaels D. 2004. Equal treatment for regulatory science: Extending the controls governing the quality of public research to private research. Am J Law Med 30(2–3): 119–154. doi:10.1177/009885880403000202
  246. Weltje L, Sumpter JP. 2017. What makes a concentration environmentally relevant? Critique and a proposal. Environ Sci Technol 51: 11520–11521. doi:10.1021/acs.est.7b04673
  247. Whaley P, Halsall C, Ågerstrand M, Aiassa E, Benford D, Bilotta G, Coggon D, Collins C, Dempsey C, Duarte-Davidson R et al. 2016. Implementing systematic review techniques in chemical risk assessment: Challenges, opportunities and recommendations. Environ Int 92–93: 556–564. doi:10.1016/j.envint.2015.11.002
  248. Wiens JA, Parker KR. 1995. Analyzing the effects of accidental environmental impacts: Approaches and assumptions. Ecol Appl 5(4): 1069–1083. doi:10.2307/2269355
  249. Wise J. 1997. Research suppressed for seven years by drug company. BMJ 314(7088): 1145. doi:10.1136/bmj.314.7088.1145
  250. Womack RP. 2015. Research data in core journals in biology, chemistry, mathematics, and physics. PLoS One 10(12): e0143460. doi:10.1371/journal.pone.0143460
  251. Young NS, Ioannidis JPA, Al-Ubaydli O. 2008. Why current publication practices may distort science. PLoS Med 5(10): e201. doi:10.1371/journal.pmed.0050201
