American Journal of Public Health
Editorial. 2019 Jul;109(7):978–980. doi: 10.2105/AJPH.2019.305142

Public Health and Independent Risk Assessment

Paolo Vineis
PMCID: PMC6603471; PMID: 31166731

“When the legislative and executive powers are united in the same person, or in the same body of magistrates, there can be no liberty; because apprehensions may arise, lest the same monarch or senate should enact tyrannical laws, to execute them in a tyrannical manner.”

C.-L. Montesquieu, The Spirit of the Laws, Book XI1

A general problem of contemporary societies is how to synthesize and discretize evidence that is extremely sparse and difficult to manage, to do so in a transparent way, and to transfer the conclusions into policy decisions. I show here how the Monographs of the International Agency for Research on Cancer (IARC) are an ingenious way to address this complex issue.

IARC MONOGRAPHS

In the early 1970s, Lorenzo Tomatis (1929–2007), who later became director of the IARC, was confronted with the emerging problem of identifying the causes of cancer. More specifically, the demand from the World Health Organization (WHO) and policymakers was for a “list” of carcinogens. Tomatis worked under the assumption that any list, to be authoritative, should have a strong methodological background and be based on consensus. This led to the IARC Monographs, a procedure that has proved effective, allowing a credible and transparent evaluation of the carcinogenicity of many hundreds of chemicals. The adjectives “credible” and “transparent” are important because this is what policymaking needs, rather than the unclear and often authoritarian opinions of single experts. The IARC Monographs represent for public health what the Cochrane Collaboration and the Evidence-Based Medicine movement have represented for clinical medicine (and, I would add, with similar problems and limitations).

There are three main features that make the Monographs unique in providing evidence for policymaking: (1) evidence is synthesized according to well-established procedures (instead of providing “lists” based on expert opinion alone); (2) evidence from different sources (humans, animals, mechanistic assays, biomarkers) is consistently used; and (3) such evidence is then summarized in discrete categories (carcinogenic to humans; probably carcinogenic; possibly carcinogenic; evidence does not allow a classification).

Most importantly, the original evidence is clearly distributed along a continuous scale (as the evaluation of the evidence on carcinogenicity of glyphosate suggests), and includes mutagenicity tests in bacteria, chromosome aberrations, biomarkers such as DNA adducts, experiments in rodents and other experimental animals, and a wide variety of epidemiological studies. The latter are also distributed along a scale related to quality of evidence, from good prospective studies with excellent exposure assessment and accurate identification of outcomes to poorly designed cross-sectional studies with major limitations. The IARC Monographs have made a (generally successful) effort to translate this disparate and unmanageable collection of evidence into a simple classification used for practical purposes.

There are, of course, issues with the Monograph approach, and some of its procedures could probably be improved. As with all simple classifications, real-life situations do not always fall neatly into one category. In terms of communication, too, some terms may create problems. In particular, “possibly carcinogenic” is not an easy wording for the press and for public communication to manage. Everything that is not impossible is possible; the word is therefore too generic. In addition, the press and the public should become accustomed to the subtleties behind the classification—that is, be aware that behind a given adjective there are nuances in the evidence.

The public and the press should indeed become accustomed to the fact that science is affected by uncertainty. What distinguishes science from other disciplines is that we try to make uncertainty visible, to quantify it, and to narrow it through further research (for this reason, the Monographs have a periodic update program). There is a vast literature on uncertainty in the regulation of environmental hazards—for example, in the procedures of the US Environmental Protection Agency (EPA). As an eloquent document of the US Institute of Medicine states,

As part of that mission, EPA estimates the nature, magnitude, and likelihood of risks to human health and the environment; identifies the potential regulatory actions that will mitigate those risks and protect public health and the environment; and uses that information to decide on appropriate regulatory action. Uncertainties, both qualitative and quantitative, in the data and analyses on which these decisions are based enter into the process at each step. As a result, the informed identification and use of the uncertainties inherent in the process is an essential feature of environmental decision making.2(p1)

In fact, social scientists remind us that procedures for risk management go beyond scientific evidence and imply the contribution of social values and social theory.3 In addition, uncertainty is clearly embedded in clinical reasoning, decision-making, and negotiation with the patient.4

CHECKS AND BALANCES OF POWER

Montesquieu’s theory of the separation of legislative, executive, and judicial powers creates a system of checks and balances that is the main guarantor against abuse and tyranny. However, when Montesquieu wrote his theory (in the 1740s), three other forms of power were still in their infancy: economic power, science and technology, and information. The article by Rosner et al. on asbestos in talc, in this issue of AJPH (p. 969), clearly describes, on the basis of a single but illustrative example, the complex intertwining of these three elements.

The greatest threats to contemporary democracies probably come from the lack of checks and balances between those three (new) powers and political power. Particularly dangerous are the tyranny of money and the tyranny of unchecked information. Scientific information—as the example put forward by Rosner et al. shows—may not be independent, which in turn hampers a frank and transparent evaluation of the risks and benefits associated with technologies.

Transparency in the transfer, explanation, and evaluation of scientific research is in fact a challenge. Science often has fuzzy boundaries and does not come in the form of discrete and simple categories ready for use by policymakers (such as carcinogenic vs. not carcinogenic). President Truman reportedly once remarked that he was searching for a one-armed expert, because he was tired of hearing experts say “on one hand, on the other hand.” Decision-makers and the public need discrete and simple solutions to problems and tend to be unhappy with the subtleties and distinctions of academics. Simple solutions often do not exist for complex problems, but the plea from policymakers is nevertheless understandable and legitimate. The IARC Monographs are an effective way of safeguarding transparency in science.

The IARC Monographs also offer the needed independent evaluation of risks. Today, the public debate is affected by much noise and sometimes by a lack of confidence in science. However, choices cannot be made exclusively on the basis of opinion polls or vested interests. The case of vaccines dramatically exemplifies how the lack of evidence-based reasoning and of scientific education among sectors of the public grossly distorts evaluations and choices. If we accept the need for credible and transparent ways of synthesizing scientific evidence, the IARC Monographs are a good, if not unique, example. They are consistent with a more general interpretation of health as a common good—that is, a complex society needs to protect the health of its citizens, allowing the best science to be used in decision-making and preventing conflicts of interest.5 Indeed, the Monographs are protected against conflicts of interest. As defined by Lo and Field in 2009, “A conflict of interest is a set of circumstances that creates a risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest.”5(p46) Box 1 shows an elementary example of conflict of interest: in a review of the carcinogenic potential of glyphosate, economic interest, scientific research, and risk assessment merge.6 In the case of public health, the primary interest is the health of the population, whereas secondary interests can be profit or career. The moral integrity of the scientist consists of recognizing and avoiding conflicts of interest. Unfortunately, these simple principles are not universally accepted, as numerous cases suggest.7

BOX 1: AN EXAMPLE OF CONFLICTS OF INTEREST IN AN ARTICLE ON GLYPHOSATE

DECLARATION OF INTEREST

Gary M. Williams, Sir Colin Berry, David Brusick, João Lauro Viana de Camargo, Helmut A. Greim, David J. Kirkland, Keith R. Solomon, and Tom Sorahan have previously served as independent consultants for the Monsanto Company on the European Glyphosate Task Force. John Acquavella and Larry D. Kier have also served as independent consultants and were previously employees of the Monsanto Company. John Acquavella was employed by Monsanto between the years 1989 and 2004 while Larry D. Kier was employed between 1979 and 2000.

This article is part of a supplement, sponsored and supported by Intertek Scientific & Regulatory Consultancy. Funding for the sponsorship of this supplement was provided to Intertek by the Monsanto Company, which is a primary producer of glyphosate and products containing this active ingredient.

Source. Williams et al.6

CONFLICTS OF INTEREST

The author has no financial conflict of interest to declare. He has participated in several Working Groups of the Monographs of the International Agency for Research on Cancer.

Footnotes

See also Morabia, p. 955; Rosner et al., p. 969; Michaels, p. 975; Samet, p. 976; Rodenberg, p. 980; and Singla et al., p. 982.

REFERENCES

1. Montesquieu C-L. The Spirit of the Laws. Vol. 1. Nugent T, trans-ed. London, England: J. Nourse; 1777:221–237.
2. Institute of Medicine. Environmental Decisions in the Face of Uncertainty. Washington, DC: National Academies Press; 2013.
3. Funtowicz SO, Ravetz JR. Science for the post-normal age. Futures. 1993;25(7):739–755.
4. Schwarze ML, Taylor LJ. Managing uncertainty—harnessing the power of scenario planning. N Engl J Med. 2017;377(3):206–208. doi: 10.1056/NEJMp1704149.
5. Lo B, Field MJ. Conflict of Interest in Medical Research, Education, and Practice. Washington, DC: National Academies Press; 2009. Cited in: Vineis P, Saracci R. Conflicts of interest matter and awareness is needed. J Epidemiol Community Health. 2015;69(10):1018–1020.
6. Williams GM, Aardema M, Acquavella J, et al. A review of the carcinogenic potential of glyphosate by four independent expert panels and comparison to the IARC assessment [published erratum appears in Crit Rev Toxicol. November 30, 2018:1–2]. Crit Rev Toxicol. 2016;46(suppl 1):3–20. doi: 10.1080/10408444.2016.1214677.
7. Terracini B, Mirabelli D. Asbestos and product defence science. Int J Epidemiol. 2016;45(3):614–618. doi: 10.1093/ije/dyw136.
