Abstract
Trust between the lay public and scientific experts is a key element in ensuring the efficient implementation of emergency public health measures.
In modern risk societies, the management and elimination of risk have become preeminent drivers of public policy. In this context, the protection of public trust is a complex task. Those actors involved in public health decision-making and implementation (e.g., mass vaccination for influenza A virus) are confronted with growing pressures and responsibility to act. However, they also need to accept the limits of their own expertise and recognize the ability of lay publics to understand and be responsible for public health.
Such a shared responsibility for risk management, if grounded in participative public debates, can arguably strengthen public trust in public health authorities and interventions.
The influenza A (H1N1) virus pandemic was not as devastating as expected, so the preventive health measures that were deployed to cope with the outbreak are now being challenged.1,2 Questions remain about the appropriateness of large-scale population vaccination programs, such as those promoted as the best response to the expected influenza epidemic in spring 2009. Large-scale vaccination involves considerable financial (and other resource) costs for governments, and the political decision to make such an investment in public health is not without repercussions. In particular, if such decision-making processes are not fully transparent and well justified—for example, if accusations of conflict of interest arise, as was the case with World Health Organization (WHO) recommendations2,3—public trust in the resulting public health program or intervention can be threatened. Clearly, an erosion of public trust in the judgments of public health authorities (whether they be local, national, or international) can have serious negative consequences on the future implementation of other emergency response programs.4
To respond to this problem of lost (or weakened) public trust, we must understand its sociocultural and historical origins. Examining past implementations of emergency programs can help us understand our strengths and weaknesses and can serve as a tool for continuously improving the management of public health in such emergency situations. Neustadt and Fineberg’s book on the 1976 swine flu “affair” is a good example of the self-criticism needed if we are to react better to such crises in the future.5 For this same purpose, and by placing a sociological, macroscopic lens over a particular recent crisis, we present a case analysis of the 2009 H1N1 flu pandemic.
Drawing on the social science literature, we argue that the management of health crises is necessarily also the management of human crises. We integrate reflections from contemporary bioethics and political philosophy, in line with views about the responsibility of decision makers in democratic states. First, we argue that the concept of trust (e.g., of the public in health experts) should be situated in the context of modernity, namely in a risk society in which the public and policymakers are increasingly concerned with safety and the maximal reduction of certain risks. Second, we highlight a close relation between risk perception (known to be subjective) and risk assessment (expected to be objective), which deserves special attention given the important role played by experts in the management of public health. Third, we suggest that public health actors (professionals, science advisers, policymakers) need to accept the limits of their own expertise (and of its objectivity) and responsibility and to recognize the ability of lay publics to understand and take responsibility for their public health. As a consequence, we argue that public health actors should engage more actively in ongoing participative and deliberative public debates, both to preserve and to strengthen public trust in public health authorities and interventions.
We do not aim to judge individuals or hold them accountable for the decisions made during the crisis; doing so is beyond the scope of this article and, to our minds, less interesting than examining the structural elements that make such behavior “the norm” for experts and decision makers and thus lead repeatedly to situations like H1N1. If we simply focus on pointing fingers at a few individuals to be held personally responsible, we miss the larger social dynamics that arguably generated the problematic inconsistencies between the messages put forward by public health experts and what was understood or accepted by the general public.
THE RISK SOCIETY AND REFLEXIVE MODERNITY
According to the German sociologist Niklas Luhmann, trust is a “mechanism for the reduction of social complexity.”6 Trust is necessary for social cohesion; it is rare that human relationships are built strictly on scientifically demonstrable facts. Trust and relationships therefore involve some kind of faith, without which we would probably live in a state of constant paranoia.7 Trust is also closely linked to the concept of risk: implicit risk-taking occurs when we transfer responsibility to someone else whom we believe can make decisions for us with care and good judgment.8 According to Ulrich Beck and Anthony Giddens, risk is central to the organization of modern society. They speak of the risk society, which for Beck is a “systematic way of dealing with hazards and insecurities induced and introduced by modernization itself”9(p21) and for Giddens expresses a “society increasingly preoccupied with the future (and also with safety), which generates the concept of risk.”10(p3) This conception of society is the foundation of what is now known as the theory of reflexive modernity, which describes a society that is extremely self-conscious and possesses the means (scientific, technical, statistical, and communicative) to challenge, criticize, and even anticipate the ecological and social consequences that it itself causes.11 In this risk society, we see a new form of solidarity and new political arguments based on universal concerns. Hence, new paradigms that use risk-based approaches, such as sustainability and the precautionary principle, are developed and become increasingly powerful in public policy and in the implementation of public health measures and interventions.
RISK AND PREVENTION IN PUBLIC HEALTH
Although popularized by environmentalists, the maxim of the least risk evidenced in the precautionary principle is increasingly used in the field of public health. The precautionary principle is reflected by precautionary policies that involve the establishment of
surveillance, detection, management of emerging threats to health…; mechanisms for managing sanitary crises and active preparation for the potentiality of these crises; specific techniques for governing the potential risks.12(p15)
Risk assessment is unquestionably central to the management and protection of public health. However, the degree of importance given to the management of a risk can vary depending on how that risk is perceived. When it comes time to decide on the acceptability of a risk, some argue, “when in doubt, leave it out,” whereas others claim, “nothing ventured, nothing gained.” In either case, risk-based approaches require consideration of the subjectivity of risk perception. According to Audétat, the rational and technical expertise of the scientist is not sufficient for good risk assessment, and “acceptability is always based on social concerns, which can vary … risk assessment is a matter of negotiation.”13(p97) Such a negotiation requires that the weight of opinion from different groups in society be well balanced and properly represented when important decisions in public health are being made.
The question of risk assessment by international health authorities thus raises an important problem: the unidirectional “construction” of risk, which moves from the expert to the citizen without attention to public perception.14 Larson and Heymann4 argue that, to preserve trust in politics and in the field of public health, it is insufficient to identify the problem or issue at stake and then design and implement a solution; it is also necessary to clarify who should understand the problem (e.g., policymakers and health professionals but also interest groups and the broader public) and to reflect on how best to communicate relevant risk information and justify proposed interventions. This will, they suggest, require that decision makers actively listen to the interests of citizens.4 Otherwise, there may be an “asymmetry” in the definition of risks that can have adverse effects on risk perception.13 For instance, the identification and announcement of a risk such as the imminence of a pandemic may be perceived (or interpreted) differently and receive different degrees of attention from health authorities and population groups, according to the knowledge and experience these stakeholders have with such crises and with the conceptual terms used.15 It becomes important, then, to question the power relationship between experts and the public in risk communication; words such as pandemic are powerful and should be defined precisely, used cautiously, and properly understood.
In spring 2009, just before the declaration of a pandemic (Phase 6) of the H1N1 influenza virus, the WHO definition of the term pandemic was amended from one that previously included incidence (frequency of occurrence, distribution) and severity (lethality) of pandemic disease to a definition that included only the incidence.16 Without this change in definition, the H1N1 pandemic might not have been declared, and the measures taken by national public health authorities probably would not have been as dramatic.
FROM CONCEPT TO POLITICS: RISK MANAGEMENT OF H1N1
A key role of public health authorities is to inform the population about the risks it faces and to ensure that citizens have understood the issues. This understanding requires the use of clear and precise technical terms by both parties, and this was clearly not the case with the 2009 H1N1 virus pandemic. The media played a significant role in the poor communication between experts and citizens by concentrating attention on cases of severe complications of H1N1. By focusing on a few atypical cases, the media distorted the reality (severity) of the disease and thus did not give a fair representation of the risk.17 And if the media presentation was distorted, we believe the attention given to H1N1 by some experts was also disproportionate.
Our growing ability to predict risks has led to the implementation of accountability systems and health policies through which public health actors have almost completely internalized the duty to prevent these risks.18 A precautionary approach has become a “duty of precaution.”12 However, as Gilbert14 notes, such power to anticipate risks is often associated with a particular group of actors and, at the same time, with “their logic and cognitive frameworks, interests, etc.” The direct association of this feeling of responsibility with the authority positions and expert roles of these actors results in the construction of risks.14
Subjectivity in the perception of risk is normal and shared by us all. One of the best examples of the influence of responsibility on risk perception can be observed in the parent–child relationship. It seems normal and natural that, in desiring to protect one’s child, a parent feels particularly responsible for the risks incurred and at the same time becomes “hypersensitive” to the risk itself. In the case of public health, however, can a similar level of paternalism and responsibility be justified between expert and layperson, between public health authority and citizen, and between governments? One could easily imagine that a country that simply ignored a pandemic alert from the WHO would be treated by other countries as irresponsible and as a threat to efforts to minimize the worldwide spread of the virus. In this respect, recommendations made by international organizations such as the WHO can implicitly become more than recommendations; because of international diplomacy and the close relationships that exist today between countries, they can exert a degree of coercive power on governments at the political level.
However, should we, as citizens, accept becoming mere “low-level actors”13 in decisions concerning public health policies because these decisions are made by experts on behalf of the public and in the interest of the common good? Obviously, this is not to question the need for the specialized knowledge and expertise of public health professionals; the need for such expertise is evident. In some cases, however, it is questionable whether the level of power and authority attributed to the expert systems (health professionals, science advisory groups, national or international health agencies) involved in decision-making is appropriate. As Tabuteau notes,
this need for expertise in health is reflected by the establishment of an ambiguous relationship between the expert and the authority.18(p34)
We must be aware that the value of expertise is gained through its social acceptance. In the end, citizens will decide whether to get the vaccine, or even whether to pay attention to the threat and take simple precautionary measures such as washing their hands often or staying home when feeling sick.
CONFLICTS OF INTEREST, CONCERN BIAS, AND POLITICAL POWER
The recent H1N1 crisis illustrated how some groups of experts can come to have great influence over public health decision makers and the subsequent development of public health policies. In some cases, scientific expertise can become so powerful that its influence seems transformed into a real “delegation of the decision to the experts.”19 Indeed, when experts pronounce on a health risk such as H1N1, it can seem irresponsible not to comply with their recommendations. Thus, decision makers, who are expected to act at the level of governments, can become impotent vectors of a particular view of risk governance; by the same token, they can fail in their mandate to fairly and democratically represent the interests of the entire population.
Moreover, many types of conflict of interest can be detected in the various roles and relationships of the expert. Besides interests that are strictly financial (e.g., stock options, research contracts), interests such as the expert’s scientific affiliations, his or her place in the scientific world (junior or senior scholar, university or public sector researcher), and his or her personal relationships can also bias his or her objectivity to a greater or lesser extent and may therefore endanger public trust.18,20,21 But a conflict of interest is not the only threat to the quality of expertise; the bias of interest, or concern bias,16 may also be a factor. Most often, unintentional concern bias arises when expert advice is given following a scientific evaluation that is too narrowly focused on the subject of study. It can occur, for example, when a recommendation is made in the absence of counterarguments or of expertise reflecting alternative points of view on the question at hand. This bias can also occur when a scientific group has a “collective blindness” to some of the factors that must be considered in solving a problem (i.e., when the expert “sins through self-centeredness”).18 The change in the definition of the term pandemic by the WHO is a striking example. By removing the lethal character of pandemic disease from the definition, the WHO effectively distorted the traditional boundaries of risk perception for the entire medical community, with the result that H1N1 came to be perceived as a much greater risk than it actually was.
Besides the strictly “top-down” influence of the experts’ biases, a “bottom-up” influence may occur and affect the quality of expertise on a second level. Experts carry heavy responsibilities when assessing risks to the health of the population and are subject to enormous pressure to be both efficient and accurate. In public health emergencies such as a pandemic threat, experts may be placed in a position in which they prefer to adopt a “worst-case” model for risk assessment instead of a best-case or medium-case model. It is understandable that experts fear betraying the public trust by underestimating a possible threat to public health, and this fear can alter their risk perception and therefore bias their risk assessment and recommendations for risk management. Thus, the very high expectations that citizens place on the few experts who are supposed to keep them healthy can paradoxically weaken the quality of expertise. Both the top-down and the bottom-up sources of bias are signs of a vertical relationship between experts and citizens. To ensure the best expertise and effective decision-making, we believe a more horizontal relationship, one that better shares the responsibilities for health management among the various stakeholders, should be favored.
The risk society is arguably a key factor in this unreasonable focus on risks that have come to be perceived as unacceptable. If public health authorities such as the WHO fall prey to this dynamic, it should not be surprising to see policymakers or governments caught up in a quest to manage or eliminate all risks. According to Beck,22 when choices are between
solutions also dangerous [but] where the risks are too qualitatively different to be easily compared … governments adopt strategies of deliberate simplification;
the result is a minimization of the uncertainty of the recommendations made by expert groups and a focusing of attention on the risk that has been predetermined to be unacceptable. Thus, in the case of the H1N1 crisis, almost all the attention of public health professionals, scientific experts, governments, and the mass media was centered on the disease outbreak; very little attention was paid to the risks that might be associated with the public health interventions themselves (e.g., financial investments, conflicts of interest, possible vaccine side effects, distributive justice issues regarding vulnerable populations). Such a vision of social problems and their effective management is overly simplistic, reductionist, and grossly paternalistic. It does not provide citizens with the truth of the situation and so impedes their lucid participation in debates about the implementation of acceptable public health policies. In fact, the citizen is simply not involved in the process of determining the acceptability of the risks that he or she ultimately must face and the burden he or she will then bear should those risks materialize.
CONCLUSIONS
When public health authorities detect a new risk and inform the population, they must ensure that the issues are well understood by the full range of stakeholders involved. This means not simply informing citizens of the risk but also taking into account the different perceptions that citizens may have of the risk they are facing. The WHO did sound the alarm about the H1N1 virus pandemic but did not ensure that the message would be properly understood. Its approach to risk governance, and in particular the change to the definition of pandemic, undermined the possibility of ethical and effective risk communication. In so doing, the WHO undermined the trust of the public (and of many public health actors) in its authority and even in its legitimacy to hold a leadership role in protecting global public health.
To improve and preserve public trust in public health actors (scientific experts, policymakers), the governance of risk, and the responsibilities associated with it, must be fairly distributed among all the actors involved. It is critical to minimize the effect of concern bias on expert-guided policymaking; as Beck23 noted, the science of experience and public discourse must be in harmony with scientific data. At the same time, respect for democracy requires protecting public involvement in the governance of public health.24 Thus, the sociocultural and collective responses that can exert an undue influence on judgments about the acceptability of risk must be recognized and mitigated; if the need for expertise is evident,
assessment should be confined to its role as decision support and not intrude by a messianic reflex on the scope of the decision maker.18
Otherwise, as Audétat noted, we are left with a situation where
In public controversies, debates about the multiple challenges of technological risks are frequently impoverished by certain types of language, reduced to “technical” problems or reserved for certain actors. The debate often boils down to a battle between those who denounce risks as unacceptable and those who repeat that “there is no zero risk.” The discourse between rationality and irrationality, and subjectivity and objectivity of risk tends to polarize and to impoverish the public debate. In some circumstances, objectivity is simply a screen behind which there occurs intense haggling between experts and stakeholders.13(p108)
Therefore, a bidirectional transparency must be established between the expert and the layperson when discussing and assessing the public acceptability of risks. This transparency implies recognition of the existing uncertainties and communication of the conceptual approaches used for decision-making. As Resnik affirmed,
It is important to have a better understanding of words and phrases used in scholarly and public debates, even when we think we know what they mean. Clarity is a virtue.20(p2)
Attention to transparency also recognizes the importance of actively listening to the interests and questions of the public. Sufficiently informing citizens about public health issues is essential to promoting their involvement in their personal (re)conceptualization of risks.4 This clearly involves considerable work with the media, but also with regional public health agencies, science foundations, and key media personalities (prominent scientists or bioethicists), to translate information that might sometimes be confusing. It involves accepting that the public is a legitimate actor in health policymaking and not one that should invariably be treated as stupid or ignorant. The public may be uninformed, but that does not mean it cannot be informed. Communication with the public is thus also the way that the public will learn, be empowered, and be able to get beyond the polemic or extremist presentations that inflame concern and undermine active public engagement in the challenges faced by all parties in a public health emergency. The issue thus comes down to trust on both sides: of experts and policymakers in the public, and of the public in experts and policymakers. A thoughtful and dynamic trust-action can then be achieved, instead of a passive trust-feeling.25 Such deliberative and participatory policies would, we hope, gradually take over from the current reactionary policies that are the source of so many distortions and threats to social cohesion.
Acknowledgments
B. Williams-Jones’ research was supported by grants from the Quebec Fonds de recherche sur la société et la culture and the Ethics Office of the Canadian Institutes of Health Research.
The authors would like to thank Elise Smith for her invaluable comments on drafts of this article.
Human Participant Protection
No protocol approval was necessary because human participants were not involved in this study.
References
- 1. Iten A, Kaiser L. A (H1N1) 2009 et incertitudes: leçons d’une pandémie. Rev Med Suisse. 2010;6:704–707.
- 2. Cohen D, Carter P. Conflicts of interest: WHO and the pandemic flu “conspiracies.” BMJ. 2010;340:c2912.
- 3. UN News Centre. Top UN health official refutes conflict of interest claims in handling of H1N1 pandemic. June 8, 2010. Available at: http://www.un.org/apps/news/story.asp?NewsID=34954&Cr=world+health+organization&Cr1=. Accessed January 20, 2012.
- 4. Larson HJ, Heymann DL. Public health response to influenza A (H1N1) as an opportunity to build public trust. JAMA. 2010;303:271–272.
- 5. Neustadt RE, Fineberg HV. The Swine Flu Affair: Decision-Making on a Slippery Disease. Washington, DC: National Academies Press; 1978.
- 6. Mydske PK, Peters I. The Transformation of the European Nation State. Berlin, Germany: BWV Verlag; 2006.
- 7. Möllering G. The nature of trust: from Georg Simmel to a theory of expectation, interpretation and suspension. Sociology. 2001;35(2):403–420.
- 8. Baier A. Trust and antitrust. Ethics. 1986;96(2):231–260.
- 9. Beck U. Risk Society: Towards a New Modernity. London, England: Sage Publications; 1992.
- 10. Giddens A. Risk and responsibility. Mod Law Rev. 1999;62(1):1–10.
- 11. Beck U, Giddens A, Lash S. Reflexive Modernization: Politics, Tradition and Aesthetics in the Modern Social Order. Stanford, CA: Stanford University Press; 1994.
- 12. Lecourt D. La Santé Face au Principe de Précaution. Paris, France: Presses Universitaires de France; 2009.
- 13. Audétat M. La négociation des risques: expertise, acceptabilité et “démocratie technique.” In: Burton-Jeangros C, Grosse C, November V, Face au risque. Chêne-Bourg, Suisse: Georg Éditeur, L’Équinoxe; 2007:91–101.
- 14. Entretien avec Claude Gilbert. In: Burton-Jeangros C, Grosse C, November V, Face au risque. Chêne-Bourg, Suisse: Georg Éditeur; 2007:15–28.
- 15. Kasperson JX, Kasperson RE, Pidgeon N, Slovic P. The social amplification of risk: assessing fifteen years of research and theory. In: Pidgeon N, Kasperson RE, Slovic P, The Social Amplification of Risk. Cambridge, UK: Cambridge University Press; 2003:13–46.
- 16. Doshi P. Calibrated response to emerging infections. BMJ. 2009;339:b3471.
- 17. CBC News – Health. H1N1 overplayed by media, public health: MDs. November 6, 2009. Available at: http://www.cbc.ca/news/health/story/2009/11/06/h1n1-media.html. Accessed January 20, 2012.
- 18. Tabuteau D. L’expert et la décision en santé publique. Les Tribunes de la santé. 2010;2(27):33–48.
- 19. Borraz O. Les politiques du risque. Paris, France: Presses de Sciences Politiques; 2008.
- 20. Resnik DB. Scientific research and the public trust. Sci Eng Ethics. 2011;17:399–409.
- 21. Williams-Jones B. Beyond a pejorative understanding of conflict of interest. Am J Bioeth. 2011;11:1–2.
- 22. Beck U. Le danger nucléaire escamoté, par Ulrich Beck [translated into French by Gilles Berton]. Le Monde. August 6, 2008. Available at: http://www.lemonde.fr/idees/article/2008/08/06/le-danger-nucleaire-escamote-par-ulrich-beck_1080764_3232.html. Accessed January 20, 2012.
- 23. Beck U. La politique dans la société du risque. La Découverte – Revue du MAUSS. 2001;1(17):376–392.
- 24. Flynn P. The handling of the H1N1 pandemic: more transparency needed. Parliamentary Assembly, Council of Europe, Geneva, June 7, 2010. Available at: http://assembly.coe.int/Main.asp?link=/Documents/WorkingDocs/Doc10/EDOC12283.htm. Accessed January 20, 2012.
- 25. Usunier JC. Confiance et performance: un essai de management comparé France/Allemagne. Paris, France: Vuibert-FNEGE; 2000.