Abstract
Effective science communication requires assembling scientists with knowledge relevant to decision makers, translating that knowledge into useful terms, establishing trusted two-way communication channels, evaluating the process, and refining it as needed. Communicating Science Effectively: A Research Agenda [National Research Council (2017)] surveys the scientific foundations for accomplishing these tasks, the research agenda for improving them, and the essential collaborative relations with decision makers and communication professionals. Recognizing the complexity of the science, the decisions, and the communication processes, the report calls for a systems approach. This perspective offers an approach to creating such systems by adapting scientific methods to the practical constraints of science communication. It considers staffing (are the right people involved?), internal collaboration (are they talking to one another?), and external collaboration (are they talking to other stakeholders?). It focuses on contexts where the goal of science communication is helping people to make autonomous choices rather than promoting specific behaviors (e.g., voter turnout, vaccination rates, energy consumption). The approach is illustrated with research in two domains: decisions about preventing sexual assault and responding to pandemic disease.
Keywords: science communication, evaluation, decision making, pandemics, sexual assault
Communicating science effectively can require an unnatural act: collaboration among experts from professional communities with different norms and practices. Those experts include scientists who know the subject matter and scientists who know how people communicate. They include practitioners who know how to create trusted two-way communication channels and practitioners who know how to send and receive content through them. They also include professionals who straddle these worlds, such as public health officials managing pandemics and climate scientists defending their work. Communicating Science Effectively: A Research Agenda (1) calls for a systems approach to recruiting and coordinating individuals with these skills and connecting them with those whom they might serve (ref. 1, pp. 8 and 84–86).
This perspective offers an approach to answering that call. It is grounded in the research presented in the report and at the National Academies’ Colloquia on the Science of Science Communication (2–4). It is designed to accommodate the limited resources of many organizations engaged in science communication. To that end, it proposes simplified versions of scientific methods that should degrade gracefully in practical applications.
The approach’s conceptual framework is grounded in Herbert Simon’s two general strategies (5, 6) for addressing complex problems. One is “bounded rationality,” looking for the best possible solution to a manageable subset of a problem, while deliberately ignoring some aspects. The second is “satisficing,” looking for an adequate solution while considering all aspects. Both strategies rely on heuristics to identify potentially useful solutions. With bounded rationality, heuristics indicate which aspects to ignore and how to optimize within those bounds. With satisficing, heuristics indicate how to generate and evaluate integrative solutions. To a first approximation, scientists pursue bounded rationality, whereas practitioners satisfice. However, this characterization, like Simon’s distinction (5, 6), is a heuristic one. It captures some general features while breaking down in ways that reveal the communities’ need for one another.
Scientists’ bounded rationality entails ignoring issues that they cannot treat systematically, hoping to reach strong conclusions within their discipline’s self-imposed constraints. Scientists from different disciplines struggle to collaborate, because they bound problems differently. Experimental researchers may be uncomfortable with unruly field observations. Field researchers may question the artificial conditions of experiments. Both may puzzle over computational models, while modelers may have little patience for the simplification of experiments or the qualitative evidence of field research. Scientists who study individuals may not know what to do with the context provided by those who study groups or cultures, who may shake their heads at being ignored. Each discipline owes its success to its tacit knowledge of how to work within its bounds. Those bounds can be so incommensurable that scientists from different disciplines struggle even to agree about how to disagree (7, 8). Nonetheless, as argued by Communicating Science Effectively: A Research Agenda (1), the success of science communication depends on collaboration across disciplines.
Practitioners’ satisficing entails paying attention to anything that might be relevant and accepting imperfect solutions. Practitioners of different persuasions struggle to collaborate, because they have different skills and norms. Those skills might include designing visual materials, crafting text, attracting media attention, convening stakeholders, and branding programs. Those norms might include how relevant they find social science evidence, whether they subscribe to a design philosophy, and what their professional code of ethics is. Their organizing design constructs may be so different that they effectively speak different languages. Nonetheless, as argued in Communicating Science Effectively: A Research Agenda (1), serving diverse audiences and decisions requires practitioners with diverse expertise.
When these two worlds fail to connect, each is the worse for it. Scientists can overestimate how far their results generalize and offer practitioners unsupported advice or summaries. Practitioners can absorb a fragment of science and exaggerate its value. Scientists can unwittingly or naively let their values color their research or expositions. Practitioners can selectively pursue or accept convenient truths. Conversely, the two worlds support one another when they do connect, with practitioners helping scientists to identify the results that matter to their audiences and scientists helping practitioners to structure those interactions (9–11).
Ashby (12) introduced the term “requisite variety” to describe solutions that are as complex as the problems that they address. It is a heuristic construct for systems without easily characterized dimensions. However, it evokes the challenge facing a systems approach to science communication. The effort may fail unless the requisite sciences and practices are recruited and coordinated. As a result, the present approach focuses on identifying and connecting those needed elements. It recognizes that many organizations may lack not only many of the requisite skills but also “absorptive capacity,” the expertise needed to recruit potentially useful help (13). Hence, it concludes with a discussion of boundary organizations that can facilitate these matches (14–16).
Thus, this proposal offers practical ways for organizations with limited material resources and expertise to use the sciences of science communication. It assumes that a systems approach needs both bounded rationality and satisficing. For the former, the approach asks whether a system has the right set of bounded parts (disciplinary sciences). For the latter, it asks whether those parts are connected so as to produce satisficing solutions. It could be used to design systems or to audit them. Its methods are, in part, simplified versions of those used in scientific research. It illustrates the approach with two case studies, showing variants of these decisions and consultations.
Communication Goals
Communication programs that seek to change observable behavior (e.g., voter turnout, energy consumption, immunization rates) might be judged by their outcomes. However, as Communicating Science Effectively: A Research Agenda (1) notes, sometimes outcomes are hard to observe (e.g., changes in values, aesthetics, feelings of self-efficacy). Moreover, sometimes the goal is not to effect specific changes but to empower people to make better informed choices. In such cases, successful communications might lead to different behaviors for people with different values or circumstances.
In principle, a science communication’s success might be evaluated by whether it produced choices closer to the ones that fully informed decision makers would make. Satisfying that condition is the goal (and ethical commitment) of libertarian paternalist interventions, which try to manipulate individuals’ “choice architecture” toward such choices (17). Thus, employees are nudged to place their retirement savings in stocks only when it has been determined that the expected financial returns outweigh the psychological costs of experiencing market corrections and the economic risks of being in one when funds are needed. Organ donation is made the default only for individuals whose survivors will accept that choice without having had a family consultation (18). Social norms are invoked for health behaviors (e.g., exercise, vaccinations, diet) only when people have the resources to follow them and a safety net should things go wrong.
Analysis and Its Limits
In practice, however, such personalized decision analyses are rare outside of medicine, where they have been conducted for many conditions and treatments (19). So that these analyses can capture the full range of patient concerns, the field has invested heavily in validated quality-of-life measures, patient cost estimates, and utility assessments (20, 21). That research has guided the design of interactive aids, which allow patients to select information and examine its implications for their personal decisions (22). It has also guided policy analyses, evaluating treatment protocols (23, 24).
Such formal decision analyses can reveal issues that more casual analyses might miss. For example, they may find that a generally valued outcome (e.g., money, pain) does not matter in a specific decision, because it is similar for all options (25). Formal analyses can show how seemingly technical definitions (e.g., “risk”) affect choices (26). They can frame ethical issues, such as whose preferences should shape policies—those who have a condition (e.g., paralysis) or the general public, which supports the health care system (27). They can ask when people lack the cognitive competence or emotional strength for making decisions (28). However, formal analyses require expressing all outcomes in numeric terms (e.g., costs, risks) in order to compute expected outcomes. As a result, they privilege outcomes that are readily quantified. They also require skills that few organizations have or can afford.
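To make those mechanics concrete, the following minimal sketch shows the core computation of a formal decision analysis: each option is expressed as numeric outcomes weighted by their probabilities, and the expected values are compared. The options, probabilities, and utilities are invented purely for illustration; nothing here comes from the report.

```python
# Minimal sketch of the core computation in a formal decision analysis.
# All options, probabilities, and utilities below are invented for illustration.
options = {
    "option_a": [(0.7, 50.0), (0.3, -20.0)],  # list of (probability, utility) pairs
    "option_b": [(0.9, 10.0), (0.1, 0.0)],
}

def expected_utility(lottery):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in lottery)

for name, lottery in options.items():
    print(f"{name}: expected utility = {expected_utility(lottery):+.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("highest expected utility:", best)
```

The demanding step is not this arithmetic but producing defensible numbers for every probability and utility, which motivates the alternative described next.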
A more feasible aspiration is to adopt the logic of analysis but not its mechanics. That is, define the terms of a decision precisely enough that a technically adept analyst could “run the numbers” were the data available, but not require that to happen. Creating an analytical model that is “computable,” in this sense, demands clear thinking but no calculations. As a result, anyone should be able to create and understand one (29). Such qualitative formal models can serve parties with different needs. Scientists can see where their boundedly rational evidence fits into the big picture. Practitioners can look for satisficing solutions, addressing the overall problem. Decision makers can readily check that their concerns have been addressed. Having clearly defined variables and relationships helps ensure that the parties are talking about the same things.
One example of such a model is the benefit–risk framework of the Food and Drug Administration (FDA) (30, 31), created to improve communication among the parties involved in evaluating pharmaceuticals and biologics. Those parties include FDA technical reviewers and regulators, industry researchers and managers, patients, and advocates. The framework has rows for the five inputs to FDA’s regulatory decisions (medical condition, existing treatments, expected benefits, expected risks, and how risks will be managed if the product is approved). The framework has columns for FDA’s two bases for evaluating those inputs (science and law) and a narrative summary. Cell entries can be words or numbers and are meant to capture disagreements and uncertainties, reflecting FDA’s philosophy of having analysis inform rather than replace judgment (32).
Similarly spirited qualitative formal models are central to implementing the present proposal.
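In that spirit, a “computable” model can be written down explicitly without any numbers. The sketch below is a hypothetical illustration (the variable names are invented, and this is not the FDA framework itself): it defines options, valued outcomes, and qualitative relations precisely enough that an analyst could later quantify them, while already supporting the simple check that every stakeholder concern has been addressed.

```python
# Hypothetical "computable but not computed" decision model: variables and
# relations are defined precisely, yet no numbers are required.
from dataclasses import dataclass, field

@dataclass
class QualitativeModel:
    options: set = field(default_factory=set)   # actions under the decision maker's control
    outcomes: set = field(default_factory=set)  # valued outcomes
    links: dict = field(default_factory=dict)   # (source, target) -> qualitative relation

    def add_link(self, source, target, relation):
        """The relation is a word ('increases', 'decreases', 'unknown'), not a number."""
        self.links[(source, target)] = relation

    def unaddressed_outcomes(self):
        """Valued outcomes that no link reaches: concerns the model has not yet addressed."""
        linked = {target for (_, target) in self.links}
        return self.outcomes - linked

model = QualitativeModel(
    options={"treat", "wait"},
    outcomes={"symptom relief", "side effects", "cost"},
)
model.add_link("treat", "symptom relief", "increases")
model.add_link("treat", "side effects", "increases")
print(model.unaddressed_outcomes())  # {'cost'}: a concern the model still omits
```

Because every variable and relation is named, the parties can check that they are talking about the same things, and an analyst could “run the numbers” later by replacing the words with estimates.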
A Theory of Change for Science Communication
Social programs often reflect a theory of change (33, 34), identifying the elements deemed necessary for success. Although each element might be the subject of boundedly rational theories, a theory of change is not a scientific theory. Rather, it is a satisficing proposal, expressing an integrative vision of how a complex process works. The present proposal is a theory of change for science communication. It asks three questions.
Staffing: Are the right people involved?
Internal consultation: Are they talking effectively with one another?
External consultation: Are they talking effectively with other stakeholders?
The next section describes one complex decision of the sort that effective science communication could inform. Subsequent sections offer ways to answer these three questions. Although the questions are addressed in the order above, the process itself could start anywhere and is inherently iterative. External stakeholders could ask for help, triggering internal consultation that reveals missing skills. Staff could perceive a need, initiate a campaign, and then find themselves welcomed, rejected, or redirected (1, 9, 35).
One Complex Decision
In a project that was pivotal to my own thinking, Lita Furby, Marcia Morgan, and I sought to aid decisions related to sexual assault by better communicating relevant scientific evidence (36). The prompt for our work was observing the confident, universal, and contradictory advice offered to women regarding whether to resist physically when attacked. After reviewing the (limited) evidence on the efficacy of self-defense measures, we concluded that, if they knew the research, women might differ in the strategies that they chose. As a result, we needed to understand those differences. As befits any communication project, we began by listening. Here, that entailed semistructured interviews with diverse groups of women, men, and experts, eliciting their perspectives on both personal decisions (e.g., how to respond to an assault) and societal ones (e.g., how to make assaults less likely).
These interviews revealed a rich decision space, with many possible options, outcomes, and uncertainties (37, 38). To structure that space, we created categories of options and outcomes, seeking a level of granularity that would be useful to decision makers. Within that structure, we summarized available research on the effects of the options on the outcomes. Where the evidence was limited, as was usually the case, those limits were part of the story. When uncertainty is great, advice is unproven. Unless those limits are acknowledged, if things go badly, then decision makers may bear the insult of blame and regret in addition to the injury that occurred. Whatever they did, some “expert” had advised otherwise.
Given that uncertainty and the diversity of decision makers’ circumstances, our project had no theory of change for encouraging specific behaviors among those concerned with sexual assault. However, we did have a theory of change for ourselves, structuring our efforts to inform those decisions. That engagement and subsequent ones have led to the theory of change that guides this proposal for implementing the recommendation of a systems approach in Communicating Science Effectively: A Research Agenda (1).
Staffing: Are the Right People Involved?
Effective science communications must contain the information that recipients need in places that they can access and in a form that they can use. Achieving those goals requires four kinds of expertise (39).
Subject Matter Experts.
The core of any science communication is authoritative summaries of evidence relevant to decision makers’ needs. That evidence may come from many sciences. For example, sexual assault decisions might be informed by results from psychology, sociology, criminology, and economics. Unless staff have expertise in an issue or the capacity to absorb it (13), they will have to ignore or guess at it (26, 40).
Decision Scientists.
Eager to share their knowledge, subject matter experts may drown decision makers in facts that it would be nice to know. Decision scientists can identify the facts that decision makers need to know. They can also characterize evidence quality, estimate decision sensitivity, and reveal hidden assumptions (16, 17, 30, 31). For example, a sensitivity analysis of the decision faced by a young academic might conclude that “there is no sure way to prevent a powerful figure in your field from destroying your career.” A decision analysis of sexual assault advice might conclude that “it ignores restrictions on your freedom.”
Social, Behavioral, and Communication Scientists.
Knowing what to say does not guarantee knowing how to say it. Coupled with the normal human tendency to overestimate mutual understanding (41), scientists’ intuitions can be a poor guide to effective communication. Indeed, scientists’ success in the classroom may produce unrealistic expectations for being able to communicate with general audiences, with no examinations providing feedback on their success. Communicating Science Effectively: A Research Agenda (1) identifies the diverse expertise available for understanding audiences, crafting communications, and evaluating success (2–4, 35, 42, 43).
Program Designers and Managers.
Finally, science communication needs practitioners to create channels, recruit stakeholders, disseminate messages, mind legal constraints, anticipate cultural sensitivities, and collect feedback. Practitioners are also needed to manage the process, secure the relevant experts, and get them talking with one another and external stakeholders. Without a firm hand, normal group dynamics can lead to recruiting, rewarding, and retaining people with similar backgrounds and blind spots, who are overly comfortable talking to one another. A firm hand is also needed to let everyone offer opinions, while leaving ultimate authority to those most expert in a topic. That will keep subject matter experts from editing for style rather than accuracy, social scientists from garbling the facts when trying to clarify them, and practitioners from spinning messages when the facts are needed.
Internal Consultation: Are They Talking Effectively with One Another?
Experts must combine their knowledge to realize its value. That means jointly examining issues, connections between issues, and the assumptions underlying those interpretations. Fig. 1 illustrates a decision science tool for structuring such consultations (44). A computable (i.e., nonnumeric) version of an influence diagram (45), it depicts actions as rectangles and uncertain variables as ovals (gray for valued outcomes; white for intermediate variables). It was created to structure discussions at a meeting about the then-pending threat of H5N1 (avian flu). It has places for the science that could inform decisions faced by health officials, employers, parents, suppliers, and others, each wondering if and how to prepare for a possible pandemic. What should they expect regarding quarantine, home schooling, rationing, hospital closures, telecommuting, drug shortages, and social solidarity (or fracture)?
Fig. 1.
Risk model for pharmacological interventions for a pandemic. Ovals indicate uncertain variables, which need to be predicted. Rectangles indicate actions, which need to be planned and implemented. Reprinted by permission from ref. 29, Springer Nature: Journal of Risk and Uncertainty, copyright 2006.
Translating science into such a decision-relevant form requires consultation on three levels. One is summarizing the science at each node (e.g., what quantities of antivirals will be available) and link (e.g., how effectively will vaccines reduce morbidity). The second is estimating interactions (e.g., how will morbidity and mortality combine to affect social costs). The third is identifying contextual factors (sometimes called “index variables”) that affect many model elements (e.g., is the society developed or developing) (40).
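As one illustration of this structure, an influence diagram can be held as a typed directed graph with a placeholder for each of those consultation levels. The node names below are invented stand-ins for those in Fig. 1, not quotations from it.

```python
# Sketch of an influence diagram as a typed directed graph.
# Node names are invented stand-ins for the elements of Fig. 1.
nodes = {  # node -> type ("action" nodes are rectangles; the others are ovals)
    "stockpile antivirals": "action",
    "antiviral supply": "uncertainty",
    "morbidity": "outcome",
}
links = [
    ("stockpile antivirals", "antiviral supply"),
    ("antiviral supply", "morbidity"),
]

# Level 1: a science summary for each node and link, to be supplied by experts.
node_summaries = {node: None for node in nodes}
link_summaries = {link: None for link in links}
# Level 2: interactions among variables (e.g., how morbidity and mortality combine).
interactions = {}
# Level 3: contextual "index variables" that condition many elements at once.
index_variables = ["developed vs. developing society"]

print("nodes awaiting summaries:", [n for n, s in node_summaries.items() if s is None])
```

Even unquantified, such a structure makes explicit which summaries, interactions, and contextual factors the consultation still owes.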
Quantifying such models demands technical training and material resources. However, sketching a model well enough to facilitate consultations only requires clear thinking and candid conversation. To that end, before the H5N1 meeting, participants completed a survey eliciting their beliefs about the issues in Fig. 1 and several related models (46). The models were instantiated with scenarios to make their abstractions concrete. The meeting and survey were anonymous so that participants could work the problem without pressure to represent the firms or agencies in which many held senior positions. They were drawn from public health, technology, and mass media; hence, they could offer their views on the needs and responses of publics that they might support in a pandemic, but typically know in more benign circumstances.
These scientists and practitioners were brought together because interpreting such evidence requires actual conversation. It is not enough for members of one field to read the publications of another. Publications reflect their authors’ perspectives and not those of their entire field. They omit assumptions that go without saying when scientists or practitioners write for colleagues or clients. For scientists, those assumptions include bounds on their discipline’s rationality. For practitioners, they include accepted limits to satisficing solutions.
One practical method for describing these internal consultations is social network analysis, created by asking members of a communication team to describe their relationships (47, 48). A successful team will have the requisite connections among those associated with each link and node in the relevant models. Fig. 2 shows such relationships as revealed in self-reports of “close and collegial relations” in a study of six interdisciplinary research centers (49). For this center, the study concluded that “most…interactions are concentrated in a small core of researchers…Disciplines from the physical sciences dominate the core…, environmental scientists/social scientists dominate the periphery” (ref. 49, p. 57). A theory of change for a communication team would specify which members must talk with one another, with a diagram measuring its success. Of course, even parties who view their relationships as close and collegial may not identify and correct all misunderstandings.
Fig. 2.
Network diagram of self-reported close and collegial relationships among members of an interdisciplinary research program supported by the National Academy of Sciences. Reprinted with permission from ref. 49.
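Here is a minimal sketch of how such an audit might proceed, using the networkx library; the team roles and required ties are invented for illustration, not drawn from the study in Fig. 2.

```python
# Minimal sketch of auditing internal consultation as a social network.
# Roles and ties are invented; networkx is assumed to be installed.
import networkx as nx

reported_ties = [
    ("modeler", "epidemiologist"),
    ("epidemiologist", "communicator"),
    ("communicator", "designer"),
]
G = nx.Graph(reported_ties)  # undirected: "close and collegial" is taken as mutual

# The team's theory of change: who must be talking with whom.
required_ties = [("modeler", "communicator"), ("communicator", "designer")]

missing = [pair for pair in required_ties if not G.has_edge(*pair)]
print("required ties not reported:", missing)         # [('modeler', 'communicator')]
print("degree centrality:", nx.degree_centrality(G))  # crude core/periphery indicator
```

Comparing the required and reported ties turns the theory of change into a measurable standard, with missing ties flagging consultations that have yet to happen.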
External Consultation: Are They Talking Effectively with Other Stakeholders?
Scientific communicators seek to be trusted partners of people making decisions where science matters. In this issue of PNAS, those include decisions about gene drives (50), autonomous vehicles (51), employment (52), and energy (53). Earning that trust means providing the science most relevant to decision makers’ valued outcomes in comprehensible form and accessible places. Existing research is the natural source of initial guidance for accomplishing those tasks (1–4, 41–43, 54).
However good the research, communicators must still consult with their external stakeholders. It is unrealistic to expect them to know how people very different from themselves view their world or the communicators (8, 41, 55). Even if those consultations only affirm what the research says, they are important as “speech acts.” They show respect for the stakeholders as individuals worthy of knowing and hearing. They need to occur throughout the process to maintain the human contact and so that communicators know what is on stakeholders’ minds and stakeholders know what communicators are doing (1, 9, 35, 39). However, making that happen can be challenging, especially with diverse, dispersed, and disinterested publics.
Our sexual assault project adopted one imperfect approach (36–38). It asked nonrepresentative samples of individuals recruited from diverse groups to complete open-ended surveys, allowing them to choose the issues and describe them in their own terms. These surveys were followed by confirmatory structured ones with similarly sampled individuals. Although we engaged diverse individuals who revealed a wide range of views, the consultation was indirect. Our avian flu project involved two days of intense direct consultation, building on a preparatory survey. However, it was with a highly select group experienced in sampling public opinion but not authorized to represent it.
The medical world, with its traditions of informed consent and shared decision making (22–24, 28), offers examples that might be adapted to other settings. For example, to secure patient input to its benefit–risk framework (30, 31), FDA created the Voice of the Patient Initiative (56), with daylong exchanges on critical issues (e.g., chronic fatigue syndrome, sickle cell disease)—a model that some patient groups have adopted. The desire to include patient experiences in clinical trials led to including self-reported quality-of-life measures as outcomes. However, the result was a proliferation of measures with varying content and quality that undermined the research effort. In response, NIH created an inventory of psychometrically validated measures, freely available online, with adaptive testing for efficient administration (20, 21). Recognizing the importance of evaluating communications, Communicating Risks and Benefits: An Evidence-Based User’s Guide, published by FDA (43), ends each chapter with guidance on evaluation for no resources, modest resources, and resources commensurate with the personal, economic, and political stakes riding on good communication. Its simplest method is the think-aloud protocol, asking people to say whatever comes into their minds as they read draft materials (57, 58).
Any communication that goes to a broader audience constitutes an indirect consultation, as recipients assess their relationships with its source, based on what they infer about its competence and trustworthiness. Fig. 3 shows section headings from an attempt to create a relationship that neither abandons recipients nor provides unsupportable advice. It was written when editors of a journal (American Psychologist) refused to publish a review that was critical of much customary advice without providing an alternative. It was my hope that it would be seen as respecting and empowering recipients.
Fig. 3.
Advice for reducing the risk of sexual assault (36).
Conclusion
Science communications succeed when recipients make better decisions. Applying that standard means evaluating the optimality of choices made with and without the communications. With complex decisions and diverse decision makers, such evaluation is typically infeasible. The alternative is asking how well the communication process has followed a theory of change. The present proposal offers a theory of change embracing a systems approach as advocated by Communicating Science Effectively: A Research Agenda (1). It entails staffing with the right people, internal consultation among them, and external consultation with those whom they seek to serve. It embraces both the bounded rationality of disciplinary scientists and the satisficing of practitioners.
Its proposed procedures rely on simplified versions of scientific methods adapted for use by organizations with limited resources. They include think-aloud protocols, network analyses, and qualitative formal analyses, precise enough to allow quantitative analysis were data requirements met, but not requiring it. The proposal assumes that anyone can create, critique, and discuss a decision space with options and valued outcomes; an influence diagram with the factors determining those outcomes (Fig. 1); and a social network depicting work relations (Fig. 2). Adopting science-like methods should also facilitate commissioning analyses from professionals when circumstances warrant and resources allow (59).
These methods all assume a world where, in the words of Communicating Science Effectively: A Research Agenda (1), “researchers and practitioners…form partnerships” and “researchers in diverse disciplines…work together” (ref. 1, p. 9). There are precedents for creating boundary organizations hospitable to such partnerships (14–16). The American Soldier project during World War II brought together social scientists and practitioners (60). The Medical Research Council Applied Psychology Unit did the same in the United Kingdom (61), as have the Department of Homeland Security Centers of Excellence (https://www.dhs.gov/science-and-technology/centers-excellence). The latest US National Climate Assessment was developed in consultation with stakeholders (e.g., in agriculture and transportation) (https://nca2014.globalchange.gov). The National Science Foundation has supported multidisciplinary centers in many domains, with strong public outreach requirements.
A feature common to these ventures is a crisis that united scientists and practitioners against a common adversary, sometimes identifiable (e.g., the Axis Powers) and sometimes diffuse (e.g., threats to the environment). The urgent tone of Communicating Science Effectively: A Research Agenda (1) implies a crisis in communicating science that is deep enough to impel partnerships. That crisis threatens not only the usefulness of scientific results and society’s return on investment in them, but also faith in the scientific enterprise and its place in public discourse (11, 55, 62). Scientists who share that sense of urgency may change the reward structure in their disciplines, treating science communication as a professional responsibility and valuing colleagues who cultivate the commons of public goodwill on which science and society depend. Those scientists will promote the most relevant science, even if it is not their own. They will allow evidence from the sciences of communication to inform, and perhaps even refute, their intuitions regarding what to say and how to say it, thereby embracing the vision of Communicating Science Effectively: A Research Agenda (1).
Footnotes
The author declares no conflict of interest.
This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “The Science of Science Communication III” held November 16–17, 2017, at the National Academy of Sciences in Washington, DC. The complete program and audio files of most presentations are available on the NAS Web site at www.nasonline.org/Science_Communication_III.
This article is a PNAS Direct Submission. D.A.S. is a guest editor invited by the Editorial Board.
References
- 1. National Research Council. Communicating Science Effectively: A Research Agenda. National Academy Press; Washington, DC: 2017.
- 2. Fischhoff B, Scheufele D. The science of science communication. Proc Natl Acad Sci USA. 2013;110(Suppl 3):14033–14039. doi: 10.1073/pnas.1213273110.
- 3. Fischhoff B, Scheufele D. The science of science communication II. Proc Natl Acad Sci USA. 2014;111(Suppl 4):13583–13584. doi: 10.1073/pnas.1414635111.
- 4. Fischhoff B, Scheufele D. The science of science communication III. Proc Natl Acad Sci USA. 2018, in press.
- 5. Simon HA. Administrative Behavior. Macmillan; New York: 1947.
- 6. Simon HA. Rational choice and the structure of the environment. Psychol Rev. 1956;63:129–138. doi: 10.1037/h0042769.
- 7. Kahneman D, Klein G. Conditions for intuitive expertise: A failure to disagree. Am Psychol. 2009;64:515–526. doi: 10.1037/a0016755.
- 8. Medin D, Ojalehto B, Marin A, Bang M. Systems of (non-)diversity. Nat Hum Behav. 2017;1:0088.
- 9. Dietz T. Bringing values and deliberation to science communication. Proc Natl Acad Sci USA. 2013;110:14081–14087. doi: 10.1073/pnas.1212740110.
- 10. Scheufele DA. Communicating science in social settings. Proc Natl Acad Sci USA. 2013;110:14040–14047. doi: 10.1073/pnas.1213275110.
- 11. Scheufele DA. Science communication as political communication. Proc Natl Acad Sci USA. 2014;111:13585–13592. doi: 10.1073/pnas.1317516111.
- 12. Ashby WR. An Introduction to Cybernetics. Chapman & Hall; London: 1956.
- 13. Cohen WM, Levinthal DA. Absorptive capacity: A new perspective on learning and innovation. Adm Sci Q. 1990;35:128–152.
- 14. Bidwell D, Dietz T, Scavia D. Fostering knowledge networks for climate adaptation. Nat Clim Change. 2013;3:610–611.
- 15. Guston DH. Boundary organizations in environmental policy and science: An introduction. Sci Technol Human Values. 2001;26:399–408.
- 16. Parker JN, Crona BI. On being all things to all people: Boundary organizations and the contemporary research university. Soc Stud Sci. 2012;42:262–289.
- 17. Thaler R, Sunstein C. Nudge: Improving Decisions About Health, Wealth and Happiness. Yale Univ Press; New Haven, CT: 2009.
- 18. McCartney M. Margaret McCartney: When organ donation isn’t a donation. BMJ. 2017;356:j1028. doi: 10.1136/bmj.j1028.
- 19. Schwartz A, Bergus G. Medical Decision Making. Cambridge Univ Press; New York: 2008.
- 20. Cella D, et al.; PROMIS Cooperative Group. The patient-reported outcomes measurement information system (PROMIS): Progress of an NIH Roadmap cooperative group during its first two years. Med Care. 2007;45(Suppl 1):S3–S11. doi: 10.1097/01.mlr.0000258615.42478.55.
- 21. HealthMeasures. 2018. Comprehensive measurement systems. Available at www.healthmeasures.net/index.php/. Accessed October 24, 2018.
- 22. The Ottawa Hospital Research Institute. 2017. Patient decision aids. Available at https://decisionaid.ohri.ca/. Accessed October 24, 2018.
- 23. Basu A, Meltzer D. Value of information on preference heterogeneity and individualized care. Med Decis Making. 2007;27:112–127. doi: 10.1177/0272989X06297393.
- 24. Dewitt B, Davis A, Fischhoff B, Hanmer J. An approach to reconciling competing ethical principles in aggregating heterogeneous health preferences. Med Decis Making. 2017;37:647–656. doi: 10.1177/0272989X17696999.
- 25. von Winterfeldt D. Bridging the gap between science and decision making. Proc Natl Acad Sci USA. 2013;110:14055–14061. doi: 10.1073/pnas.1213532110.
- 26. Fischhoff B. The realities of risk-cost-benefit analysis. Science. 2015;350:aaa6516. doi: 10.1126/science.aaa6516.
- 27. Versteegh MM, Brouwer WBF. Patient and general public preferences for health states: A call to reconsider current guidelines. Soc Sci Med. 2016;165:66–74. doi: 10.1016/j.socscimed.2016.07.043.
- 28. Barnato AE. Challenges in understanding and respecting patients’ preferences. Health Aff (Millwood). 2017;36:1252–1257. doi: 10.1377/hlthaff.2017.0177.
- 29. Fischhoff B, Bruine de Bruin W, Guvenc U, Caruso D, Brilliant L. Analyzing disaster risks and plans: An avian flu example. J Risk Uncertain. 2006;33:131–149.
- 30. Food and Drug Administration. Structured Approach to Benefit-Risk Assessment for Drug Regulatory Decision Making. Draft PDUFA V Implementation Plan. FY2013–2017. Food and Drug Administration; Washington, DC: 2013.
- 31. Food and Drug Administration. Benefit-Risk Assessment in Drug Regulatory Decision Making. Draft PDUFA VI Implementation Plan. FY2018–2022. Food and Drug Administration; Washington, DC: 2018.
- 32. Fischhoff B. Breaking ground for psychological science: The U.S. Food and Drug Administration. Am Psychol. 2017;72:118–125. doi: 10.1037/a0040438.
- 33. Community Tool Box. 2018. Learn a skill: Table of contents. Available at https://ctb.ku.edu/en/table-of-contents. Accessed October 24, 2018.
- 34. Taplin DH, Clark H. 2012. Theory of change basics (ActKnowledge, New York). Available at www.theoryofchange.org/wp-content/uploads/toco_library/pdf/ToCBasics.pdf. Accessed June 8, 2018.
- 35. Dietz T, Stern P, editors. Public Participation in Environmental Assessment and Decision Making. National Academy Press; Washington, DC: 2008.
- 36. Fischhoff B. Giving advice: Decision theory perspectives on sexual assault. Am Psychol. 1992;47:577–588. doi: 10.1037//0003-066x.47.4.577.
- 37. Fischhoff B, Furby L, Morgan M. Rape prevention: A typology and list of strategies. J Interpers Violence. 1987;2:292–308.
- 38. Furby L, Fischhoff B, Morgan M. Rape prevention and self-defense: At what price? Womens Stud Int Forum. 1991;14:49–62.
- 39. Fischhoff B. The sciences of science communication. Proc Natl Acad Sci USA. 2013;110:14033–14039. doi: 10.1073/pnas.1213273110.
- 40. Morgan MG. Theory and Practice in Policy Analysis. Cambridge Univ Press; New York: 2017.
- 41. Nickerson RS. How we know—And sometimes misjudge—What others know: Imputing our own knowledge to others. Psychol Bull. 1999;125:737–759.
- 42. Breakwell GM. The Psychology of Risk. 2nd Ed. Cambridge Univ Press; Cambridge, UK: 2014.
- 43. Fischhoff B, Brewer N, Downs JS, editors. Communicating Risks and Benefits: An Evidence-Based User’s Guide. Food and Drug Administration; Washington, DC: 2011.
- 44. Bruine de Bruin W, Güvenç U, Fischhoff B, Armstrong CM, Caruso D. Communicating about xenotransplantation: Models and scenarios. Risk Anal. 2009;29:1105–1115. doi: 10.1111/j.1539-6924.2009.01241.x.
- 45. Burns WJ, Clemen RT. Covariance structure models and influence diagrams. Manage Sci. 1993;39:816–834.
- 46. Bruine de Bruin W, Fischhoff B, Brilliant L, Caruso D. Expert judgments of pandemic influenza risks. Glob Public Health. 2006;1:178–193. doi: 10.1080/17441690600673940.
- 47. Carley K, Prietula MJ. Computational Organization Theory. Lawrence Erlbaum; Hillsdale, NJ: 1994.
- 48. Moreno JL. Sociometry, Experimental Method and the Science of Society. Beacon House; Boston: 1951.
- 49. Hybrid Vigor Institute. 2003. A multi-method analysis of the social and technical conditions for interdisciplinary collaboration (Hybrid Vigor Institute, San Francisco). Available at hybridvigor.net/interdis/pubs/hv_pub_interdis-2003.09.29.pdf. Accessed May 10, 2018.
- 50. Brossard D, Belluck P, Gould F, Wirz CD. 2018. Promises and perils of gene drives: Navigating the communication of complex, post-normal science. Proc Natl Acad Sci USA, in press.
- 51. Hancock PA, Nourbakhsh I, Stewart J. On the future of transportation in an era of automated and autonomous vehicles. Proc Natl Acad Sci USA. 2018;116:7684–7691. doi: 10.1073/pnas.1805770115.
- 52. Davis GF. How to communicate large-scale social challenges: The problem of the disappearing American corporation. Proc Natl Acad Sci USA. 2018;116:7698–7702. doi: 10.1073/pnas.1805867115.
- 53. Bruine de Bruin W, Morgan MG. Reflections on an interdisciplinary collaboration to inform public understanding of climate change, mitigation, and impacts. Proc Natl Acad Sci USA. 2018;116:7676–7683. doi: 10.1073/pnas.1803726115.
- 54. Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux; New York: 2011.
- 55. Fiske ST, Dupree C. Gaining trust as well as respect in communicating to motivated audiences about science topics. Proc Natl Acad Sci USA. 2014;111:13593–13597. doi: 10.1073/pnas.1317505111.
- 56. Food and Drug Administration. 2018. The voice of the patient: A series of reports from FDA’s Patient-Focused Drug Development initiative. Available at https://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm368342.htm. Accessed October 24, 2018.
- 57. Ericsson A, Simon HA. Verbal Reports as Data. MIT Press; Cambridge, MA: 1990.
- 58. Merton RK. The focussed interview and the focus group. Public Opin Q. 1987;51:550–566.
- 59. National Research Council. Intelligence Analysis for Tomorrow. National Academy Press; Washington, DC: 2011.
- 60. Lazarsfeld PF. The American Soldier: An expository review. Public Opin Q. 1949;13:377–404.
- 61. Reynolds LA, Tansey EM, editors. The MRC Applied Psychology Unit. Wellcome Witnesses to Twentieth Century Medicine, Vol 16. Wellcome Trust Centre for the History of Medicine at UCL; London: 2003.
- 62. Lupia A. Communicating science in politicized environments. Proc Natl Acad Sci USA. 2013;110:14048–14054. doi: 10.1073/pnas.1212726110.