Proceedings of the National Academy of Sciences of the United States of America. 2013 Aug 13;110(Suppl 3):14062–14068. doi: 10.1073/pnas.1212729110

Assessing what to address in science communication

Wändi Bruine de Bruin a,b,1, Ann Bostrom c
PMCID: PMC3752171  PMID: 23942122

Abstract

As members of a democratic society, individuals face complex decisions about whether to support climate change mitigation, vaccinations, genetically modified food, nanotechnology, geoengineering, and so on. To inform people’s decisions and public debate, scientific experts at government agencies, nongovernmental organizations, and other organizations aim to provide understandable and scientifically accurate communication materials. Such communications aim to improve people’s understanding of the decision-relevant issues, and if needed, promote behavior change. Unfortunately, existing communications sometimes fail when scientific experts lack information about what people need to know to make more informed decisions or what wording people use to describe relevant concepts. We provide an introduction for scientific experts about how to use mental models research with intended audience members to inform their communication efforts. Specifically, we describe how to conduct interviews to characterize people’s decision-relevant beliefs or mental models of the topic under consideration, identify gaps and misconceptions in their knowledge, and reveal their preferred wording. We also describe methods for designing follow-up surveys with larger samples to examine the prevalence of beliefs as well as the relationships of beliefs with behaviors. Finally, we discuss how findings from these interviews and surveys can be used to design communications that effectively address gaps and misconceptions in people’s mental models in wording that they understand. We present applications to different scientific domains, showing that this approach leads to communications that improve recipients’ understanding and ability to make informed decisions.


In democracies, citizens face complex decisions about whether to accept the potential hazards associated with proposed policies to reap their potential benefits. Such policies may involve climate change mitigation, vaccinations, genetically modified food, nanotechnology, geoengineering, and so on. Making informed decisions about these complex topics requires an understanding of the relevant science, especially to the extent that it pertains to the potential consequences of the available courses of action. This paper targets scientific experts at government agencies, nongovernmental organizations, and others who are charged with developing communication materials, such as brochures or websites, for members of the general public. The goal of their communications may vary across specific contexts, including informing individuals’ decisions, informing public debates, or enabling behavior change. Indeed, effective communications are crucial for helping individuals to make informed decisions and promoting constructive public debate about new technologies. Effective communications are also needed as part of policies that target behavior change (for example, by helping people to prepare for disasters, encouraging residential electricity savings and sustainable consumer choices, and promoting healthy behaviors). Here, we discuss strategies for identifying information that people need and want to improve their decisions.

Barriers to Effective Communication

Scientific experts may face several barriers to developing effective communication materials for stakeholders and members of the public. Miscommunications may occur when experts use themselves as model audience members and present the information that they themselves find most important and interesting. This strategy works well in most communications between colleagues with shared expertise or when discussing everyday topics with nonexperts (1, 2). However, like many experts, scientific experts may not remember what it was like to be a novice in their field and therefore, may have inaccurate intuitions about people’s informational needs. After many years of deliberate training and practice, specialized knowledge becomes intuitive, and technical terminology becomes central to communicating with other experts (3).

As a result, experts in different fields tend to lack good intuition about what nonexperts believe and what they still need to know to make more informed decisions (4, 5). For example, experts’ communications about climate change mitigation seem to focus on convincing those individuals who doubt the reality of climate change or find it of relatively little importance (6, 7). Sadly, the informational needs of those individuals who are motivated to curb climate change are often left unmet. Experts tend to provide laundry lists of activities for mitigating climate change, making it difficult even for proenvironmental recipients to recognize that energy efficiency (e.g., replacing conventional light bulbs with compact fluorescent lamps or light-emitting diodes) is more effective than energy conservation (e.g., turning off lights) (8, 9). Because the presentation of large numbers of options can lead to choice overload and decision avoidance, it may be better to concentrate communications on just the most effective activities (10).

Additionally, experts may present needlessly complex information and use jargon that is unfamiliar to or interpreted differently by nonexperts. For example, engineers talk about a 100-y flood rather than a flood that has a 1% chance of happening each year without realizing that people interpret the former as more likely and more predictable than the latter (11, 12). Moreover, both descriptions fail to recognize that people are more concerned about flood levels than flood frequency and prefer concrete predictions such as over 1 ft of water in the house (12). One approach to addressing such barriers is the mental models approach for developing more effective communications, which we describe below.
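
To make the contrast between the two framings concrete, the short calculation below shows why "a 1% chance of happening each year" does not mean one flood per century; the 30-year horizon is an illustrative assumption, not a figure from the paper.

```python
# Illustrative calculation (not from the paper): the chance of experiencing
# at least one "100-y flood" over a multi-year horizon, assuming independent
# years with a 1% annual probability.
annual_probability = 0.01   # a "100-y flood" has a 1% chance each year
years = 30                  # e.g., roughly the length of a typical mortgage

p_at_least_one = 1 - (1 - annual_probability) ** years
print(f"Chance of at least one such flood in {years} years: {p_at_least_one:.0%}")
# -> roughly 26%, far from the "once per century" intuition the label invites
```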

Mental Models Approach

Rather than relying on experts’ intuitions about what people need to know, communication materials should be based on evidence about the relevant beliefs that audience members already have and what they are still missing (13, 14). Indeed, research in science education (15, 16) and health communications (17) as well as cognitive anthropology and psychology (18, 19) suggests that people interpret new information in light of their existing beliefs, also referred to as their mental models. The mental models approach to developing communication materials, therefore, involves both the relevant experts and the intended audience members. The four steps of the mental models approach are presented in Fig. 1. The approach begins by identifying what people should know to make informed decisions about the topic under consideration based on a scientific literature review and recommendations from expert panels. For example, during the 2005 threat of an H5N1 influenza outbreak, infectious disease experts indicated that vaccines would likely not be available in time, and therefore, they recommended nonpharmaceutical interventions, such as increased hand washing or use of face masks (20, 21). However, experts had no evidence to indicate how well people would be able to implement these recommendations during a pandemic influenza outbreak (20, 21) or what information people would need to overcome any potential barriers to effective implementation (22). These findings highlight the need for social science research with intended audience members (20, 21).

Fig. 1. The four steps of the mental models approach to developing communications.

Hence, the second step of the mental models approach involves interviews and survey methods to elicit people’s mental models (14). Interviews can provide an initial characterization of people’s beliefs and decisions as well as the wording that they prefer to describe the relevant issues. Follow-up surveys with larger samples are recommended for examining the prevalence of interviewees’ beliefs and drivers of decisions. Both the expert decision model of step 1 and the lay decision model of step 2 can be formally represented in an influence diagram (14, 23).
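
For readers unfamiliar with influence diagrams, the sketch below shows one minimal way such a model might be stored and queried as a directed graph of influences; the node names are hypothetical and are not taken from the expert or lay models discussed here.

```python
# Minimal sketch (hypothetical nodes): an influence diagram stored as a
# directed graph, mapping each concept to the concepts it influences.
influence_diagram = {
    "energy use":          ["CO2 emissions"],
    "CO2 emissions":       ["climate change"],
    "climate change":      ["flood risk"],
    "mitigation behavior": ["energy use"],
}

def influences(diagram, source, target, seen=None):
    """Return True if 'source' influences 'target' directly or indirectly."""
    seen = seen or set()
    if source in seen:
        return False
    seen.add(source)
    return any(nxt == target or influences(diagram, nxt, target, seen)
               for nxt in diagram.get(source, []))

print(influences(influence_diagram, "mitigation behavior", "flood risk"))  # True
```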

In the third step, a systematic comparison of the expert and lay decision models reveals differences in how experts and lay people think about the target risk decisions and reflects the decision-relevant information that is missing from people’s mental models (14, 24–27). To design effective communications, scientific educators should address these differences in focus, including gaps and misconceptions, with involvement from experts to ensure accuracy and from intended audience members to improve ease of understanding. In the fourth step, evaluation studies test whether the resulting communication is effective in terms of facilitating recipients’ understanding and informed decisions (28).

This paper provides an introduction to the second step of the mental models approach, because scientific experts often lack training in the social science methods needed to assess what people need (or want) to know from science communication. By comparison, scientific experts tend to be familiar with the literature reviews and expert panels needed for the first step. Information about how to design communications and how to conduct randomized controlled trials is available elsewhere (29). However, although textbooks on social science methods discuss interviews and surveys, they do not explain how to use them to inform the design of communications. Existing guidelines for the mental models approach to developing communications also have not presented the level of detail provided here.

Next, we introduce established social science methods for conducting a combination of interviews and surveys with the specific goal of informing scientific experts’ communications with nonexpert audiences. We bring together information that has been dispersed in the literature across the social sciences. The content reflects our best judgment of what scientific experts need to know about designing interviews and surveys to inform their communications based on our experiences in working with experts in domains such as engineering, environmental science, economics, and public health.

Semistructured Interviews

Goals and Challenges.

When interviews are conducted to inform the design of communication materials, the goal is to characterize interviewees’ beliefs about the topic under consideration, including knowledge gaps and misconceptions in need of intervention as well as preferred wording (14). One of the main challenges is to ask questions without suggesting specific ideas or terminology. To this end, interviews typically start with open-ended questions that are followed by prompts like “can you tell me more about that?” to encourage additional discussion of any topics that are raised. After exhausting interviewees’ explanations, follow-up questions become more directive, aiming to systematically cover relevant issues (such as exposure to the risk, potential effects, and mitigation) or probe for lay definitions of common expert-preferred terms.

This structure allows for interviewees to express their mental models before they are altered by interviewers’ more directive questions. For example, our interviews on global climate change in the early 1990s opened by merely asking “tell us about climate change,” in response to which interviewees mentioned causes of climate change, such as the ozone hole, aerosol spray cans, and carbon dioxide, but rarely energy use (30). Thus, by starting with an open-ended question, we were able to learn that interviewees conflated stratospheric ozone depletion and global warming and were not focused on climate change mitigation through reduced energy use.

Another example that highlights the importance of starting interviews with general questions comes from our project about smart meters. Utility companies had been installing smart meters in people’s homes with the goal of tracking their electricity use in small time intervals and charging more for electricity used during peak hours. Interviewers began by asking, “What have you heard about smart meters?” Interviewees responded favorably but they had the unrealistic expectation that smart meters would provide useful appliance-specific feedback and suggestions for saving electricity (31). If interviewees had received more specific questions that revealed the actual purpose of smart meters early on, we would have changed their knowledge and failed to reveal these misconceptions about smart meters.

Another challenge for interviewers is to listen actively, drawing on counseling interview methods used by social workers and ethnographic interview methods used by anthropologists (32, 33). These methods recommend maintaining an encouraging, nonjudgmental tone, even when some of what is shared may seem to the interviewer to be incorrect, ill-advised, or morally wrong. Whenever interviewees seem to be searching for words, interviewers should wait patiently and resist helping by filling in the blanks. An associated challenge is the need to refrain from educating interviewees about the topic under consideration. In our experience, this challenge is especially difficult for domain experts, perhaps because of their passion for their topic of expertise. Experienced interviewers who are not domain experts may be more skilled at waiting until the end of the interview before offering information, such as existing brochures or contact information for outreach organizations. For example, interviewers gave adolescents who completed our questions about HIV/AIDS the number for the National AIDS Hotline (34).

Providing interviewees with access to relevant information after they complete their interview follows guidelines set by some Institutional Review Boards (IRBs), which aim to protect the rights and wellbeing of research participants. All research that involves human participants needs to be approved by the IRB at the researchers’ institution. When conducting interviews, an important IRB requirement is to protect interviewees’ confidentiality by avoiding the use of names on recordings or transcripts. Other IRB requirements may pose challenges for research design (35). For example, some IRBs have insisted that participants should be told that they can skip questions with which they feel uncomfortable or discontinue the interview at any time while still receiving compensation. Although some of our collaborators have expressed concerns that such instructions may introduce noncompliance, we have found no such evidence in our many years of interviewing experience.

Common Interview Modes.

Most interviews are conducted in person or over the phone, and there is little agreement about which mode is better. It has variously been suggested that phone interviews increase disclosure because they reduce anonymity concerns (36), that the increased social distance of phone interviews decreases willingness to disclose sensitive information about, for example, sexual activity or drug use (37), and that mode makes no difference (38). Because having experienced interviewers may be more important than the specific interview mode, researchers should choose the mode that meets their needs (39). A benefit of phone interviews is that recruiting is not bound by geography, which allowed us to interview Florida residents about hurricane risk perceptions from our offices in Pennsylvania (40). Telephone interviews also show increased interviewer adherence to the protocol, because they allow interviewers to read along without being distracted by looking at the interviewee (39). However, a potential disadvantage of phone interviews is that interviewers miss visual cues indicating interviewee fatigue or confusion (39).

Focus groups represent another mode for interviewing individuals from populations of interest while allowing them to interact with each other. Focus groups are recommended for studying group decision processes when, for example, prioritizing strategies for reducing environmental risks (41). However, focus groups are not recommended for assessing individuals’ in-depth understanding of specific topics, because focus group participants tend to hold back beliefs that they do not share with the rest of the group (42). As a result, a comparison of focus groups and one-on-one interviews revealed systematic differences in what people shared about natural resource valuation (43).

Analyses.

Once transcribed, interviews should be coded for content. Two independent coders need to judge whether each interviewee covered the key concepts needed in their mental models to make informed decisions. These key concepts should have been identified through a scientific literature review and expert panels as part of the expert decision model (Fig. 1) and typically cover exposure to the risk, potential effects, and mitigation. Thus, the coding facilitates a systematic comparison of interview content with what experts believe people need to know to make informed decisions. If interviewees raise topics that are not part of the expert decision model, new codes may be added, especially if they refer to decision problems that experts had not considered. Additionally, scientific experts should read through the interviews and highlight possible misunderstandings. Hence, the final set of codes highlights what interviewees already know as well as which key concepts are missing or potentially misunderstood. For example, in interviews about childhood vaccination, we found that interviewees described immunity as provided to a vaccinated individual but omitted references to herd immunity as benefiting a community in which most members have been vaccinated (44).
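
As a rough illustration of this comparison step, the sketch below tallies which expert-identified key concepts appear in each coded interview and which are missing; the concept labels and codings are hypothetical.

```python
# Minimal sketch with hypothetical labels: compare coded interview content
# against the key concepts in the expert decision model.
expert_concepts = {"exposure", "potential effects", "mitigation", "herd immunity"}

coded_interviews = {
    "interviewee_01": {"exposure", "mitigation"},
    "interviewee_02": {"exposure", "potential effects"},
    "interviewee_03": {"mitigation"},
}

for person, mentioned in coded_interviews.items():
    missing = expert_concepts - mentioned
    print(f"{person}: mentioned {sorted(mentioned)}; missing {sorted(missing)}")

# Concepts missing from most interviews are candidates for communication content.
coverage = {c: sum(c in m for m in coded_interviews.values()) for c in expert_concepts}
print(coverage)
```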

Subsequently, two or more judges should be trained in applying the coding scheme to interviews that are representative of but not part of the actual study, taken, for example, from interviewers’ practice sessions (45). Their interrater agreement reflects the reliability of the coding, with 70% being relatively good for complex coding schemes (46, 47). Measures of reliability typically reflect interrater agreement after correcting for chance agreement (48, 49). If specific concepts are too similar for coders to distinguish, coding reliability may be improved by using a more general coding category that captures the combined concepts. After training, each of the actual interviews should be coded independently by two or more of the judges to assess the reliability of the coding exercise.
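
One common chance-corrected agreement measure is Cohen’s kappa (48); a minimal sketch of its computation for two coders’ judgments on a single code is shown below, with hypothetical codings.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' labels (Cohen's kappa)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings: 1 = concept mentioned in the interview, 0 = not mentioned
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```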

Sampling.

Because conducting interviews and coding their content are labor-intensive exercises, it is common practice to stop doing additional interviews as soon as no more new beliefs emerge—thus reaching what qualitative researchers call saturation (14). Because the identification of new ideas tends to level off quickly, 10–15 interviews are often sufficient to capture the most commonly held beliefs (14, 50). To increase the likelihood of covering a wide variety of beliefs, interviewees should be recruited from diverse backgrounds and stakeholder groups. However, with a small sample size, it remains unlikely that the interviewees are representative of the entire population. Follow-up surveys are more cost-effective for obtaining sample sizes that are large enough to confidently examine the prevalence of beliefs, how beliefs vary between groups of participants, or which factors drive decisions. However, conducting surveys without first conducting interviews introduces the risk that survey questions do not cover relevant beliefs or are phrased in ways that are difficult for respondents to understand. Below, we describe survey design in more detail.
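
A minimal sketch of how saturation might be monitored in practice: track how many new codes each successive interview contributes and stop once several interviews in a row add nothing new. The codes and the stopping rule below are illustrative only.

```python
# Illustrative sketch: track how many new codes each successive interview adds,
# and stop once several interviews in a row add nothing new ("saturation").
interview_codes = [
    {"ozone", "aerosols"},            # interview 1
    {"ozone", "co2"},                 # interview 2
    {"co2", "deforestation"},         # interview 3
    {"ozone", "co2"},                 # interview 4
    {"aerosols"},                     # interview 5
]

seen, run_without_new = set(), 0
for i, codes in enumerate(interview_codes, start=1):
    new = codes - seen
    seen |= codes
    run_without_new = 0 if new else run_without_new + 1
    print(f"Interview {i}: {len(new)} new code(s)")
    if run_without_new >= 2:          # illustrative stopping rule
        print(f"Saturation reached after interview {i}")
        break
```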

Follow-Up Public Perception Surveys

Goals and Challenges.

When public perception surveys are conducted to inform the design of communication materials, the goal is to examine the prevalence of the specific beliefs expressed in initial interviews as well as how much those beliefs and other factors drive decisions. Such analyses need the increased statistical power of larger sample sizes. Knowledge questions should be designed to measure how well people know the facts that experts deem relevant for making informed decisions as well as the frequency of additional beliefs that have been identified by interviewees as relevant to their decisions.

Structured knowledge questions tend to be recommended for use with larger samples, because correct responses are easier to score with structured than with open-ended questions. One main challenge is that structured questions can be inadvertently leading by providing respondents with cues that help them to arrive at the correct answer (51, 52). Knowledge tests that teach new information lead to overestimations of how much people actually know and fail to identify potential misunderstandings in need of intervention. True/false statements are recommended over multiple choice questions, because they are less likely to provide cues that artificially improve respondents’ scores (53). However, repeated exposure to false statements on true/false or multiple choice tests may lead to incorrect memories of these statements being true (54, 55). Providing formative feedback before respondents leave the survey session may alleviate some of this problem (56).

True/false questions can be followed up with simple assessments of confidence in knowledge. Both incorrect beliefs held with high confidence and correct beliefs held with low confidence are in need of intervention (34). Because confidence is naturally expressed in terms of being x% sure, confidence ratings for true/false statements should be assessed on a scale ranging from 50% (just guessing) to 100% (certainty). Research on calibration of confidence has proposed systematic methods for comparing overall confidence ratings across items with the percent of correct responses across those same items (57). Although this procedure doubles the number of questions, it may be better than assessing uncertainty by adding a “don't know” option to each true/false statement. Indeed, adding a “don't know” option tends to increase nonresponses, keeping respondents from expressing potentially meaningful beliefs, even if held with relatively lower confidence (58, 59). One compromise is to offer “maybe true” and “maybe false” options, which assess respondents’ uncertainty without giving them an explicit “don't know” option. Although this compromise lowers the response burden, it provides less information about confidence than x% sure confidence ratings.
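
A minimal sketch of the kind of calibration check described above, comparing mean confidence with the percentage of correct answers across items; the responses are hypothetical.

```python
# Hypothetical responses: each tuple is (confidence in %, answered correctly?).
responses = [(50, False), (60, True), (70, True), (80, False),
             (90, True), (100, True), (100, False), (60, True)]

mean_confidence = sum(conf for conf, _ in responses) / len(responses)
percent_correct = 100 * sum(correct for _, correct in responses) / len(responses)

print(f"Mean confidence: {mean_confidence:.0f}%")
print(f"Percent correct: {percent_correct:.0f}%")
print(f"Overconfidence:  {mean_confidence - percent_correct:+.0f} points")
# Items answered incorrectly with high confidence are prime targets for communication.
```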

Another challenge when designing surveys is to ask survey questions that are relevant to people’s experiences. The specific issues that are important may be gleaned from the initial interviews as well as the existing research literature on how people make decisions about the topic under consideration. Knowledge is not the only factor that is relevant to people’s informational needs. Individual differences in various abilities also affect how people respond to communications. The judgment and decision-making literature offers validated individual differences measures of decision-making competence (60), numeracy or the ability to use numerical information in decisions (61, 62), and graph literacy or the ability to understand graphical communications (63). In addition to ability measures, scales are also available for assessing people’s risk preferences in specific domains (64), environmental attitudes (65, 66), and state-specific emotions (67).

Because one goal of surveys is to assess what drives people’s decisions, questions should assess those decisions as well. Different question formats are available for assessing preferences for decision options. Survey participants may be asked to fill in the blank, choose the best option from a set, rank options in order of preference, or rate each option on a scale (1 = very bad; 7 = very good) (reviews in refs. 35 and 68). Because the question format may affect people’s answers (69), using multiple question formats allows for the computation of correlations between differentially elicited responses, thus revealing the consistency of the underlying preferences. To reduce reliance on self-reports, information about respondents’ actual behavior should ideally also be obtained through independent observations or archival records.
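
As an illustration of checking consistency across elicitation formats, the sketch below correlates hypothetical rankings with hypothetical ratings of the same options using a rank-order (Spearman) correlation.

```python
from scipy.stats import spearmanr

# Hypothetical responses: four decision options ranked (1 = most preferred)
# and rated on a 1-7 scale in a separate question format.
mean_rank   = [1, 2, 3, 4]          # option A ranked best, option D worst
mean_rating = [6.5, 5.8, 4.1, 2.9]  # ratings should run in the opposite direction

rho, p_value = spearmanr(mean_rank, mean_rating)
print(f"Spearman rho = {rho:.2f} (strongly negative = consistent preferences)")
```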

Another main challenge is to write survey questions in comprehensible terms that allow respondents to express what they know. Indeed, the survey design literature treats surveys as a form of communication between researchers and respondents (70). For such communication to be effective, researchers and respondents should interpret survey questions in the same way. To facilitate respondents’ understanding and reduce missing responses, survey designers can improve the readability of question wording in three ways. First, they should write the survey at grade levels appropriate for the intended audience. Average adult literacy in the United States varies between seventh- and ninth-grade levels (71). The Flesch–Kincaid readability measure (72) computes the grade level of education needed to read a text with a formula that takes into account the number of words per sentence and the number of syllables per word (73, 74). Second, survey questions will be easier to understand if they use the intuitive language of the intended audience by borrowing terms from the interviews described above.
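
A rough sketch of the Flesch–Kincaid grade-level computation is shown below; the syllable counter is a crude heuristic, so the result should be treated as an approximation rather than the official measure.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Approximate US grade level needed to read 'text' (Flesch-Kincaid formula)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

question = "Smart meters record how much electricity your household uses during each hour of the day."
print(f"Estimated grade level: {flesch_kincaid_grade(question):.1f}")
```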

The third strategy involves conducting cognitive pilot interviews, which are designed to reveal whether respondents interpret questions as intended and how variations in interpretations affect responses (35, 68, 75, 76). In cognitive pilot interviews, participants may be asked to read a draft of the survey out loud and think out loud while answering each question. Afterward, participants may also be asked to provide more detailed feedback about the survey. Such cognitive pilot interviews may reveal that even simplified wording may introduce ambiguity. For example, consumer surveys that ask about inflation expectations avoid the term inflation and ask about prices in general instead. Some respondents recognize that wording as referring to inflation, whereas others think about their personal price experiences (77). When thinking about personal price experiences, prices that have shown the largest changes—such as gas prices—are most likely to come to mind (78). Perhaps as a result, questions that directly ask about expectations for inflation show much less disagreement between respondents than questions about expectations for prices in general (79). Thus, survey questions need to use simple terms that are specific enough to communicate the question designers’ intent.

Even seemingly straightforward demographic questions may yield respondent interpretations that are not in line with researchers’ expectations. Like other researchers, we have noticed that respondents who are cohabitating in heterosexual relationships sometimes fail to choose the response option to indicate that they are married or living with a partner, because they interpret the term partner as reserved for homosexual relationships (80). Similar confusion has even been found for seemingly simple categorizations of race and ethnicity (81). Hence, before implementing large-scale surveys, it is key to conduct cognitive pilot interviews to find out how respondents may (mis)interpret the stated questions.

Common Survey Modes.

After cognitive pilot interviews have been conducted to ensure that respondents understand the survey questions as intended, public perception surveys can be self-administered on paper or online or interviewer-administered in person or on the phone. Choice of mode should, among other things, take into account the available time and funding, the abilities and preferences of the respondents, and the sensitivity of the survey topic (an overview is in ref. 35). For example, a benefit of interviewer-administered one-on-one surveys or experimenter-led group survey sessions is that researchers can interact with participants and give in-person instructions. However, mailing paper surveys or conducting internet surveys may be more efficient in terms of reaching geographically wider audiences and allowing people to participate when it suits them best. Researchers have found that response rates can be improved by inducing trust, sending follow-up reminders, and incentivizing participation (for example, by providing sufficient payment) (35).

Analyses.

Reports of survey results should include how commonly specific expert-identified facts are known and how prevalent interviewee-identified beliefs are in the population. Additional regression analyses should then examine how knowledge and attitudes are related to decisions. Thus, these analyses should reveal common misunderstandings that seem to drive behavior, which are most in need of intervention. For example, one study on HIV/AIDS revealed that misconceptions about how to use condoms that had initially been shared by a few interviewees were common, even in otherwise knowledgeable survey samples, and associated with their tendency to engage in unsafe sexual behaviors (34). Hence, interviews and surveys can provide communication designers with systematic insights about lay decision models, reflecting what their audience knows and still needs to know to make more informed decisions (14).
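
A minimal sketch of such a regression analysis, here a logistic regression of a reported behavior on knowledge and attitude scores; the data are simulated and the variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical survey data: knowledge score (0-10), risk attitude (1-7),
# and whether the respondent reports the behavior of interest (0/1).
rng = np.random.default_rng(0)
n = 200
knowledge = rng.integers(0, 11, n)
attitude = rng.integers(1, 8, n)
logit = -2 + 0.3 * knowledge + 0.2 * attitude        # assumed relationship
behavior = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated outcome

X = sm.add_constant(np.column_stack([knowledge, attitude]))
model = sm.Logit(behavior.astype(int), X).fit(disp=False)
print(model.summary(xname=["intercept", "knowledge", "attitude"]))
```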

Sampling.

Researchers need to decide how to recruit respondents from among the potential audiences of the communications that will be informed by the survey results. In the past, community groups have recruited their members for our surveys; these participants were then invited to donate part of their participation fees to the organization through which they were recruited (60). Random selection is most likely to yield an unbiased sample but may be difficult to implement if complete lists of specific audience members are unavailable. A diverse convenience sample may be sufficient if the researchers’ main goal is to examine the relationships between beliefs and behaviors. Formulas are available to determine the sample size needed to reliably conduct specific analyses on beliefs and behavior and to compare subgroups in the population (35).
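
As one example of such a formula, the sketch below applies the standard large-population sample-size calculation for estimating a proportion within a given margin of error; this is a textbook formula, not necessarily the specific procedure recommended in ref. 35.

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, p=0.5):
    """Respondents needed to estimate a proportion within +/- margin_of_error.

    Uses the standard large-population formula n = z^2 * p * (1 - p) / e^2;
    p = 0.5 gives the most conservative (largest) sample size.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size_for_proportion(0.05))   # ~385 for +/-5% at 95% confidence
print(sample_size_for_proportion(0.03))   # ~1068 for +/-3% at 95% confidence
```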

Case Studies

Two notable projects highlight how insights from interviews and surveys can help scientific experts to design communication materials, such as brochures or websites, for members of the general public. The first example pertains to the domain of public health, where experts have devoted extensive effort to promoting abstinence, by which they mean having no sex until marriage, as a strategy for reducing adolescents’ risks of acquiring sexually transmitted infections. In the first step of the mental models approach, literature reviews and information from experts suggested that many sexuality education programs were not actually effective in terms of changing behavior (27, 82). Interviews and follow-up survey research with female adolescents led to several important insights. First, female adolescents had misconceptions that threatened the quality of their decisions. For example, some misinterpreted the recommended abstinence strategy as including anal sex, which actually poses increased risk of sexually transmitted infections (83). Unfortunately, teachers often omit taboo subjects like anal sex from school-based sexuality education (84), reducing the likelihood that communications will teach adolescents what they need to know to better protect themselves. Additionally, female adolescents perceived little control over sexual decisions (85, 86) and faced challenges in communicating with romantic partners about safe sex strategies (87). We designed communications that taught female adolescents negotiation skills and risk reduction strategies in accurate and understandable terms. The resulting DVD intervention increased abstinence, increased condom use among those individuals who did have sex, and reduced the likelihood of recipients acquiring sexually transmitted infections 6 mo later (88).

A second example comes from a more technical domain: Carbon Capture and Sequestration (CCS), a technology that prevents carbon dioxide emissions from coal-fired power plants from reaching the atmosphere by capturing them and storing them deep underground. Experts have suggested that CCS may be part of a low-carbon electricity generation portfolio aiming to curb climate change. Interviewees associated CCS with nuclear waste, worried about unintended long-term negative effects on nature, and expressed preferences for low-carbon technologies such as wind and solar (89). Existing communications about CCS focused on promoting that technology alone, making it difficult for recipients to make an informed comparison with other low-carbon alternatives about which they also had misunderstandings. For example, people mistakenly believe that nuclear power emits CO2 and that solar power is free (90). However, carefully designed communication materials that systematically provided information about the features of common low-carbon alternatives, while addressing prevalent knowledge gaps and misconceptions, yielded some willingness to consider CCS as part of a low-carbon electricity generation portfolio that involves multiple technologies (90).

Conclusions

The information provided here targets scientific experts at government agencies, nongovernmental organizations, or other organizations who seek to develop communication materials with the goal of informing individuals’ decisions, facilitating productive public debate, and if needed, promoting effective behavior change. To develop effective communication materials for members of the general public, scientific experts need to understand what information people need to know. The mental models approach to developing communications recognizes that people need information that not only fixes the gaps and misconceptions in their knowledge but also builds on their existing beliefs and preferred wording (Fig. 1). Interviews and follow-up surveys with members of the intended audience help to assess their informational needs, but scientific experts often lack training in social science research methods. We, therefore, provided an overview of how to conduct interviews to characterize people’s beliefs in their own terms and follow-up surveys to identify the prevalence of these beliefs and their relationships with decisions. We hope that this information will inspire interdisciplinary research involving scientific experts and social scientists, thus promoting the development of more effective communication materials about scientific topics relevant to the general public.

Based on the results of interviews and surveys, subsequent communication materials should be designed to target the most common misunderstandings that affect people’s decisions (14, 27, 29). Elsewhere, detailed guidelines are available for how to best present quantitative and qualitative information, how to improve the readability and usability of materials, and how to take into account the differential needs of audiences with low numeracy or low literacy (29). After drafting communication materials, it is generally recommended to conduct cognitive pilot interviews, in which members of the intended audience first read materials aloud and subsequently provide suggestions for improvement (14, 76). Any resulting revisions will need to be approved by domain experts to make sure that the content still reflects the relevant science accurately. Before disseminating the communication, evaluation studies with larger samples are needed to examine whether the designed communication does, indeed, lead to desired improvements on measures of understanding and informed decision-making (28).

Using these principles for developing effective communications need not be costly, because a large body of evidence already exists about people’s informational needs regarding specific topics. Nevertheless, developing effective communication strategies does require an investment of resources. Because of the high stakes that often ride on effective communications, such investments will be worthwhile. Financial analysts have estimated that 70% of private firms’ assets are intangibles, such as goodwill, which can be lost through ineffective communications (29). The reputations of other organizations also depend on their ability to communicate.

In addition to seeking input from audience members, the mental models approach promotes collaborations between experts from multiple disciplines. Scientific domain experts are needed to identify the key concepts that people need to understand to make informed decisions. Social scientists are needed to design interviews and surveys aiming to understand people’s beliefs as well as develop and test communications that target people’s ability to make informed decisions. The resulting evidence-based communications are more likely to address what people need to know to make more informed decisions, allowing them to obtain better outcomes for themselves and the society in which they live.

Acknowledgments

We thank Arwen Pieterse, Gabrielle Wong-Parodi, and two anonymous reviewers for their helpful comments. This work was supported by Center for Climate and Energy Decision Making Grant SES-0949710 through a cooperative agreement between the National Science Foundation and Carnegie Mellon University.

Footnotes

The authors declare no conflict of interest.

This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “The Science of Science Communication,” held May 21–22, 2012, at the National Academy of Sciences in Washington, DC. The complete program and audio files of most presentations are available on the NAS Web site at www.nasonline.org/science-communication.

This article is a PNAS Direct Submission. D.A.S. is a guest editor invited by the Editorial Board.

References

1. Nickerson RS. How we know—and sometimes misjudge—what others know: Imputing one’s own knowledge to others. Psychol Bull. 1999;125(6):737–759.
2. Dawes RM, Mulford M. The false consensus effect and overconfidence: Flaws in judgment or flaws in how we study judgment? Organ Behav Hum Decis Process. 2004;65(3):201–211.
3. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100(3):363–406.
4. Fischhoff B, Slovic P, Lichtenstein S. Lay foibles and expert fables in judgments about risk. Am Stat. 1982;36(3b):240–255.
5. Fischhoff B. Why (cancer) risk communication can be hard. J Natl Cancer Inst Monogr. 1999;25(25):7–13. doi: 10.1093/oxfordjournals.jncimonographs.a024213.
6. Leiserowitz AA. American risk perceptions: Is climate change dangerous? Risk Anal. 2005;25(6):1433–1442. doi: 10.1111/j.1540-6261.2005.00690.x.
7. Lorenzoni I, Pidgeon NF, O’Connor RE. Dangerous climate change: The role for risk research. Risk Anal. 2005;25(6):1387–1398. doi: 10.1111/j.1539-6924.2005.00686.x.
8. Attari SZ, DeKay ML, Davidson CI, Bruine de Bruin W. Public perceptions of energy consumption and savings. Proc Natl Acad Sci USA. 2010;107(37):16054–16059. doi: 10.1073/pnas.1001509107.
9. Gardner G, Stern P. The short list: The most effective actions U.S. households can take to curb climate change. Environment Magazine. 2008;50(5):12–24.
10. Schwartz B. The Paradox of Choice: Why More Is Less. New York: Harper Perennial; 2004.
11. Keller C, Siegrist M, Gutscher H. The role of the affect and availability heuristics in risk communication. Risk Anal. 2006;26(3):631–639. doi: 10.1111/j.1539-6924.2006.00773.x.
12. Bell HM, Tobin GA. Efficient and effective? The 100-year flood in the communication and perception of flood risk. Environmental Hazards. 2007;7(4):302–311.
13. Jungermann H, Schütz H, Thüring M. Mental models in risk assessment: Informing people about drugs. Risk Anal. 1988;8(1):147–155. doi: 10.1111/j.1539-6924.1988.tb01161.x.
14. Morgan MG, Fischhoff B, Bostrom A, Atman CJ. Risk Communication: A Mental Models Approach. Cambridge, United Kingdom: Cambridge Univ Press; 2002.
15. Gentner D. In: International Encyclopedia of the Social and Behavioral Sciences. Smelser NJ, Bates PB, editors. Amsterdam: Elsevier; 2002. pp. 9683–9687.
16. Gentner D, Stevens A, editors. Mental Models. Hillsdale, NJ: Erlbaum; 1983.
17. Meyer D, Leventhal H, Gutmann M. Common-sense models of illness: The example of hypertension. Health Psychol. 1985;4(2):115–135. doi: 10.1037//0278-6133.4.2.115.
18. Kempton W. Two theories of home heat control. Cogn Sci. 1986;10(1):75–90.
19. Nersessian NJ. In: Cognitive Models of Science. Giere RN, editor. Minneapolis: Univ of Minnesota Press; 1992. pp. 3–45.
20. Aledort JE, Lurie N, Wasserman J, Bozzette SA. Non-pharmaceutical public health interventions for pandemic influenza: An evaluation of the evidence base. BMC Public Health. 2007;7:208–214. doi: 10.1186/1471-2458-7-208.
21. Bruine de Bruin W, Fischhoff B, Brilliant L, Caruso D. Expert judgments of pandemic influenza risks. Glob Public Health. 2006;1(2):178–193. doi: 10.1080/17441690600673940.
22. Fischhoff B, Bruine de Bruin W, Güvenç U, Brilliant L, Caruso D. Analyzing disaster risks and plans: An avian flu example. J Risk Uncertain. 2006;33(1–2):131–149.
23. Wood MD, Bostrom A, Bridges T, Linkov I. Cognitive mapping tools: Review and risk management needs. Risk Anal. 2012;32(8):1333–1348. doi: 10.1111/j.1539-6924.2011.01767.x.
24. Bostrom A, Atkinson E. In: Trust in Cooperative Risk Management: Uncertainty and Scepticism in the Public Mind. Siegrist M, Earle TC, Gutscher H, editors. London: Earthscan; 2007. pp. 173–186.
25. Bostrom A, Fischhoff B, Morgan MG. Characterizing mental models of hazardous processes: A methodology and an application to radon. J Soc Issues. 1992;48(4):85–100.
26. Bostrom A, Lashof D. In: Creating a Climate for Change: Communicating Climate Change—Facilitating Social Change. Moser S, Dilling L, editors. Cambridge, United Kingdom: Cambridge Univ Press; 2007. pp. 31–43.
27. Bruine de Bruin W, Downs JS, Fischhoff B. In: Thinking with Data. Lovett MC, Shah P, editors. Mahwah, NJ: Erlbaum; 2007. pp. 421–439.
28. Downs JS. In: Communicating Risks and Benefits: An Evidence-Based User’s Guide. Fischhoff B, Brewer NT, Downs JS, editors. Silver Spring, MD: Food and Drug Administration; 2011. pp. 11–17.
29. Fischhoff B, Brewer NT, Downs JS. Communicating Risks and Benefits: An Evidence-Based User’s Guide. Silver Spring, MD: Food and Drug Administration; 2011.
30. Bostrom A, Morgan MG, Fischhoff B, Read D. What do people know about global climate change? 1. Mental models. Risk Anal. 1994;14(6):959–970.
31. Krishnamurti T, et al. Preparing for smart grid technologies: A behavioral decision research approach to understanding consumer expectations about smart meters. Energy Policy. 2012;41:790–797.
32. Kvale S. Interviews: Learning the Craft of Qualitative Research Interviewing. Thousand Oaks, CA: Sage Publications; 2009.
33. Spradley J. The Ethnographic Interview. New York: Holt, Rinehart and Winston; 1979.
34. Bruine de Bruin W, Downs JS, Fischhoff B, Palmgren C. Development and evaluation of an HIV/AIDS knowledge measure for adolescents focusing on misunderstood concepts. J HIV AIDS Prev Child Youth. 2007;8(1):35–57.
35. Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. New York: Wiley; 2009.
36. Moum T. Mode of administration and interviewer effects in self-reported symptoms of anxiety and depression. Soc Indic Res. 1998;45(1–3):279–318.
37. Aquilino WS. Mode effects in surveys of drug and alcohol use: A field experiment. Public Opin Q. 1994;58(2):210–240.
38. Greenfield TK, Midanik LT, Rogers JD. Effects of telephone versus face-to-face interview modes on reports of alcohol consumption. Addiction. 2000;95(2):277–284. doi: 10.1046/j.1360-0443.2000.95227714.x.
39. Shuy RW. In: Handbook of Interview Research: Context and Method. Gubrium JG, Holstein JA, editors. Thousand Oaks, CA: Sage Publications; 2002. pp. 537–555.
40. Klima K, Bruine de Bruin W, Morgan MG, Grossmann I. Public perceptions of hurricane modification. Risk Anal. 2012;32(7):1194–1206. doi: 10.1111/j.1539-6924.2011.01717.x.
41. Willis HH, DeKay ML, Morgan MG, Florig HK, Fischbeck PS. Ecological risk ranking: Development and evaluation of a method for improving public participation in environmental decision making. Risk Anal. 2004;24(2):363–378. doi: 10.1111/j.0272-4332.2004.00438.x.
42. Levine JM, Moreland RL. In: Advanced Social Psychology. Tesser A, editor. New York: McGraw-Hill; 1995.
43. Kaplowitz MD, Hoehn JP. Do focus groups and individual interviews reveal the same information for natural resource valuation? Ecol Econ. 2001;36(2):237–247.
44. Downs JS, Bruine de Bruin W, Fischhoff B. Parents’ vaccination comprehension and decisions. Vaccine. 2008;26(12):1595–1607. doi: 10.1016/j.vaccine.2008.01.011.
45. Neuendorf KA. The Content Analysis Guidebook. Thousand Oaks, CA: Sage Publications; 2002.
46. Hruschka DJ, et al. Reliability in coding open-ended data: Lessons learned from HIV behavioral research. Field Methods. 2004;16(3):307–331.
47. Bernard HR, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. Thousand Oaks, CA: Sage Publications; 2010.
48. Cohen J. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70:213–220. doi: 10.1037/h0026256.
49. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1(1):77–89.
50. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.
51. Krosnick JA. In: Context Effects in Social and Psychological Research. Schwarz N, Sudman S, editors. New York: Springer; 1991.
52. Attali Y, Bar-Hillel M. Guess where: The position of correct answers in multiple-choice test items as a psychometric variable. J Educ Meas. 2003;40(2):109–128.
53. Bruine de Bruin W, Fischhoff B. The effect of question format on measured HIV/AIDS knowledge: Detention center teens, high school students, and adults. AIDS Educ Prev. 2000;12(3):187–198.
54. Roediger HL 3rd, Marsh EJ. The positive and negative consequences of multiple-choice testing. J Exp Psychol Learn Mem Cogn. 2005;31(5):1155–1159. doi: 10.1037/0278-7393.31.5.1155.
55. Toppino TC, Brochin HA. Learning from tests: The case of true/false examinations. J Educ Res. 1989;83(2):119–124.
56. Shute VJ. Focus on formative feedback. Rev Educ Res. 2008;78(1):153–189.
57. Keren G. Calibration and probability judgments: Conceptual and methodological issues. Acta Psychol (Amst). 1991;77(3):217–273.
58. Muijtens H, van Mameren H, Hoogenboom E, van der Vleuten CPM. The effect of a “don’t know” option on test scores: Number-right and formula scoring compared. Med Educ. 1999;33(4):267–275. doi: 10.1046/j.1365-2923.1999.00292.x.
59. Krosnick JA, et al. The impact of “no opinion” response options on data quality: Non-attitude reduction or an invitation to satisfice? Public Opin Q. 2002;66(3):371–403.
60. Bruine de Bruin W, Parker AM, Fischhoff B. Individual differences in adult decision-making competence. J Pers Soc Psychol. 2007;92(5):938–956. doi: 10.1037/0022-3514.92.5.938.
61. Lipkus IM, Samsa G, Rimer BK. General performance on a numeracy scale among highly educated samples. Med Decis Making. 2001;21(1):37–44. doi: 10.1177/0272989X0102100105.
62. Cokely ET, Galesic M, Schulz E, Ghazal S, Garcia-Retamero R. Measuring risk literacy: The Berlin numeracy test. Judgm Decis Mak. 2012;7(1):25–47.
63. Okan Y, Garcia-Retamero R, Cokely ET. Individual differences in graph literacy: Overcoming denominator neglect in risk comprehension. J Behav Decis Making. 2011;25(4):390–401.
64. Weber EU, Blais AR, Betz NE. A domain-specific risk attitude scale: Measuring risk perceptions and risk behaviors. J Behav Decis Making. 2002;15(4):263–290.
65. Dunlap RE, Van Liere KD, Mertig AG, Jones RE. Measuring endorsement of the New Ecological Paradigm: A revised NEP scale. J Soc Issues. 2000;56(3):425–442.
66. Bostrom A, et al. Causal thinking and support for climate change policies: International survey findings. Glob Environ Change. 2012;22(1):210–222.
67. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J Pers Soc Psychol. 1988;54(6):1063–1070. doi: 10.1037//0022-3514.54.6.1063.
68. Converse JM, Presser S. Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications; 1986.
69. Schwarz N. Self-reports: How the questions shape the answers. Am Psychol. 1999;54(2):93–105.
70. Schwarz N. Cognition and Communication: Judgmental Biases, Research Methods, and the Logic of Conversation. Hillsdale, NJ: Erlbaum; 1996.
71. Neuhauser L, Paul K. In: Communicating Risks and Benefits: An Evidence-Based User’s Guide. Fischhoff B, Brewer NT, Downs J, editors. Washington, DC: Food and Drug Administration; 2011.
72. Velez P, Ashworth SD. The impact of item readability on the endorsement of the midpoint response in surveys. Surv Res Methods. 2007;1(2):69–74.
73. Kincaid JP, Fishburne RP Jr, Rogers RL, Chissom BS. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel, Research Branch Report 8-75. Millington, TN: Naval Technical Training, US Naval Air Station; 1975.
74. Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav. 2006;33(3):352–373. doi: 10.1177/1090198105277329.
75. Bruine de Bruin W. In: Perspectives on Framing. Keren G, editor. London: Taylor & Francis; 2011. pp. 303–324.
76. Ericsson KA, Fox MC. Thinking aloud is not a form of introspection but a qualitatively different methodology: Reply to Schooler (2011). Psychol Bull. 2011;137(2):351–354. doi: 10.1037/a0022388.
77. Bruine de Bruin W, et al. Expectations of inflation: The role of financial literacy and demographic variables. J Consum Aff. 2010;44(2):381–402.
78. Bruine de Bruin W, van der Klaauw W, Topa G. Inflation expectations: The biasing effect of thoughts about specific prices. J Econ Psychol. 2011;32(5):834–845.
79. Bruine de Bruin W, et al. The effect of question wording on consumers’ reported inflation expectations. J Econ Psychol. 2012;33(4):749–757.
80. Hunter J. Report on Cognitive Testing of Cohabitation Questions. Washington, DC: US Bureau of the Census; 2005.
81. McKenney NR, Bennett CE. Issues regarding data on race and ethnicity: The Census Bureau experience. Public Health Rep. 1994;109(1):16–25.
82. Kirby DB. The impact of abstinence and comprehensive sex education programs on adolescent sexual behavior. Sex Res Social Policy. 2008;5(3):18–27.
83. Schuster MA, Bell RM, Kanouse DE. The sexual practices of adolescent virgins: Genital sexual activities of high school students who have never had vaginal intercourse. Am J Public Health. 1996;86(11):1570–1576. doi: 10.2105/ajph.86.11.1570.
84. Halperin DT. Heterosexual anal intercourse: Prevalence, cultural factors, and HIV infection and other health risks, Part I. AIDS Patient Care STDS. 1999;13(12):717–730. doi: 10.1089/apc.1999.13.717.
85. Amaro H. Love, sex, and power: Considering women’s realities in HIV prevention. Am Psychol. 1995;50(6):437–447. doi: 10.1037//0003-066x.50.6.437.
86. Gutierrez L, Oh HJ, Gillmore MR. Toward an understanding of (em)power(ment) for HIV/AIDS prevention in adolescent women. Sex Roles. 2000;42(7–8):581–611.
87. Crosby RA, DiClemente RJ, Wingood GM, Rose E, Lang D. Correlates of continued risky sex among pregnant African American teens: Implications for STD prevention. Sex Transm Dis. 2003;30(1):57–63. doi: 10.1097/00007435-200301000-00012.
88. Downs JS, et al. Interactive video behavioral intervention to reduce adolescent females’ STD risk: A randomized controlled trial. Soc Sci Med. 2004;59(8):1561–1572. doi: 10.1016/j.socscimed.2004.01.032.
89. Palmgren CR, Morgan MG, Bruine de Bruin W, Keith DW. Initial public perceptions of deep geological and oceanic disposal of carbon dioxide. Environ Sci Technol. 2004;38(24):6441–6450. doi: 10.1021/es040400c.
90. Fleishman LA, Bruine de Bruin W, Morgan MG. Informed public preferences for electricity portfolios with CCS and other low-carbon technologies. Risk Anal. 2010;30(9):1399–1410. doi: 10.1111/j.1539-6924.2010.01436.x.
