Proceedings of the National Academy of Sciences of the United States of America. 2013 Aug 12;110(Suppl 3):14048–14054. doi: 10.1073/pnas.1212726110

Communicating science in politicized environments

Arthur Lupia
PMCID: PMC3752174  PMID: 23940336

Abstract

Many members of the scientific community attempt to convey information to policymakers and the public. Much of this information is ignored or misinterpreted. This article describes why these outcomes occur and how science communicators can achieve better outcomes. The article focuses on two challenges associated with communicating scientific information to such audiences. One challenge is that people have less capacity to pay attention to scientific presentations than many communicators anticipate. A second challenge is that people in politicized environments often make different choices about whom to believe than do people in other settings. Together, these challenges cause policymakers and the public to be less responsive to scientific information than many communicators desire. Research on attention and source credibility can help science communicators better adapt to these challenges. Attention research clarifies when, and to what type of stimuli, people do (and do not) pay attention. Source credibility research clarifies the conditions under which an audience will believe scientists’ descriptions of phenomena rather than the descriptions of less-valid sources. Such research can help communicators stay true to their science while making their findings more memorable and more believable to more audiences.

Keywords: belief change, civic education, political communication, science communication


Members of the scientific community share a frustration: many attempts to communicate science are badly received (1–4). This frustration is particularly evident in politicized environments: that is, settings where decisions on divisive public issues must be made.

These communicative frustrations are salient because many scientists work hard to make socially valuable discoveries. Science can help nonscientists make better decisions. However, scientists often find that their advice is ignored or willfully misinterpreted. This article seeks to help science communicators expand the set of circumstances in which they can achieve better outcomes.

In some respects, the difficulty of communicating science to broader audiences is easily explained. Scientists discover new phenomena as well as new relationships among existing phenomena. Describing these discoveries and relationships often requires new language or using existing language in unusual ways. Many nonscientists, however, find our lexicon difficult to access: they see many scientific presentations as needlessly abstract and disconnected from their lives (5, 6). Audiences who see scientific presentations in these ways have less motivation to pay attention to them (7). If such motivations are sufficiently low, seeds for communicative failure are sown.

We, as scientists and science communicators, can improve how scientific information is conveyed to policymakers and the public. One way to realize this potential is to build from a social scientific knowledge base that can help communicators develop more realistic expectations about when others will pay attention to us and when they will believe what we write and say. This knowledge base can help science communicators avoid common presentation mistakes and make it more likely that our audiences acquire relevant knowledge. We need not engage in “spin,” manipulation, or “dumbing down” our presentations to communicate more effectively. Social science reveals multiple ways for communicators to increase the likelihood that, and the range of audiences for whom, they can successfully convey scientific information.

To allow an article-length presentation, I focus primarily on two communication-related concepts: attention and source credibility. I focus on these two concepts because they are two factors over which science communicators have some measure of discretion when developing communication strategies.

Learning from scientific presentations, for example, requires that an audience pay attention to its content. For any potential learner, attention is a scarce resource. People are physically capable of paying attention to only a tiny fraction of their environment (8). As a result, every person ignores almost all of the information that nature and other people present to them. Individuals do this not because they want to but because people have relatively little control over their attentive capacity. A consequence of these capacity limits is that even the most committed listeners can attend to only a fraction of the content to which they are exposed.

A person who pays attention to new information also evaluates it. One important factor that affects such evaluations is the believability, or credibility, of its source. Social scientists use communication models and a range of experiments to clarify how potential learners assess a speaker’s credibility (9). These findings often contradict science communicators' intuitions about how others will interpret their words.

For people who seek to communicate in politicized environments, understanding source credibility at more than an intuitive level is vital. This is because, in such environments, people often hear conflicting claims about the implications of scientific findings for social problems. Complicating matters is the fact that politicized environments often induce suspicions about science communicators’ true motives or expertise. Therefore, questions arise about whether scientists can really be trusted. Research on source credibility clarifies the conditions under which audiences in politicized environments will believe what a scientist has to say.

In sum, no science communicator is immune from the fact that attention capacity limits cause individuals to forget almost everything that any scientist ever says to them or the fact that listeners evaluate a speaker’s credibility in particular ways. Understanding these phenomena, however, can help us adapt to them. Science communicators who better understand basic aspects of attention and credibility can more effectively position themselves to make their discoveries more memorable and believable to more audiences.

Attention and Motivation in Reactions to New Information

Science communicators seek to change an audience’s beliefs and to increase its members’ knowledge about scientific phenomena. By “belief,” I mean a cognitively stored association that relates objects and attributes (e.g., “Anne believes that the climate is changing.”) (10). By “belief change,” I refer to a cognitive process that results in a mind that believes different things ex post than it did ex ante (e.g., “I used to believe that the sun rotates around the earth, now I believe the opposite”). By “knowledge,” I mean the subset of beliefs that can be labeled as having positive truth-value because of their correspondence with reality. With these definitions in hand, I can restate our objective: a science communicator seeks to cause others to change their beliefs in ways that correspond to greater knowledge of a scientific finding. But, how do beliefs change?

Belief change is a product of evolving physical structures and biological processes within a brain. Belief change requires changes in the structure or performance of neurons (brain cells) within neural networks (i.e., sets of neurons that are physically connected or functionally related in a nervous system) (11). For example, if you think “red” when I say “wagon,” your reaction is a manifestation of a physical and chemical relationship between clusters and networks of brain cells that store “wagon” and “red” as relevant attributes. Suppose, for example, that you did not initially know that a wagon could be red. Suppose further that a presentation helps you to realize, and later recall, that not only can a wagon be red, but that many wagons are indeed red. Subsequent recollections of this conceptual association are a consequence of networks and clusters of red-attribute–representing brain cells changing their physical or chemical relationships to networks of wagon-attribute–representing brain cells (12). These changes can increase relevant activation potentials and, hence, alter the likelihood that the next time that the person thinks about a wagon, red will also come to mind. Belief change occurs only if parts of these associational networks receive an electrochemical fuel that stimulates physical growth in some of the networks’ brain cells or changes in chemical activity within and across these networks (13, 14). This fueling process is propagated by blood-flow variations, which themselves are propagated by the manner in which a person perceives stimuli. If a communicator wants to teach an audience to align a particular set of beliefs with a particular set of scientific findings, the words and images that the communicator presents to an audience must alter the audience members’ blood flow in ways that cause the fuel to go to the needed brain areas (15). Although this fueling process has complex properties, one property is key: fuel requires attention (15).

What people often call “attention” is associated with a concept called “working memory” (13). Working memory provides temporary storage for new information while it is being processed. The capacity of working memory is very limited. Scientists have evaluated this capacity in many ways. One famous study used reading-comprehension tests to produce a widely cited result: seven plus or minus two chunks (16). A chunk is a conceptual unit. The unit can represent a single attribute of a single object or it can bring to mind a particular relationship between attributes and objects. Although other evaluations of working memory produce different estimates, all estimates find its capacity to be of a similar order of magnitude (17).

An implication of research on working memory is that all people, whether expert or novice in a particular field, can pay attention to only a small number of stimuli at any given time. Although the number of available chunks limits every person’s ability to pay attention to new information, a common difference between experts and novices is that an expert’s few chunks store more information than a novice’s few chunks. Experts outperform novices at tasks not because their working memories produce more chunks, but because a typical expert chunk carries more information than a novice chunk (18).
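
To make the notion of a “chunk” concrete, the sketch below shows how the same nine-letter stimulus occupies nine chunks for a novice but only three for an expert who recognizes familiar groupings. The stimulus, the vocabulary, and the greedy grouping rule are illustrative assumptions of mine, not material from ref. 18.

```python
# Illustrative sketch (not from the article or ref. 18): the same stimulus
# occupies many chunks for a novice but few for an expert whose prior
# knowledge lets letters be grouped into familiar units.

def chunk(sequence, vocabulary):
    """Greedily group a letter string into known multi-letter chunks;
    anything not in the vocabulary stays a single-letter chunk."""
    chunks, i = [], 0
    while i < len(sequence):
        for size in (3, 2, 1):  # prefer the largest known grouping
            piece = sequence[i:i + size]
            if size == 1 or piece in vocabulary:
                chunks.append(piece)
                i += size
                break
    return chunks

stimulus = "FBICIANSA"  # nine letters, near the classic 7 +/- 2 limit

novice = chunk(stimulus, vocabulary=set())                  # no prior knowledge
expert = chunk(stimulus, vocabulary={"FBI", "CIA", "NSA"})  # knows the acronyms

print(novice)  # ['F', 'B', 'I', 'C', 'I', 'A', 'N', 'S', 'A'] -> 9 chunks
print(expert)  # ['FBI', 'CIA', 'NSA'] -> 3 chunks, same underlying content
```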

So, when a scientist attempts to convey a particular piece of information to another person at a particular moment, that piece of information competes for one of the person’s few available chunks with every other phenomenon to which that person can potentially pay attention. The competitors for that person’s attention include information that the scientist gave earlier in the presentation, potentially distracting attributes of objects in the room where the scientist is conveying the information, and any number of things that are not in the room, including past events and possible future occurrences that may come to mind. Compared with all of the things to which a person can possibly pay attention at a given moment, working memory’s capacity is especially small.

Which stimuli win this competition? A combination of automatic and executive control functions in the brain makes a person much more likely to concentrate on particular aspects of their environment (19). In times of distress, for example, or when a person perceives a threatening stimulus, these processes induce selective attention to external stimuli associated with the threat. Knowing this much about attention yields a simple rule that can help science communicators earn the attention of others: stimuli that a person perceives as immediately relevant to their ability to achieve high-value aspirations or ward off significant threats are far better positioned than other stimuli to win a person’s attention (20).

One widely cited study documents a representative example of this phenomenon. Öhman et al. (21) exposed experimental subjects to a sequence of 20- by 30-cm photographic grids for 1,200 ms each. In each of three experiments, subjects were asked to identify whether the photographs in a given grid all belonged to a single category (snakes, spiders, flowers, or mushrooms) or whether the images in the grid came from multiple categories. Response times in all three experiments were significantly faster when the task involved fear-relevant pictures rather than fear-irrelevant pictures (P < 0.0001). Moreover, although subjects’ ability to identify fear-irrelevant pictures was sensitive to the order of display, display order did not affect the speed at which subjects identified fear-relevant pictures. Participants in the third experiment were selected for being especially fearful of spiders or snakes. Compared with a control group of low-fear participants, these participants were far quicker to identify images of fearful objects, but no quicker in locating nonfearful objects.
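
For readers who prefer code, here is a toy sketch of the kind of response-time comparison the study reports. Every number below is invented for illustration; none comes from the study itself.

```python
# Toy sketch of the comparison reported by Öhman et al. (21): are targets
# found faster when they are fear-relevant? All numbers are invented for
# illustration; they are not the study's data.
import random
import statistics

random.seed(0)
N = 40  # hypothetical subjects per condition

# Assumed effect: fear-relevant targets found ~100 ms faster on average.
fear_relevant = [random.gauss(900, 80) for _ in range(N)]     # ms
fear_irrelevant = [random.gauss(1000, 80) for _ in range(N)]  # ms

# Welch-style t statistic computed by hand to avoid extra dependencies.
m1, m2 = statistics.mean(fear_relevant), statistics.mean(fear_irrelevant)
v1, v2 = statistics.variance(fear_relevant), statistics.variance(fear_irrelevant)
t = (m1 - m2) / ((v1 / N + v2 / N) ** 0.5)
print(f"mean difference = {m1 - m2:.0f} ms, Welch t = {t:.2f}")
```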

Such research establishes the potential benefit to science communicators of conveying materials in ways that speak directly to audience members’ affective triggers (22–24). A recent example of such a strategy is an attempt to convey an implication of climate change to a large audience of nonscientists. One likely consequence of global warming is rising sea levels. Although rising seas can be described as an abstract global phenomenon, scientists can also use models to estimate the effect of sea level rise on specific neighborhoods and communities (25, 26). Attempts to highlight these local climate change implications have gained new attention for scientific information in a number of high-traffic communicative environments (27). These presentations also have helped members of the media explain how rising seas are linked to the probability of extreme weather events, such as Hurricane Sandy and other large storms that have wreaked havoc on large metropolitan areas (28).

So far, we have established that changing beliefs requires attention and that the capacity of working memory is small in comparison with the set of things to which a person can pay attention at any typical moment. Another factor that complicates effective science communication is that speakers sometimes have misleading intuitions about the extent to which others are paying attention to them (29). A common source of such errors is found in the visual and oral cues that people offer one another when communicating (30). For example, people nod at certain moments to signal that they are paying attention to a speaker and comprehending their message. However, people who seek to act in socially desirable ways, or people who believe that offering an affirmative comprehension signal will allow them to leave an unwanted conversation, also send such signals (31). In other words, people who have become inattentive to the content of a speaker’s utterances, but who recognize that the speaker has paused or stopped speaking, often give visual cues to suggest that attention is still being paid. Sometimes speakers can detect such inattention; sometimes not (32, 33). A common result is that speakers become overconfident about the extent to which others are paying attention to them (34, 35). Providing information that pertains directly to an audience’s affective triggers, as described above, increases the likelihood of winning attention competitions and provides one way to mitigate potential negative consequences of communicative overconfidence.

Science communicators can also benefit by obtaining information about what an audience initially believes about the new information they are conveying. This claim is true because people assign meaning to the new information to which they attend by comparing it with what they already believe (36). Thus, what audience members learn from a scientific presentation is jointly influenced by the attributes of new information and the audience’s preexisting beliefs and knowledge (12). When new information is presented in ways that audience members cannot easily comprehend, the members’ prior beliefs have an increasing influence on how they interpret the new information (37).

If audience members also see such information as threatening, a common reaction is for them to generate counterarguments. That is, individuals devote mental energy to the production of reasons for discounting the relevance of, or ignoring, threatening information (38, 39). This reaction is akin to a flight response.

An experiment on public views of carbon nanotubes (CNT) reveals how a person’s prior beliefs and feelings about a phenomenon can affect their processing of subsequent information. Druckman and Bolsen (40) recruited 621 subjects at polling places in Cook County, Illinois and asked them to take an Election Day exit poll. During this poll [conducted at time 1 (T1)], subjects were randomly assigned to receive different information about CNT. Some were given positive information: that CNT can reduce energy costs. Others were given negative information: that there is a CNT-related health controversy. Ten days later (T2), the researchers conducted a follow-up interview with 206 of the participants using an Internet survey. All subjects were given identical new information about CNT, including both economic benefits (positive) and environmental risks (negative). When asked to evaluate the new information, subjects who originally received positive information were more likely to rate the new positive information as “highly effective” and were less likely to favorably evaluate the new negative information. Subjects who initially received negative information showed the opposite pattern. Thus, for these individuals (who started with low initial levels of knowledge about CNT), a small amount of information at T1 had a large impact at T2.
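
A minimal simulation of this two-wave design may help clarify its logic: subjects are randomly framed at T1, and their T2 ratings of identical new information are biased toward the assigned frame. The effect size, rating scale, and noise term are assumptions for illustration, not estimates from ref. 40.

```python
# Toy simulation of the two-wave design in ref. 40. The effect size, rating
# scale, and noise are assumptions for illustration, not study estimates.
import random

random.seed(1)

def t2_rating(t1_frame, t2_info_valence):
    """Hypothetical 0-10 rating of new information at T2, biased toward
    the frame the subject was randomly assigned at T1."""
    bias = 2.0 if t1_frame == t2_info_valence else -2.0  # assumed T1 effect
    return max(0.0, min(10.0, 5.0 + bias + random.gauss(0, 1)))

frames = [random.choice(["positive", "negative"]) for _ in range(206)]
for valence in ("positive", "negative"):
    for frame in ("positive", "negative"):
        ratings = [t2_rating(f, valence) for f in frames if f == frame]
        print(f"T1 frame={frame:8s}  mean rating of {valence:8s} "
              f"T2 info: {sum(ratings) / len(ratings):.1f}")
```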

Many communicators base presentational strategies on the premise that if they tell an audience what they know, then the inherent quality and virtue of their claims will automatically lead audiences to pay attention. The research described in this section clarifies when communicators can expect to earn an audience’s attention. The findings show that people cannot pay attention to all available information and that whether and how people pay attention to a given piece of information depends on their prior feelings about, and experiences with, the topic. Science communicators who base their strategies on these insights will be better positioned to present information that makes their science more likely to attract others' attention.

Source Credibility in Politicized Environments

If a science communicator can gain the attention of policymakers or members of the public, how will these audiences interpret the information that she or he seeks to convey? Many communicators are surprised to find that descriptions that they have offered successfully in academic contexts are met with skepticism by broader audiences. Thus, how does communication in politicized environments differ from communication in environments with which scientists are more familiar?

To clarify my answer to this question, I need to clarify the definition of a key term: politics. By “politics,” I mean the mechanisms by which societies attempt to manage conflicts that are not otherwise easily resolved. Issues that people typically perceive as “political” are ones over which salient social disagreements persist (41). When issues cease to have this quality, they tend not to be viewed as political. Child labor, for example, was once a contested political issue in American politics because people held, and were willing to publicly voice, different points of view about the propriety of children working long hours in factories (42). Early in the industrial age, children had worked on family farms and helped with other endeavors critical to life. Many people who advocated for child labor argued that it was natural, and even beneficial, for children to contribute to family income by laboring in factories and mills. Over time, however, a social consensus emerged that children should not work in factories. This consensus became codified in law and policy and is now routinely implemented in practice. Today, few Americans consider the issue political. Hence, political issues are the ones over which deep public conflicts persist.

In politicized contexts, a class of political entrepreneurs seeks leverage for favored candidates or causes. Leverage matters because political outcomes typically require the support of a coalition of actors (e.g., an electoral or legislative majority). Leverage helps entrepreneurs build and maintain supportive coalitions.

Entrepreneurs of all kinds, from candidates for national office to street-level advocates for specific policies, seek leverage through language. Potential leverage can be found in the fact that there are often multiple ways to describe an idea (43–46). Entrepreneurs often seek to describe ideas in ways that can lead more people to support their cause. Herein lies an important challenge for those who want to communicate science in politicized environments. If a political entrepreneur sees an opportunity to reinterpret a scientist’s claims in ways that can increase the entrepreneur’s leverage, we should not be surprised when the entrepreneur actively seeks to promote his or her reinterpretation.

To get a sense of just how common such attempts at reinterpretation are in political contexts, consider the 2008 election-time controversy over the phrase “lipstick on a pig.” Variations of the phrase date back to the eighteenth century (47); it refers to the idea that cosmetic changes are not sufficient to turn a bad idea into a good one. In the decade before the 2008 election, the phrase was used by many politicians to suggest that the other side’s policies could not be rescued by giving them new names. In 2004, Vice President Richard Cheney used the term to describe presidential nominee John Kerry’s defense stance:

THE VICE PRESIDENT: …Now, in the closing days of this campaign, John Kerry is running around talking tough. He’s trying every which way to cover up his record of weakness on national defense. But he can't do it. It won't work. As we like to say in Wyoming, you can put all the lipstick you want on that pig, but at the end of the day it’s still a pig. (Applause.) That’s my favorite line. (Laughter.) (48)

In 2008, presidential nominee Barack Obama conveyed a similar sentiment with respect to the relationship between presidential nominee John McCain’s policy stances and those of President George W. Bush:

SENATOR OBAMA: John McCain says he’s about change too, and so I guess his whole angle is, Watch out George Bush—except for economic policy, health care policy, tax policy, education policy, foreign policy and Karl Rove-style politics—we’re really going to shake things up in Washington. That’s not change. That’s just calling something the same thing something different. You know you can put lipstick on a pig, but it’s still a pig (49).

The 2004 use of the term “lipstick on a pig” did not generate much controversy. The same was not true in 2008. A difference between the 2004 and 2008 uses is that Obama’s use occurred just days after the Republican Party had put forward its first female vice presidential nominee, Governor Sarah Palin of Alaska. During her stump speeches, Governor Palin featured “lipstick” in a widely seen, self-referential punch line: “You know the difference between a hockey mom and a pit bull? Lipstick” (50).

The ensuing days featured charges and countercharges by political entrepreneurs about the true meaning of Obama’s remark. Congressperson Thelma Drake (R-VA), for example, issued a press release (51) interpreting Senator Obama’s words as follows: “Rather than delivering on his promise of hope and change, Barack Obama sunk to a new low with his remarks today regarding Gov. Sarah Palin.” Reports suggested that many people who had already been supporting McCain were similarly upset by the remarks, but those who were already supporting Obama thought that his remarks were being taken out of context (52). This was one of many instances where politically motivated individuals (of both major political parties) attempted to convert the lack of an exact relationship between concept and language into leverage for their favored causes.

A large body of research examines how people choose what and whom to believe in situations where speakers compete to influence public perceptions (53, 54). A key concept in this research is source credibility, the extent to which an audience perceives a communicator as someone whose words they would benefit from believing. People often assume that elements of a speaker or writer’s true character (e.g., honesty), demographic attributes (e.g., being a woman), or academic pedigree (e.g., “I have a PhD in physics” or “I have written highly cited work on climate change”) are sufficient for a person to be considered a credible source of information. Research shows this assumption is incorrect. Although there are conditions under which such factors can be correlated with source credibility, these factors do not determine source credibility.

Source credibility is more accurately described as a perception that is bestowed by an audience (53). When an audience’s perception of a writer or speaker differs from the writer or speaker’s true attributes, the perception, and not the reality, determines the extent to which the audience will believe the speaker. Social scientists use experiments and models to study which factors make a source credible. Experiments document the kinds of attributes that differentiate speakers who change a listener’s beliefs from speakers who cannot change beliefs, even if they say the same thing (54). Models clarify how various combinations of speaker attributes, listener perceptions, incentives, and other contextual factors affect the degree to which one person is willing to believe another (55).

Models and experiments, when used together, clarify the factors that most influence source credibility. To see why this is the case, consider that many source credibility experiments identify observable speaker attributes that correlate with source credibility. Most of these experiments vary a single value of a single factor to document such a correspondence. Over time, an increasing number of attributes, such as sex, celebrity status, physical attractiveness, and partisan identification, have been shown to correspond to increased source credibility in controlled settings (54, 56). Most communicators, however, have multiple attributes upon which audiences can base credibility judgments. The extent to which an audience will find a speaker credible depends on how the audience weighs these attributes in its perceptions.

Models clarify these weighting dynamics and can help relate the findings of individual source credibility experiments to multifaceted communication contexts. In these models, listener perceptions, speaker attributes, and potentially relevant environmental factors are given mathematical analogs. Scholars use these analogs to identify what kinds of communication outcomes are, and are not, logically reconcilable with thousands of possible combinations of speaker attributes, listener perceptions, and contextual variables. These models produce general theorems and testable hypotheses about the conditions under which one person will find another credible.

One such model (53) characterizes an interaction between a speaker and a listener. Here, the speaker is the “source,” and “credibility” reflects the extent to which the listener believes what the speaker says. In the model, the listener has a decision to make (e.g., to support or oppose a particular policy proposal). The speaker may possess information that can help the listener make a more knowledgeable decision.

A focal variable in the model is the speaker’s stake in the listener’s decision. The speaker may, for example, benefit from leading the listener to make a decision that the listener would not make if they were better informed. In other words, certain values of key variables in the model give the speaker an incentive to mislead the listener. For other values of these variables, the speaker would want to convey truthful information. The speaker’s and listener’s well-being are represented by utility functions. Utility functions in this model are defined with respect to the possible consequences of the listener’s decision for the listener and the speaker. A player receives higher utility when communication leads to an outcome that he or she prefers.

This model produces a set of theorems and testable hypotheses about conditions under which the listener will find the speaker credible. To describe these findings with greater accuracy, a few definitions are needed. “Commonality of interests” is the extent to which the speaker’s and the listener’s utility functions overlap. In other words, the listener and speaker have common interests when they want similar outcomes from the speaker’s communicative attempt. One factor that the model shows to be critical to understanding source credibility is the listener’s perception of the extent to which she and the speaker have common interests. Another critical factor is perceived relative expertise. In the model, the listener has uncertain beliefs about the consequences of her decision. These beliefs are represented as probability distributions over the set of possible consequences. “Relative expertise” refers to the extent to which the speaker knows more about these consequences than the listener. We say that a speaker has relative expertise when the probability distribution that characterizes the speaker’s belief about the consequences of the listener’s action places more mass on the true consequence than does the probability distribution that characterizes the listener’s belief.
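
One way to write these definitions formally is sketched below; the notation is introduced here for clarity and should not be read as the exact formalism of ref. 53.

```latex
% Sketch formalization of "relative expertise" and "common interests".
% Notation is mine, introduced for illustration; not ref. 53's exact setup.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let $\Omega$ denote the set of possible consequences of the listener's
decision, with true consequence $\omega^{*} \in \Omega$, and let $p_{L}$ and
$p_{S}$ be the probability distributions on $\Omega$ that represent the
listener's and speaker's beliefs. The speaker has \emph{relative expertise}
when her belief places more mass on the truth:
\[
  p_{S}(\omega^{*}) > p_{L}(\omega^{*}).
\]
With utility functions $u_{L}, u_{S}\colon A \times \Omega \to \mathbb{R}$
defined over the listener's actions $A$ and consequences $\Omega$, interests
are \emph{common} to the extent that the two players prefer the same actions,
for instance when
\[
  \operatorname*{arg\,max}_{a \in A} \mathbb{E}_{p}\!\left[u_{L}(a,\omega)\right]
  = \operatorname*{arg\,max}_{a \in A} \mathbb{E}_{p}\!\left[u_{S}(a,\omega)\right]
\]
under the relevant belief $p$.

\end{document}
```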

With these definitions in hand, we can draw from the model’s main theorem a set of testable hypotheses about source credibility. The key point to notice in these statements is that it is the listener’s perception of interest commonality and relative expertise, rather than the real values of these factors, that directly influence source credibility:

  • Actual relative expertise is neither necessary nor sufficient for source credibility.

  • Actual common interests are neither necessary nor sufficient for source credibility.

  • The following conditions are individually necessary and collectively sufficient for source credibility: the listener must perceive the speaker to have sufficiently common interests and the listener must perceive the speaker to have relative expertise. In the presence of external forces, such as sufficiently high verification likelihoods, penalties for lying, or communication costs, the extent to which perceived common interests are required decreases.

The last hypothesis describes a set of external forces that can affect how interest commonality and relative expertise affect source credibility. Penalties for lying, the threat that a claim will be verified, and any factors that make communication costly are attributes of a communicative environment that can affect a speaker’s motivation and incentives. These factors can induce a speaker who would otherwise seek to mislead a listener to provide truthful information instead. For example, a listener who encounters a speaker in the context of significant penalties for lying (e.g., perjury fines) can infer that the speaker is either telling the truth or telling a kind of lie that makes the fine worth paying. If the listener believes that no lie would be worth the penalty to this speaker, then the penalty’s presence can substitute for the perception of common interests as a sufficient reason for the listener to believe the speaker.
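
A one-line version of this inference, written here in notation of my own rather than the model’s:

```latex
% Worked illustration of the penalty-for-lying inference; notation is mine.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Let $g > 0$ be the most a lie could gain the speaker relative to telling the
truth, and let $F \geq 0$ be a penalty imposed on lies with certainty. The
speaker lies only when the lie is worth the fine:
\[
  g - F > 0 \iff g > F.
\]
A listener who believes that every feasible gain satisfies $g \leq F$ can
treat the message as truthful even without perceiving common interests: the
penalty screens out every lie worth telling.

\end{document}
```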

Experimental research demonstrates the predictive accuracy of these hypotheses relative to other common explanations of source credibility (53, 57). This research takes place in laboratories and in more realistic communication environments. In one set of laboratory experiments, subjects predict the outcomes of a series of hidden coin tosses (54). Subjects were told that they would be paid (typically 50 cents to $1) for each correct prediction. A control group made predictions with no further information. In treatment groups, a randomly selected subject (henceforth, “the speaker”) advised all other subjects about which prediction to make for a given coin toss. Multiple speaker and environmental attributes were varied across treatments. These variations included the probability that the speaker was paid when others made correct (or incorrect) predictions (i.e., the extent of common interests), the probability that the speaker would observe the outcome of the coin flip before offering advice (i.e., the extent of relative expertise), and other factors such as the existence and magnitude of penalties for lying, probabilistic verification threats, and the costs associated with sending signals to others. Across all treatments, and when analyzed with respect to prediction sequences over 10 independent coin flips (where there are 2^10 possible prediction sequences), the hypotheses listed above characterize subject responses to the speaker at levels that are not only far above the level expected by chance but also more accurate than other common explanations of source credibility. This result shows that perceptions of interest commonality and perceived relative expertise explain when and how subjects will follow the speaker’s advice.
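
To see the logic of these treatments, consider a minimal simulation in the spirit of the design just described. The probabilities and the speaker’s behavioral rule (aligned speakers report truthfully; misaligned speakers invert their report) are my assumptions for illustration, not the published protocol or payoffs.

```python
# Minimal simulation in the spirit of the coin-prediction experiments.
# The probabilities and the speaker's behavioral rule are assumptions for
# illustration; they are not the published protocol or payoffs.
import random

random.seed(2)

def advice_accuracy(expertise, common_interest, n=10_000):
    """Fraction of trials in which the speaker's advice matches the coin.

    expertise: probability the speaker observes the coin before advising.
    common_interest: probability the speaker gains when the listener is right.
    """
    correct = 0
    for _ in range(n):
        coin = random.choice("HT")
        informed = random.random() < expertise
        aligned = random.random() < common_interest
        if not informed:
            advice = random.choice("HT")  # uninformed advice is pure noise
        elif aligned:
            advice = coin                 # aligned speakers report truthfully
        else:
            advice = "T" if coin == "H" else "H"  # misaligned speakers invert
        correct += advice == coin
    return correct / n

for e, c in [(0.9, 0.9), (0.9, 0.1), (0.1, 0.9)]:
    print(f"expertise={e}, common interests={c}: "
          f"advice correct {advice_accuracy(e, c):.0%} of the time")
```

Under these assumed rules, advice tracks the coin only when expertise and aligned interests are jointly high, which mirrors the hypotheses: a listener has reason to follow a speaker only when she perceives both.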

A similar dynamic has been documented in less-controlled settings (9). In one experiment, 1,464 participants in a random digit-dialed telephone survey were exposed to randomly selected combinations of political commentators (e.g., Rush Limbaugh or Phil Donahue or no one) and issue positions (e.g., supporting or opposing expanded spending on prisons or no position). Subjects were then asked to state their own position on such issues and also asked to answer several questions about the commentator to whom they were exposed. Subject perceptions of the commentator’s interest commonality and relative expertise on the issue were the primary determinants of whether or not the subject’s issue position followed that of their randomly assigned speaker. Other factors commonly associated with persuasiveness in political settings, such as partisan identification or political ideology, had no significant explanatory power once perceived common interests and perceived expertise were accounted for. The opposite was not true: perceived common interests and perceived relative expertise retained significant associations even after accounting for party or ideology, a result that attests to the theorem’s primacy in explaining source credibility (53).

Collectively, such models and experiments yield a different understanding of source credibility than the previous empirical literature. As mentioned above, the literature on source credibility contains many published experiments that vary a single source attribute at a time. Over time, this literature has documented an expansive number of observations demonstrating correlations between observable speaker attributes, such as sex and height, and persuasion. The model and experiments just presented identify continuous, interactive, and contingent logical relationships among these variables. Collectively, they show that personal attributes matter only if they manifest as indicators of the two basic source effects: perceived common interests and perceived relative expertise. If the personal attribute in question, sex for example, is present in a context where an audience does not view sex as informative about interest commonality or relative expertise, sex will not increase credibility.

These findings imply that science communicators can establish source credibility by taking the time to relate their own interest in a scientific problem to that of their audience. An example of this strategy is found in the opening minutes of the Geoffrey Haines-Stiles–produced television program, Earth: The Operators’ Manual (58). Geologist Richard Alley is the program’s host. The program is an accessible and visually striking presentation about climate change’s causes and consequences.

In the program’s opening minutes, Alley describes his background and why he cares about his topic. This vignette is structured to establish Alley’s credibility, particularly among potentially skeptical audiences. In it, Alley reveals himself to have valuable expertise on the topic, as well as common interests with typically skeptical groups (58):

I’m a registered Republican, play soccer on Saturday, and go to church on Sundays. I’m a parent and a professor. I worry about jobs for my students and my daughter’s future. I’ve been a proud member of the U.N. Panel on Climate Change and I know the risks. I’ve worked for an oil company, and know how much we all need energy. And the best science shows we’ll be better off if we address the twin stories of climate change and energy. And that the sooner we move forward, the better.

Key moments in this introduction are Alley’s identification as a Republican and his description of himself as doing things that are associated with both environmental scientists and people who are often skeptical of such science (e.g., working for an oil company). Conveying such facts can help counter common stereotypes of environmental scientists as too partisan or too idealistic to convey climate science’s implications objectively. Consider, by contrast, a presentation on the same topic that does not relate its content to an audience’s core concerns and leaves the communicator’s motives a mystery. If the audience is not predisposed to believe the scientist, the presentation needs to give them another reason to do so. In such cases, actions such as Alley’s can help establish common interests.

The research on source credibility suggests that emphasizing common interests and relative expertise can help science communicators more effectively convey their findings in politicized environments. Credibility is particularly important when scientists can expect political entrepreneurs to try to reinterpret their words. These reinterpretations can come in the form of exaggeration, by entrepreneurs who want to use an inflated version of a finding to support a favored cause, or in the form of relabeling disliked findings as a product of “junk science” (59, 60). The rules of political combat permit such reinterpretation: an entrepreneur is not obligated to have read the underlying studies or even to have an accurate idea of what the research in question actually does.

In cases where entrepreneurs attempt to reinterpret scientific information, how do prospective audiences choose which version of events to believe? Scientists who can demonstrate that they share important interests with their audience, and who have conducted themselves in ways that audiences correlate with expertise (e.g., demonstrating that she or he has conducted the research process in a transparent and replicable manner; being able to demonstrate that she or he has used similar methods to produce actionable and reliable findings in the past) can give audiences a reason to believe their explanations rather than those of entrepreneurial reinterpreters who seek to mislead. Scientists who proceed in this manner, even if charged by entrepreneurs as promulgating “junk science,” can increase the probability that audiences who want their beliefs to be consistent with scientific knowledge will see them as credible information sources.

Conclusion

Research on attention and source credibility clarifies how people react to presentations of scientific information. Focal themes in this research show the value of understanding, and relating scientific findings to, a target audience’s existing concerns and beliefs. With such knowledge in hand, there is expanded potential for producing communicative outcomes that are more likely to help more audiences reconcile their beliefs and decisions with scientific knowledge. If we take the time to make presentations that produce relevant and credible new memories for our audiences, we can help them to replace false beliefs with knowledge that scientists have evaluated and validated. Our claims can be memorable and persuasive while staying true to the science that we have discovered.

Acknowledgments

I thank Logan S. Casey, Kristyn L. Karl, Spencer Piston, Timothy J. Ryan, and Christopher Skovron for research assistance and editorial suggestions; Erika Franklin Fowler, Elisabeth R. Gerber, and two anonymous referees for helpful editorial comments; and Baruch Fischhoff and Barbara Kline Pope for advice regarding the presentation upon which this article is based. A.L. is the Hal R. Varian Collegiate Professor of Political Science at the Institute for Social Research.

Footnotes

The author declares no conflict of interest.

This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “The Science of Science Communication,” held May 21–22, 2012, at the National Academy of Sciences in Washington, DC. The complete program and audio files of most presentations are available on the NAS Web site at www.nasonline.org/science-communication.

This article is a PNAS Direct Submission. B.F. is a guest editor invited by the Editorial Board.

References

  • 1. Wynne B. Public engagement as a means of restoring public trust in science—Hitting the notes, but missing the music? Community Genet. 2006;9(3):211–220. doi: 10.1159/000092659.
  • 2. Maddox J. Wilful public misunderstanding of genetics. Nature. 1993;364(6435):281. doi: 10.1038/364281a0.
  • 3. Oreskes N, Conway EM. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press; 2008.
  • 4. Ward B. Scientists frustrated in climate change debate. 2011. Available at www.ft.com/intl/cms/s/0/82eac5ca-1a96-11e1-ae14-00144feabdc0.html#axzz28ZaBH7Dh. Accessed October 6, 2012.
  • 5. The Pew Research Center for the People and the Press, in collaboration with The American Association for the Advancement of Science. Scientific achievements less prominent than a decade ago: Public praises science; scientists fault public, media. 2009. Available at www.people-press.org/files/legacy-pdf/528.pdf. Accessed October 7, 2012.
  • 6. Horst M. Taking our own medicine: On an experiment in science communication. Sci Eng Ethics. 2011;17(4):801–815. doi: 10.1007/s11948-011-9306-y.
  • 7. Crookes G, Schmidt RW. Motivation: Reopening the research agenda. Lang Learn. 1991;41(4):469–512.
  • 8. Churchland PS, Sejnowski TJ. The Computational Brain. Cambridge, MA: MIT Press; 1992.
  • 9. Lupia A. In: Thinking About Political Psychology. Kuklinski JH, editor. New York: Cambridge Univ Press; 2002. pp. 51–88.
  • 10. Fishbein M, Ajzen I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley; 1975.
  • 11. Collins AM, Loftus EF. A spreading activation theory of semantic processing. Psychol Rev. 1975;82(6):407–428.
  • 12. Shanks DR. Learning: From association to cognition. Annu Rev Psychol. 2010;61:273–301. doi: 10.1146/annurev.psych.093008.100519.
  • 13. Bjork RA. In: Attention and Performance XVII: Cognitive Regulation of Performance: Interaction of Theory and Application. Gopher D, Koriat A, editors. Cambridge, MA: MIT Press; 1999. pp. 435–459.
  • 14. Becker JT, Morris RG. Working memory(s). Brain Cogn. 1999;41(1):1–8. doi: 10.1006/brcg.1998.1092.
  • 15. Pessoa L, Kastner S, Ungerleider LG. Attentional control of the processing of neural and emotional stimuli. Brain Res Cogn Brain Res. 2002;15(1):31–45. doi: 10.1016/s0926-6410(02)00214-8.
  • 16. Miller GA. The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychol Rev. 1956;63(2):81–97.
  • 17. Baddeley A. Working memory: Theories, models, and controversies. Annu Rev Psychol. 2012;63:1–29. doi: 10.1146/annurev-psych-120710-100422.
  • 18. Larkin J, McDermott J, Simon DP, Simon HA. Expert and novice performance in solving physics problems. Science. 1980;208(4450):1335–1342. doi: 10.1126/science.208.4450.1335.
  • 19. Matthews G, Wells A. In: Handbook of Cognition and Emotion. Dalgleish T, Power MJ, editors. West Sussex, UK: John Wiley and Sons; 1999. pp. 171–192.
  • 20. Petersen SE, Posner MI. The attention system of the human brain: 20 years after. Annu Rev Neurosci. 2012;35:73–89. doi: 10.1146/annurev-neuro-062111-150525.
  • 21. Öhman A, Flykt A, Esteves F. Emotion drives attention: Detecting the snake in the grass. J Exp Psychol Gen. 2001;130(3):466–478. doi: 10.1037//0096-3445.130.3.466.
  • 22. Berridge KC. Motivation concepts in behavioral neuroscience. Physiol Behav. 2004;81(2):179–209. doi: 10.1016/j.physbeh.2004.02.004.
  • 23. Kahan DM, Jenkins-Smith H, Braman D. Cultural cognition of scientific consensus. J Risk Res. 2010;14(2):147–174.
  • 24. Andreasen AR. Marketing Social Change: Changing Behavior to Promote Health, Social Development, and the Environment. New York: Jossey-Bass; 1995.
  • 25. Tebaldi C, Strauss BH, Zervas CE. Modeling sea level rise impacts on storm surges along US coasts. Environ Res Lett. 2012;7(1):014032.
  • 26. Spence A, Pidgeon N. Framing and communicating climate change: The effects of distance and outcome frame manipulations. Glob Environ Change. 2010;20(4):656–667.
  • 27. Freedman A. Senate hearing focuses on threat of sea level rise. 2012. Available at http://sealevel.climatecentral.org/news/senate-climate-change-hearing-focuses-on-sea-level-rise/. Accessed April 17, 2013.
  • 28. Gillis J. Sea level rises seen as threat to 3.7 million. New York Times, 2012, p A1. Available at www.nytimes.com/2012/03/14/science/earth/study-rising-sea-levels-a-risk-to-coastal-states.html.
  • 29. Pronin E, Gilovich T, Ross L. Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychol Rev. 2004;111(3):781–799. doi: 10.1037/0033-295X.111.3.781.
  • 30. Frischen A, Bayliss AP, Tipper SP. Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychol Bull. 2007;133(4):694–724. doi: 10.1037/0033-2909.133.4.694.
  • 31. Klein JT, Shepherd SV, Platt ML. Social attention and the brain. Curr Biol. 2009;19(20):R958–R962. doi: 10.1016/j.cub.2009.08.010.
  • 32. Keysar B, Henly AS. Speakers’ overestimation of their effectiveness. Psychol Sci. 2002;13(3):207–212. doi: 10.1111/1467-9280.00439.
  • 33. van Boven L, Kruger J, Savitsky K, Gilovich T. When social worlds collide: Overconfidence in the multiple audience problem. Pers Soc Psychol Bull. 2000;26(5):619–628.
  • 34. Johnson DDP, Fowler JH. The evolution of overconfidence. Nature. 2011;477(7364):317–320. doi: 10.1038/nature10384.
  • 35. Sharot T. The Optimism Bias: A Tour of the Irrationally Positive Brain. New York: Pantheon; 2011.
  • 36. Craik FIM, Lockhart RS. Levels of processing: A framework for memory research. J Verbal Learn Verbal Behav. 1972;11(6):671–684.
  • 37. Posner MI, Rothbart MK. In: Foundations of Social Neuroscience. Cacioppo JT, et al., editors. Cambridge, MA: MIT Press; 2002. pp. 215–234.
  • 38. Krohne HW. In: Attention and Avoidance: Strategies in Coping with Aversiveness. Krohne HW, editor. Ashland, OH: Hogrefe and Huber; 1993. pp. 19–50.
  • 39. Green MC, Brock TC. The role of transportation in the persuasiveness of public narratives. J Pers Soc Psychol. 2000;79(5):701–721. doi: 10.1037//0022-3514.79.5.701.
  • 40. Druckman JN, Bolsen T. Framing, motivated reasoning, and opinions about emerging technologies. J Commun. 2011;61(4):658–688.
  • 41. Lupia A. Evaluating political science research: Information for buyers and sellers. PS: Pol Sci and Politics. 2000;33(1):7–13.
  • 42. Basu K. Child labor: Cause, consequence, and cure, with remarks on international labor standards. J Econ Lit. 1999;37(3):1083–1119.
  • 43. Moe TM. Political institutions: The neglected side of the story. J Law Econ Organ. 1990;6(6):213–253.
  • 44. Harnad S. The symbol grounding problem. Physica D. 1990;42(1–3):335–346.
  • 45. Fauconnier G, Turner M. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books; 2002.
  • 46. Daft RL, Weick KE. Toward a model of organizations as interpretation systems. Acad Manage Rev. 1984;9(2):284–295.
  • 47. Zimmer B. Who first put “lipstick on a pig”? Origins of the porcine proverb. 2008. Available at www.slate.com/articles/news_and_politics/explainer/2008/09/who_first_put_lipstick_on_a_pig.html. Accessed July 9, 2012.
  • 48. Cheney R. Vice President’s remarks in Los Lunas, New Mexico. 2004. Available at http://georgewbush-whitehouse.archives.gov/news/releases/2004/10/20041031-7.html. Accessed July 9, 2012.
  • 49. Chozick A. Obama puts different twist on lipstick. 2008. Available at http://blogs.wsj.com/washwire/2008/09/09/obama-attacks-gop-tickets-mantra-of-change/. Accessed July 9, 2012.
  • 50. Palin S. Palin’s speech at the Republican convention. 2008. Available at http://elections.nytimes.com/2008/president/conventions/videos/transcripts/20080903_PALIN_SPEECH.html. Accessed July 9, 2012.
  • 51. Applegate A. Rep. Drake criticizes Obama for ‘lipstick on a pig’ remark. 2008. Available at http://hamptonroads.com/2008/09/rep-drake-criticizes-obama-lipstick-pig-remark. Accessed October 8, 2012.
  • 52. Haslam N, Loughnan S, Sun P. Beastly: What makes animal metaphors offensive? J Lang Soc Psychol. 2011;30(3):311–325.
  • 53. Lupia A, McCubbins MD. The Democratic Dilemma: Can Citizens Learn What They Need to Know? New York: Cambridge Univ Press; 1998.
  • 54. Pornpitakpan C. The persuasiveness of source credibility: A critical review of five decades’ evidence. J Appl Soc Psychol. 2004;34(2):243–281.
  • 55. Sobel J. In: Encyclopedia of Complexity and System Science. Meyers R, editor. New York: Springer; 2009. pp. 8125–8139.
  • 56. Arceneaux K. Can partisan cues diminish democratic accountability? Polit Behav. 2008;30(2):139–160.
  • 57. Boudreau C. Closing the gap: When do cues eliminate differences between sophisticated and unsophisticated citizens? J Polit. 2009;71(3):964–976.
  • 58. Passport to Knowledge/Geoff Haines-Stiles Productions, Inc. Earth: The Operators’ Manual [television program]. 2011. Available at www.earththeoperatorsmanual.com. Accessed July 9, 2012.
  • 59. Neff RA, Goldman LR. Regulatory parallels to Daubert: Stakeholder influence, “sound science,” and the delayed adoption of health-protective standards. Am J Public Health. 2005;95(Suppl 1):S81–S91. doi: 10.2105/AJPH.2004.044818.
  • 60. Pielke RA Jr. When scientists politicize science: Making sense of controversy over The Skeptical Environmentalist. Environ Sci Policy. 2004;7(5):405–417.
