Journal of Communication. 2018 Aug 28;68(5):994–997. doi: 10.1093/joc/jqy044

Perceived Message Effectiveness Meets the Requirements of a Reliable, Valid, and Efficient Measure of Persuasiveness

Joseph N. Cappella
PMCID: PMC6241506  PMID: 30479403

In every science, the measurement of core quantities requires valid tools: usually a set of procedures or operations. Measurement procedures may vary considerably for assessing the same core attribute. For example, the attribute of physical distance (or length) can be measured in many ways, including the familiar micrometer, ruler, and tape measure, but also the less familiar infrared Helium–Xenon laser interferometry, X-raying opaque materials, Gunter’s chain for surveying (circa 1620), radio navigation using transponders, and rangefinders (as deployed in World War II), among many other techniques.

All these tools measure the distance between two points in space, comparing the measured distance to an established standard to obtain length. Some procedures are very precise; some are expensive; some are cheap and easy; and some are designed for specific applications and are necessarily inappropriate for others. All of them need to meet core criteria that we might identify as reliability and validity. Using a ruler to measure very large or very small distances will show the ruler to be imprecise. Using laser-based optical techniques to carry out simple measurements of a person’s height will be very precise, but very expensive and resource intensive.

The bottom line is obvious: we need measurement tools (i.e., procedures) that are reliable (consistent), valid (accurate), and efficient (precise enough) for the task at hand. Denying that a ruler is a good measure of distance because it cannot determine the diameter of the nucleus of uranium 238 is silly, because it is very useful in a wide variety of other tasks where its reliability, validity, and low resource consumption are clear. So it is with perceived message effectiveness (PME) and its close cousins, such as perceived argument strength (PAS).

O’Keefe’s (2018) conclusions about PME lead us astray. PME is a useful tool for the measurement of how persuasive a message can be when applied to the right target population, a large set of messages, or messages that vary in their characteristics and in PME. Our lab has generated evidence of PME’s utility, most of which was not included in O’Keefe’s (2018) meta-analysis.1

One group of studies uses PME (or PAS) to sort messages by their persuasive impact, but tests that impact on biobehavioral outcomes rather than self-reported psychosocial outcomes. A second group uses sets of randomly assigned messages that vary in their aggregate PME, showing causal impact on behavioral outcomes.

Perceived Message Effectiveness Predicts Biobehavioral Outcomes

Messages varying in PME (or PAS) have had predictable main effects on heart rate and skin conductance (Kang, Cappella, Strasser, & Lerman, 2009; Strasser et al., 2009), cotinine levels (Wang et al., 2013), neural activation (Weber, Huskey, Mangus, Westcott-Baker, & Turner, 2014), and genetic differences and brain reactivity (Falcone et al., 2011). Similarly, messages varying in PME (or PAS) have had predictable moderating effects on visual attention (Sanders-Jackson et al., 2011), objective memory (Lee & Cappella, 2013), and physiological responding (Kang et al., 2009). In all these studies, messages were selected in advance from a large pool using PME and PAS measures on separate but parallel populations; messages so selected were tested subsequently in the targeted population.
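The selection procedure described above — rating a large pool of messages on PME (or PAS) in a separate but parallel sample, then choosing messages for subsequent testing in the target population — can be sketched in code. This is an illustrative sketch, not the authors' actual procedure; the function name, the message identifiers, and the 1–5 rating scale are all hypothetical.

```python
# Illustrative sketch (not the studies' code) of pool-based message selection:
# rate every message on PME in a parallel pretest sample, rank by mean PME,
# and carry the highest- and lowest-rated messages forward for testing.
from statistics import mean

def select_messages(ratings, n_high, n_low):
    """ratings: {message_id: [per-respondent PME scores, e.g. 1-5 Likert]}.
    Returns (high_pme_ids, low_pme_ids), each ranked by mean PME."""
    ranked = sorted(ratings, key=lambda m: mean(ratings[m]), reverse=True)
    return ranked[:n_high], ranked[-n_low:]

# Hypothetical pretest ratings from a parallel sample of smokers.
pool = {
    "psa_a": [4, 5, 4, 5],   # mean 4.50 -> strong candidate
    "psa_b": [2, 1, 2, 2],   # mean 1.75
    "psa_c": [3, 3, 4, 3],   # mean 3.25
    "psa_d": [1, 2, 1, 1],   # mean 1.25 -> weak candidate
}
high, low = select_messages(pool, n_high=1, n_low=1)
print(high, low)  # → ['psa_a'] ['psa_d']
```

The point of the design is that selection happens on a separate sample, so the PME-based sorting is not contaminated by the outcome measurement in the target population.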

Aggregate Perceived Message Effectiveness Predicts Behavioral Outcomes

Bigsby, Cappella, and Seitz (2013) exposed smokers to four anti-smoking ads randomly selected from a large pool, de facto creating variable amounts of “persuasive impact” as measured by aggregate scores of PME. The degree of aggregate PME predicted intentions to quit and to reduce consumption of tobacco. A similar procedure was conducted with exposure to a random selection of tobacco warning labels and intentions to quit (Morgan, Sutton, Yang, & Cappella, 2018). In a third study, anti-smoking PSAs tailored to be high in PME for the individual smoker predicted intentions and subsequent quitting behaviors (Kim, Yang, Kim, & Cappella, 2017).
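The aggregate-PME logic of these designs can be sketched as follows. This is a minimal sketch under assumed details (function names and pool values are hypothetical, and the number of ads per participant simply follows the four-ad example above): each participant views k ads drawn at random from a pretested pool, and the mean PME of the drawn set serves as that participant's "dose" of persuasive impact, which can then be related to quit intentions.

```python
# Minimal sketch (assumed details, not the studies' code) of the
# aggregate-PME design: random draws from a pretested pool create
# between-participant variation in aggregate PME.
import random
from statistics import mean

def aggregate_pme(pool_pme, k, rng):
    """pool_pme: {ad_id: pretested mean PME for that ad}.
    Draws k distinct ads at random; returns (drawn ids, their mean PME)."""
    drawn = rng.sample(sorted(pool_pme), k)
    return drawn, mean(pool_pme[a] for a in drawn)

# Hypothetical pretested pool of anti-smoking ads.
pool = {"ad_1": 4.2, "ad_2": 2.1, "ad_3": 3.6, "ad_4": 1.8,
        "ad_5": 4.8, "ad_6": 2.9}
drawn, dose = aggregate_pme(pool, k=4, rng=random.Random(0))
print(drawn, round(dose, 2))
```

Because assignment of ads is random, differences in aggregate PME across participants are exogenous, which is what licenses the causal reading of the PME–intention relationship.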

Conclusion

These studies indicate that PME (and PAS) are valid indicators of effective messages, helpful in guiding the selection of messages for theory-testing and campaign implementation. Researchers can use PME (and PAS) to sort messages that vary sufficiently in objective attributes and perceived effectiveness. Trying to differentiate infinitesimal differences between messages with a blunt (but efficient) instrument is a fool’s errand. But so is trying to differentiate the effectiveness of many different messages, one message at a time, using gold standards requiring large samples and long-term behavioral outcomes. Let’s not promote either kind of foolishness.

Communication research into message effects must have a tool like PME (and PAS), because scientific research needs efficient shortcuts that are reliable and predictively valid. Just as physical scientists have developed a variety of procedures to assess length and would not employ the most precise procedure every time distance is to be measured, so it is with PME. These measurement tools do not need to be infinitely precise, and should not be deployed as if they are. PME measures use resources efficiently, allowing the evaluation of many message implementations before more careful (and expensive) testing is carried out.

Acknowledgments

Grant P50 CA179546-03 from the National Cancer Institute supported time spent writing this paper. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

1. The Bigsby, Cappella, and Seitz (2013) article is included in O’Keefe’s reference bibliography.

References

  1. Bigsby E., Cappella J. N., & Seitz H. H. (2013). Efficiently and effectively evaluating public service announcements: Additional evidence for the utility of perceived effectiveness. Communication Monographs, 80(1), 1–23. doi:10.1080/03637751.2012.739706
  2. Falcone M., Jepson C., Sanborn P., Cappella J. N., Lerman C., & Strasser A. A. (2011). Association of BDNF and COMT genotypes with cognitive processing of anti-smoking PSAs. Genes, Brain & Behavior, 10(8), 862–867. doi:10.1111/j.1601-183x.2011.00726.x
  3. Kang Y., Cappella J. N., Strasser A., & Lerman C. (2009). The effect of smoking cues in antismoking advertisements on smoking urge and psychophysiological reactions. Nicotine & Tobacco Research, 11(3), 254–261. doi:10.1093/ntr/ntn033
  4. Kim H. S., Yang S., Kim M., & Cappella J. N. (2017, June). Assessing the effectiveness of recommendation algorithms for health message design: An experiment. Paper presented at the International Conference on Computational Social Science, Evanston, IL.
  5. Lee S., & Cappella J. N. (2013). Distraction effects of smoking cues in antismoking messages: Examining resource allocation to message processing as a function of smoking cues and argument strength. Media Psychology, 16(2), 154–176. doi:10.1080/15213269.2012.755454
  6. Morgan J. C., Sutton J. A., Yang S., & Cappella J. N. (2018). Impact of graphic warning labels on intentions to use alternate tobacco products. Manuscript submitted for publication.
  7. O’Keefe D. J. (2018). Message pretesting using assessments of expected or perceived persuasiveness: Evidence about diagnosticity of relative actual persuasiveness. Journal of Communication, 68, 120–142. doi:10.1093/joc/jqx009
  8. Sanders-Jackson A. N., Cappella J. N., Linebarger D. L., Piotrowski J., O’Keeffe M., & Strasser A. A. (2011). Visual attention to anti-smoking PSAs: Smoking cues versus other attention-grabbing features. Human Communication Research, 37, 275–292. doi:10.1111/j.1468-2958.2010.01402.x
  9. Strasser A. A., Cappella J. N., Jepson C., Fishbein M., Tang K. Z., Han E., & Lerman C. (2009). Experimental evaluation of antitobacco PSAs: Effects of message content and format on physiological and behavioral outcomes. Nicotine & Tobacco Research, 11(3), 293–302. doi:10.1093/ntr/ntn026
  10. Wang A. L., Loughead J. W., Strasser A. A., Ruparel K., Romer D. R., Blady S. J., ... Langleben D. D. (2013). Content matters: Neuroimaging investigation of brain and behavioral impact of televised anti-tobacco public service announcements. Journal of Neuroscience, 33, 7420–7427. doi:10.1523/JNEUROSCI.3840-12.2013
  11. Weber R., Huskey R., Mangus J. M., Westcott-Baker A., & Turner B. O. (2014). Neural predictors of message effectiveness during counterarguing in antidrug campaigns. Communication Monographs, 82(1), 4–30. doi:10.1080/03637751.2014.971414

Articles from The Journal of Communication are provided here courtesy of Oxford University Press
