Author manuscript; available in PMC: 2020 Jul 8.
Published in final edited form as: Soc Sci Med. 2014 Aug 28;120:1–11. doi: 10.1016/j.socscimed.2014.08.039

Effects of Comparative Claims in Prescription Drug Direct-to-Consumer Advertising on Consumer Perceptions and Recall

Amie C O’Donoghue 1, Pamela A Williams 2, Helen W Sullivan 1, Vanessa Boudewyns 2, Claudia Squire 2, Jessica F Willoughby 2
PMCID: PMC7342488  NIHMSID: NIHMS1604251  PMID: 25194471

Abstract

Although pharmaceutical companies cannot make comparative claims in direct-to-consumer (DTC) ads for prescription drugs without substantial evidence, the U.S. Food and Drug Administration permits some comparisons based on labeled attributes of the drug, such as dosing. Researchers have examined comparative advertising for packaged goods; however, scant research has examined comparative DTC advertising. We conducted two studies to determine if comparative claims in DTC ads influence consumers’ perceptions and recall of drug information. In Experiment 1, participants with osteoarthritis (n = 1,934) viewed a fictitious print or video DTC ad that had no comparative claim or made an efficacy comparison to a named or unnamed competitor. Participants who viewed print (but not video) ads with named competitors had greater efficacy and lower risk perceptions than participants who viewed unnamed competitor and noncomparative ads. In Experiment 2, participants with high cholesterol or high body mass index (n = 5,317) viewed a fictitious print or video DTC ad that had no comparative claim or made a comparison to a named or unnamed competitor. We varied the type of comparison (of indication, dosing, or mechanism of action) and whether the comparison was accompanied by a visual depiction. Participants who viewed print and video ads with named competitors had greater efficacy perceptions than participants who viewed unnamed competitor and noncomparative ads. Unlike Experiment 1, named competitors in print ads resulted in higher risk perceptions than unnamed competitors. In video ads, participants who saw an indication comparison had greater benefit recall than participants who saw dosing or mechanism of action comparisons. In addition, visual depictions of the comparison decreased risk recall for video ads. 
Overall, the results suggest that comparative claims in DTC ads could mislead consumers about a drug’s efficacy and risk; therefore, caution should be used when presenting comparative claims in DTC ads.

Keywords: United States, direct-to-consumer (DTC) advertisements, prescription drugs, marketing, comparative advertising, perceived risk, communication


U.S. consumers spent about $263 billion on prescription drugs in 2010, and spending is projected to double over the next decade (Centers for Medicare and Medicaid Services, 2011). As new drugs enter the market, consumers and their healthcare professionals often choose among several drugs that treat the same medical condition. Consequently, information about how drugs compare to one another may play a pivotal role in prescription drug decisions. Not surprisingly, there have been increasing calls for comparative medical information. For example, as part of the American Recovery and Reinvestment Act of 2009, the Agency for Healthcare Research and Quality funded a set of CHOICE (Clinical and Health Outcomes Initiative in Comparative Effectiveness) studies designed to explore comparative effectiveness. In light of these efforts, it is important to explore how consumers process comparative claims about prescription drugs in direct-to-consumer (DTC) ads.

Advertising research focused on the effects of comparative advertising on consumer attitudes—including attitudes toward the ad, the brand, and product use—has produced mixed results (Ang & Leong, 1994; Demirdjian, 1983; Grewal, Kavanoor, Fern, Costly, & Barnes, 1997). Research on the superiority of comparative versus noncomparative ads on purchase intentions, however, has been more conclusive. Relative to noncomparative ads, comparative ads were shown to result in greater purchase intentions (Ang & Leong, 1994; Demirdjian, 1983; Grewal et al., 1997; Miniard, Barone, Rose, & Manning, 1994).

Importantly, research suggests that comparative claims can be misleading even if they are not blatantly false (Xie & Boush, 2011). As a natural part of text comprehension, individuals routinely make inferences, going beyond what is directly stated in the text (Harris, 1981). Harris, Dubitsky, and Bruno (1983) found that consumers then believe such inferences were directly stated in the ad. Mitra, Swasy, and Aikin (2006) found that market leadership claims imply a more effective product, even when supporting data are not provided. In addition, Burke (1988) found that “speed of relief” claims in ads for ibuprofen-based drugs (e.g., “Brand X relieves headache pain as fast as aspirin”) were positively associated with beliefs about the advertised brand’s effectiveness as a pain reliever.

Despite extensive research on comparative advertising of consumer products, a limited number of studies have explored how DTC ads could help consumers compare drugs (Hauber, Mohamed, Johnson, & Falvey, 2009; Woloshin, Schwartz, & Welch, 2009). And aside from the few studies mentioned above, little research has examined comparative DTC ads themselves (Mitra et al., 2006). Consequently, it is unclear whether these findings are applicable to comparative prescription drug ads or how such claims influence consumers’ perceived efficacy of advertised drugs. Nonetheless, these findings illustrate the potential for comparative DTC ads to influence perceptions of the benefits and risks of advertised drugs, possibly leading consumers to magnify benefits and minimize risks. In fact, previous DTC research has consistently found that various DTC advertising strategies that increase perceptions of benefit also decrease perceptions of risk (Bowman, 2002; Cohen, Ferrell, & Johnson, 2002; Davis, 2007; Hoek, 2008; Kees, Bone, Kozup, & Ellen, 2008). Consequently, a comparative claim that focuses on the superior effectiveness of a drug may also influence risk perceptions. In sum, comparative DTC ads may give consumers more information with which to make informed decisions. However, such ads may impede informed decisions if they mislead consumers about the characteristics of the product.

Although pharmaceutical companies cannot make comparative claims in DTC ads for prescription drugs without substantial evidence, the U.S. Food and Drug Administration (FDA) permits some comparisons based on labeled attributes of the drug, such as dosing (Applications, 2008). For example, prescription birth control ads often emphasize their mode of administration or different indications beyond pregnancy prevention, such as acne control. Some anti-depressant ads emphasize dual indications for depression and chronic pain control. Because few head-to-head clinical trials have been conducted, few DTC ads include direct efficacy-based comparisons (Mitra et al., 2006).

Nonetheless, because comparative claims can form impressions without direct efficacy comparisons, the present study aims to investigate directly how consumers interpret and react to a variety of DTC comparative drug ads. Because DTC ads are frequently seen in print and television formats and these formats vary on a number of dimensions, including the viewers’ ability to self-pace exposure, we examined both print and video ads.

Specifically, in two studies, we explored two types of drug comparisons in DTC ads: (1) drug efficacy comparisons and (2) other evidence-based comparisons, such as dosing, mechanism of action, and indication. We also examined whether naming the competitor and emphasizing the comparison with a visual depiction influences efficacy and risk perceptions and recall. We hypothesized that comparative ads would lead to higher perceived efficacy and lower perceived risk than noncomparative ads. We hypothesized that named comparisons would lead to higher perceived efficacy and lower perceived risk than unnamed comparisons. We also explored whether comparative ads would affect benefit and/or risk recall relative to noncomparative ads, and whether named comparisons would lead to greater benefit and/or risk recall than unnamed comparisons.

Experiment 1: Efficacy Comparisons in DTC Ads

Sample

To increase the likelihood that participants would attend to a DTC ad for a particular drug, we selected a sample of participants with self-reported osteoarthritis to view an ad for a fictitious osteoarthritis prescription drug. Participants (n = 1,934) were recruited from a nationally representative online consumer panel of U.S. adults. E-mail invitations were sent to a random sample of panel members who reported at pre-screening that they had osteoarthritis. One email reminder was sent after four days if no response had been received. See Table 1 for demographic information.

Table 1.

Summary of Study Experiments and Demographic Characteristics of Completed Participants

Experiment 1 Experiment 2
Study design 2 (comparison label: named, unnamed) + 1 (control group) 2 (comparison label: named, unnamed) × 3 (comparison theme: indication, dosing, mechanism of action) × 2 (comparison visual: visual, no visual) + 1 (control group)
Response rate 57.5% (1,934 participants out of 3,365 invited) 53.6% (5,317 participants out of 9,914 invited)
Sample U.S. adults with osteoarthritis U.S. adults with high cholesterol or high body mass index (BMI of 25+)
Sex
 Male 532 (27.5%) 2,808 (52.8%)
 Female 1,402 (72.5%) 2,509 (47.2%)
Age
 18–24 7 (0.4%) 77 (1.5%)
 25–34 27 (1.4%) 320 (6.0%)
 35–44 63 (3.3%) 537 (10.1%)
 45–54 246 (12.7%) 1,016 (19.1%)
 55–64 656 (33.9%) 1,532 (28.8%)
 65–74 671 (34.7%) 1,351 (25.4%)
 75+ 264 (13.7%) 484 (9.1%)
Race/Ethnicity
 White, Non-Hispanic 1,698 (87.8%) 4,438 (83.5%)
 Black, Non-Hispanic 96 (5.0%) 330 (6.2%)
 Other, Non-Hispanic 27 (1.4%) 110 (2.1%)
 Hispanic 57 (3.0%) 301 (5.7%)
 2+ Races, Non-Hispanic 56 (2.9%) 138 (2.7%)
Education
 Less than high school diploma 45 (2.3%) 138 (2.6%)
 High school diploma or equivalent 288 (14.9%) 959 (18.0%)
 Some college 728 (37.6%) 1,785 (33.6%)
 Bachelor’s degree or higher 873 (45.1%) 2,435 (45.8%)
Household Income
 Less than $25,000 304 (15.7%) 552 (10.4%)
 $25,000–$49,999 528 (27.3%) 1,324 (24.9%)
 $50,000–$74,999 481 (24.9%) 1,217 (22.9%)
 $75,000–$99,999 305 (15.8%) 977 (18.4%)
 $100,000–$174,999 258 (13.3%) 1,039 (19.5%)
 $175,000+ 58 (3.0%) 208 (3.9%)

Note. All participants were recruited from an existing online consumer probability-based panel representative of the U.S. adult population.

Method

Design.

We conducted a randomized experiment with print and video ads. For each type of ad, we manipulated the type of comparison label (named, unnamed) and included a control group (Figure 1). The video ad control condition was designed to include no competitor information but an error occurred in which comparative superimposed text inadvertently appeared, thus invalidating this condition. We did not examine findings from this condition.

Figure 1.

Experimental conditions for the two experiments. Note that arm 6 of Experiment 1 was not analyzed due to a stimuli creation error. For Experiment 2, an identical design was conducted in print and television ads, leading to 26 conditions total.

Print stimuli.

We created a print ad for a fictitious osteoarthritis prescription drug, Kesterin, and tailored it for the experimental conditions (see Figure 2). In all conditions, the ads presented identical information, except for the presence or absence of the comparative language. Participants in the named competitor condition saw an ad that compared the efficacy of Kesterin with an actual osteoarthritis prescription drug currently on the market (“Kesterin is more effective than Celebrex in controlling the pain and inflammation of osteoarthritis;” use of brand name comparisons does not imply endorsement on behalf of FDA). In the unnamed competitor condition, Kesterin was compared with “other prescription drugs.” Participants in the control condition saw the ad without a comparison. The first page of the print ad contained information about what the drug is for, risk information, and the experimental manipulation (presented in a text box to call attention to the comparison). The second page contained the legally required brief summary of the drug’s risk and benefits.

Figure 2.

The control condition (left) and unnamed competitor condition (right) ads for Experiment 1. Not shown is the named competitor ad, in which “Celebrex” replaces “other prescription drugs.”

Video stimuli.

The 60-second video ad for Kesterin contained two on-screen actors, one narrator, and on-screen superimposed text. To maintain consistency with the print ad, we used the same actors and visual setting. Consistent with current DTC television ads, the ad contained information about what condition the drug treats and a statement of important risks. Similar to the print ad, named and unnamed competitor conditions differed by whether the narrator referred to Celebrex by name (i.e., named competitor group) or simply compared Kesterin with “other prescription drugs” (i.e., unnamed competitor group).

Procedure

All invited participants were randomly assigned to 1 of 6 experimental conditions (Figure 1) by the project statistician (using a computer-generated randomization command in SAS). Participants who viewed print ads could navigate back and forth between the pages of the ad without time restrictions. Participants who viewed video ads were required to view the ad twice in succession to ensure they were sufficiently exposed to it. After viewing the ads, participants completed a brief questionnaire. Participants received points for participating in the study, which could later be exchanged for vouchers and gifts at a partner network. Both experiments were conducted between December 2012 and January 2013 and were approved by FDA’s Research Involving Human Subjects Committee and RTI’s Institutional Review Board.

Outcome measures.

Perceived efficacy.

Perceived efficacy was measured in three ways: efficacy magnitude (“Based on the information in the ad, how much would Kesterin help your osteoarthritis pain?”; 1 = very little, 7 = a lot), efficacy likelihood (“In your opinion, if 100 people take Kesterin, for how many will the drug work?”; open-ended), and comparative efficacy (“Compared to [Celebrex/other prescription drugs], what does the ad tell you about how well Kesterin works?”; 1 = much worse, 7 = much better).

Perceived risk.

Perceived risk was measured in three ways: risk magnitude (“If Kesterin did cause a person to have side effects, how serious do you think they would be?”; 1 = not at all serious, 7 = very serious), risk likelihood (“In your opinion, if 100 people take Kesterin, how many will have any side effects?”; open-ended), and comparative risk (“Compared to [Celebrex/other prescription drugs], what does the ad tell you about how risky or safe Kesterin is?”; 1 = much safer, 7 = much riskier).

Recall.

Benefit recall was measured by asking participants an open-ended question: “What did the ad say are the benefits of Kesterin?” Risk recall was measured by asking participants an open-ended question: “According to the ad, what are the side effects of Kesterin?” Responses were then coded independently to identify whether the recalled benefits and risks had been mentioned in the ad or not. Kappa calculations (Cohen, 1960) showed excellent reliability between coders for both benefit (κ = .94, print; κ = .86, video) and risk recall (κ = .84, print; κ = .94, video). Discrepancies between coders were identified and resolved by a third party. The number of correct benefits and risks that participants recalled was then calculated. All recalled benefits were summed to create an overall benefit recall score. Similarly, the recalled risks were summed to create an overall risk recall score. To standardize the measures, we rescaled the means to represent the percentage of correct items (ranging from 0–100).
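The reliability and rescaling steps above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: the binary coder vectors and function names are hypothetical, standing in for the coders' item-level judgments of whether each recalled benefit or risk appeared in the ad.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's (1960) kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the two coders agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each coder's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

def percent_correct(n_recalled, n_in_ad):
    """Rescale a correct-recall count to the 0-100 percent-correct score."""
    return 100 * n_recalled / n_in_ad

# Hypothetical codes: 1 = recalled item was mentioned in the ad, 0 = it was not
coder_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
kappa = cohens_kappa(coder_1, coder_2)
```

A kappa near .80 or above, as in both experiments here, is conventionally read as excellent inter-coder reliability.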

Analysis.

For the print ads, we conducted one-way ANOVAs to test whether dependent variables differed by comparison label (control [noncomparative], named competitor, unnamed competitor). Using a Bonferroni-adjusted p-value of .017 (.05/3), the effect size and direction of significant main effects were confirmed by planned comparisons of the unnamed competitor condition versus the control, the named competitor condition versus the control, and the unnamed competitor condition versus the named competitor condition. For the video ads, we conducted one-way ANOVAs to test whether dependent variables differed by comparison label (named competitor, unnamed competitor), excluding the control condition due to the error mentioned above. All Likert scales were treated as interval rather than ordinal data. We conducted all analyses with weighted data using SUDAAN 11.0 to ensure that the data more accurately represented the target population and to account for nonresponse, noncoverage, underrepresentation of minority groups, and other types of sampling and survey error. We report Cohen’s d as a measure of effect size; to aid interpretation, Cohen (1969) suggests that d = 0.2 is a small effect, 0.5 a medium effect, and 0.8 a large effect.
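The analysis pipeline for the print conditions can be sketched as below. This is a simplified, unweighted illustration with simulated ratings (the actual analyses used survey weights in SUDAAN 11.0); group means and sizes loosely echo Table 2 but are otherwise hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-7 comparative-efficacy ratings for the three label groups
named   = rng.normal(5.6, 1.2, 344).clip(1, 7)
unnamed = rng.normal(5.1, 1.2, 349).clip(1, 7)
control = rng.normal(5.0, 1.2, 324).clip(1, 7)

# Omnibus one-way ANOVA across the three comparison-label conditions
f_stat, p_omnibus = stats.f_oneway(named, unnamed, control)

def cohens_d(a, b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

# Three planned pairwise comparisons at the Bonferroni-adjusted alpha (.05 / 3)
alpha = 0.05 / 3
pairs = {"named vs. control": (named, control),
         "unnamed vs. control": (unnamed, control),
         "named vs. unnamed": (named, unnamed)}
for label, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{label}: d = {cohens_d(a, b):.2f}, significant = {p < alpha}")
```

Dividing alpha by the number of planned comparisons keeps the family-wise Type I error rate at .05 across the three tests.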

Results

Tables 2 and 3 present the weighted means and standard errors for dependent variables by experimental group.

Table 2.

Experiment 1, Print: Efficacy Perceptions, Risk Perceptions, and Recall: Weighted Means and Standard Errors

Columns: Benefit Recall (0–100) | Risk Recall (0–100) | Comparative Efficacy (1–7) | Efficacy Magnitude (1–7) | Efficacy Likelihood (0–100) | Comparative Risk (1–7) | Risk Magnitude (1–7) | Risk Likelihood (0–100)
Comparison label
 Named (n = 344) 35.51 (1.37) 10.38 (0.62) 5.57 (0.09) 4.81 (0.14) 59.96 (2.01) 4.22 (0.09) 5.46 (0.11) 34.90 (2.17)
 Unnamed (n = 349) 32.63 (1.46) 10.14 (0.61) 5.07 (0.12)* 4.80 (0.13) 60.37 (1.65) 4.62 (0.11)* 5.54 (0.11) 35.66 (2.16)
 Control (n = 324) 34.06 (1.24) 10.48 (0.55) 4.97 (0.10)* 4.93 (0.12) 63.57 (1.94) 4.75 (0.10)* 5.74 (0.10) 34.33 (1.87)
* Significantly different from the named comparison condition within each column.

Note. Benefit and risk recall are reported as the percent correct, from 0 (none correct) to 100 (all correct). Comparative efficacy was assessed on a scale of 1 = much worse [than other drugs] to 7 = much better [than other drugs]. Efficacy magnitude was assessed on a scale of 1 = [would help] very little to 7 = [would help] a lot. Efficacy likelihood was assessed on a scale of 0 (Kesterin will work for none of the people taking it) to 100 (Kesterin will work for all of the people taking it). Comparative risk was assessed on a scale of 1 = much safer [than other drugs] to 7 = much riskier [than other drugs]. Risk magnitude was assessed on a scale of 1 = [side effects are] not at all serious to 7 = [side effects are] very serious. Risk likelihood was assessed on a scale of 0 (no people taking Kesterin will have side effects) to 100 (all people taking Kesterin will have side effects).

Table 3.

Experiment 1, Video: Efficacy Perceptions, Risk Perceptions, and Recall: Weighted Means and Standard Errors

Columns: Benefit Recall (0–100) | Risk Recall (0–100) | Comparative Efficacy (1–7) | Efficacy Magnitude (1–7) | Efficacy Likelihood (0–100) | Comparative Risk (1–7) | Risk Magnitude (1–7) | Risk Likelihood (0–100)
Comparison label
 Named (n = 310) 33.11 (1.57) 11.94 (0.83) 5.64 (0.10) 4.92 (0.11) 62.45 (1.69) 3.98 (0.10) 4.90 (0.11) 29.72 (1.55)
 Unnamed (n = 306) 36.32 (1.35) 12.90 (0.79) 5.43 (0.09) 4.97 (0.12) 60.62 (1.68) 3.94 (0.09) 4.69 (0.13) 30.11 (1.73)

Note. Benefit and risk recall are reported as the percent correct, from 0 (none correct) to 100 (all correct). Comparative efficacy was assessed on a scale of 1 = much worse [than other drugs] to 7 = much better [than other drugs]. Efficacy magnitude was assessed on a scale of 1 = [would help] very little to 7 = [would help] a lot. Efficacy likelihood was assessed on a scale of 0 (Kesterin will work for none of the people taking it) to 100 (Kesterin will work for all of the people taking it). Comparative risk was assessed on a scale of 1 = much safer [than other drugs] to 7 = much riskier [than other drugs]. Risk magnitude was assessed on a scale of 1 = [side effects are] not at all serious to 7 = [side effects are] very serious. Risk likelihood was assessed on a scale of 0 (no people taking Kesterin will have side effects) to 100 (all people taking Kesterin will have side effects).

One-way ANOVAs revealed that comparative efficacy, F (2, 1007) = 11.26, p < .001, and comparative risk, F (2, 1004) = 7.72, p < .001, differed significantly across the three comparison label conditions in the print ad conditions. Planned comparisons revealed that comparative efficacy was significantly greater in the named competitor group compared with the control group (d = .20, p < .001) and unnamed competitor group (d = .11, p < .001). The comparison between the unnamed competitor group and the control group was not statistically significant, p = .52. Planned comparisons for comparative risk indicated that participants’ perceptions of comparative risk were lower for those in the named competitor group compared with the control group (d = −.40, p < .001) and the unnamed competitor group (d = −.30, p = .006). Comparative risk between the unnamed competitor group and the control group did not significantly differ, p = .52.

No significant effects were found between the named and unnamed competitor groups in the video ad conditions, ps > .05.

Discussion

Experiment 1 assessed how an efficacy comparison (named or unnamed) in DTC print and video ads affected consumers’ perceptions of drug efficacy and risk, and recall. As hypothesized, when the print ad included a named competitor (versus an unnamed competitor or no comparison at all), participants reported that the advertised drug had greater efficacy and was less risky than other prescription drugs. Although higher efficacy perceptions are plausible, perhaps accurately reflecting the information provided in the ad, lower risk perceptions are unwarranted, as the DTC ads did not include information on the relative risk of the advertised drug. This finding is consistent with previous research that shows an inverse relationship between perceived benefit and perceived risk (Davis, 2007). This finding is also consistent with prior research demonstrating that comparisons need not be false to mislead (Burke, 1988; Mitra et al., 2006; Xie & Boush, 2011).

Although we did find differences in comparative efficacy and comparative risk perceptions between the groups in the print ad conditions, responses on other measures of perceived risk and efficacy did not differ between groups. This finding is consistent with previous studies showing that relative measures (subjective measures requiring consumer judgment about similarities or differences between brands) are more sensitive to comparative advertising effects than are nonrelative measures (objective measures of performance, devoid of a comparison; Grewal et al., 1997; Pechmann & Ratneshwar, 1991; Rose, Miniard, Barone, Manning, & Till, 1993; Snyder, 1992). However, it should be noted that these findings were not replicated in the video conditions, as we found no evidence that named versus unnamed comparisons differentially affected perceptions there.

We found, as expected, that direct efficacy comparisons in an ad for an osteoarthritis drug influenced comparative efficacy and risk perceptions, particularly if they were named. Experiment 2 was designed to evaluate efficacy and risk perceptions and recall for a cholesterol-lowering drug with various non-efficacy comparisons, extending the findings in several ways.

Experiment 2: Drug-Attribute Comparisons in DTC Ads

In Experiment 1, the ads claimed directly that the drug was more effective than other drugs. Experiment 2 examined non-efficacy comparisons in a different medical condition, again examining named and unnamed comparisons in print and video ads. This experiment investigated three attributes of drugs currently compared in some DTC prescription drug ads. First, indication-to-indication comparisons highlight the approved indications of the advertised and competitor drugs (e.g., “Unlike Crestor, which only treats high cholesterol, Plevoral treats high cholesterol and provides an appetite suppressant that helps patients lose weight”). Second, dosing comparisons compare the dosing schedule or characteristics of two drugs (e.g., “Unlike Crestor, which has to be taken by mouth every day, Plevoral is a skin patch that needs to be replaced just once a month”). Third, mechanism of action (MOA) comparisons involve differences in the way the two drugs work (e.g., “Unlike Crestor, Plevoral targets cholesterol in both your intestines and your liver”).

Although an additional indication, a different dosing method, or a different MOA could all be construed as benefits, they should not logically affect how well consumers think a drug works for its primary indication—or how risky it is. However, thoughts activated during processing of these comparisons may bias the processing of the noncomparative information presented in the ad (i.e., information about the drug’s efficacy and risk; Barone, Rose, Miniard, & Manning, 1999). Some evidence suggests that comparative claims about benefits, such as indication, dosing, or MOA, could stimulate a consumer to make misleading inferences about a product’s efficacy (Burke, 1988).

In addition to examining different comparisons, Experiment 2 explored the use of visuals accompanying these comparisons, such as a depiction of a patch versus pills. Research suggests that, just as comparative claims can influence recall by increasing attention to the ad (Pechmann & Stewart, 1990), the use of visuals can influence recall positively (Grossbart, Muehling, & Kangun, 1986), especially when the visuals depict product features mentioned in the ad (Malaviya, Kisielius, & Sternthal, 1996). However, because comprehending a visual requires complex processing (Scott, 1994), visual stimuli may compete for consumer attention; further, individuals have natural limitations on their ability to process multiple sources of information (Marois & Ivanoff, 2005), thereby decreasing recall. Moreover, visuals may have different effects in print versus video ads. Whereas consumers view static print ads in a self-paced manner, video ads have additional features (e.g., music, dynamic visual presentation) that are not paced by the consumer. Given these inconclusive findings, we included visuals to determine whether their presence would moderate the influence of the comparisons.

We tested the same hypotheses from Experiment 1 in Experiment 2, replacing the efficacy comparison with drug attribute comparisons. We also explored whether the presence of a visual depiction of the comparison would influence benefit perceptions, risk perceptions, and recall.

Sample

To increase the likelihood that participants would attend to a DTC ad for a particular drug, we selected a sample of participants with self-reported high cholesterol (65% of the sample) or high body mass index (BMI) without high cholesterol (35% of the sample) to view an ad for a fictitious high-cholesterol prescription drug. Participants (n = 5,317) were recruited from a nationally representative online consumer panel of U.S. adults. E-mail invitations were sent to a random sample of panel members who reported at pre-screening that they had high cholesterol or high BMI. One email reminder was sent after four days if no response had been received. See Table 1 for demographic information.

Method

Design.

We conducted a randomized experiment with print and video ads. For each type of ad, we manipulated the type of comparison label (named, unnamed), the comparison attribute (indication, dosing, mechanism of action), the presence or absence of a comparison visual, and included a control group (Figure 1).

Print stimuli.

We created a print ad for a fictitious high-cholesterol prescription drug, Plevoral (see Figure 3). Replicating Experiment 1, the comparison label was manipulated by either making a named competitor comparison (i.e., comparing Plevoral to Crestor, an actual cholesterol prescription drug), an unnamed competitor comparison (i.e., comparing Plevoral to “other prescription drugs”), or no comparison (control). Two additional features were manipulated: comparison theme (indication, dosing, MOA) and comparison visual (visual present, not present). To maintain rigorous experimental control, all of the ads in Experiment 2 contained all three of the additional benefits: the drug treated high cholesterol and suppressed appetite (indication), was delivered in patch form (dosing), and worked in both the liver and the intestines (MOA). The difference among conditions was the addition of a comparative claim that highlighted each benefit (e.g., “unlike other prescription drugs, which only treat high cholesterol, Plevoral treats high cholesterol and provides an appetite suppressant that helps patients lose weight”).

Figure 3.

Two example ads from Experiment 2: the print control condition ad (left) and the print unnamed, visual-present, indication comparison condition ad (right). All print ad manipulations occurred within the solid horizontal bar across the middle of the page and the lower right quadrant of the ad. The video conditions included the same actor, theme, and a similar script as the print ads.

Video stimuli.

The 90-second video ad contained two on-screen actors, one narrator, and on-screen superimposed text. The actors and visual setting were the same as the print ad described above. As with the print ads, all video ads contained information about what condition the drug treats, a statement of important risks, and additional content that varied based on the experimental condition (i.e., the same manipulations as used in the print ad). The control ad was identical to the experimental ads except that, during the part of the ad where the visuals and comparisons would be, participants saw the Plevoral logo fading into a blue background (this image was used for all nonvisual manipulations).

Procedure.

All invited participants were randomly assigned to 1 of 26 experimental conditions (Figure 1) by the project statistician (using a computer-generated randomization command in SAS). Participants who viewed print ads could navigate back and forth between the pages of the ad without time restrictions. Participants who viewed video ads were required to view the ad twice in succession to ensure they were sufficiently exposed to it. After viewing the ads, participants completed a brief questionnaire. Participants received points for participating in the study, which could later be exchanged for vouchers and gifts at a partner network.

Outcome measures.

We used the same outcome measures as in Experiment 1, with only slight modifications (e.g., changing the name of the fictitious drug from Kesterin to Plevoral, the condition from osteoarthritis to high cholesterol, and small changes to question wording to account for these changes). For benefit recall, correct responses included any benefit that was mentioned in the ad; this included the primary indication (i.e., reduces high cholesterol) as well as the benefits that were featured in the comparisons (e.g., appetite suppressant, delivered as a skin patch, targets cholesterol in two ways). Inter-rater reliability was high for benefit recall (κ = .98, print; κ = .91, video) and risk recall (κ = .92, print; κ = .98, video).

Analysis.

We conducted analyses for print and video ads separately. Our main analyses consisted of 2 (comparison label: named competitor, unnamed competitor) × 3 (comparison theme: indication, dosing, MOA) × 2 (comparison visual: present, not present) three-way ANOVAs for each dependent variable. Initial analyses included all interaction terms for this fully factorial design; however, we subsequently dropped nonsignificant interaction terms and report only the reduced models here. We also examined one-way ANOVAs to test whether dependent variables differed by comparison label (control [noncomparative], named competitor, unnamed competitor). Effect sizes and directions were confirmed by planned comparisons using a Bonferroni-adjusted p-value of .017 to control for Type I error. As in Experiment 1, all Likert scales were treated as interval rather than ordinal data and analyses were performed on weighted data using SUDAAN 11.0.
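For readers unfamiliar with the machinery, the following stdlib-only Python sketch shows the one-way ANOVA F statistic and the Bonferroni-adjusted alpha (.05/3 ≈ .017) used for the planned comparisons. It is an unweighted illustration with invented group values, not the SUDAAN 11.0 analysis of weighted data reported here:

```python
# Unweighted illustration of a one-way ANOVA F statistic; group values are
# invented, and the study's actual analyses used weighted data in SUDAAN 11.0.
def one_way_anova_f(*groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f([1, 2, 3], [2, 3, 4], [4, 5, 6])
alpha_adjusted = 0.05 / 3  # Bonferroni correction for 3 planned comparisons; reported as .017
```

Dividing the nominal .05 alpha by the number of planned comparisons keeps the family-wise Type I error rate at .05 across the three tests.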

Results

Print ad conditions.

Table 4, which is organized by the different experimental groups viewing the print DTC ads, displays the weighted means and standard errors for the dependent variables.

Table 4.

Experiment 2, Print: Efficacy Perceptions, Risk Perceptions, and Recall: Weighted Means and Standard Errors

Condition | Benefit Recall (0–100) | Risk Recall (0–100) | Comparative Efficacy (1–7) | Efficacy Magnitude (1–7) | Efficacy Likelihood (0–100) | Comparative Risk (1–7) | Risk Magnitude (1–7) | Risk Likelihood (0–100)
(Recall columns are percent correct; the magnitude and likelihood measures are noncomparative.)
Comparison label
 Named (n = 1,307) 18.05 (0.62) 10.35 (0.54) 5.33 (0.06) 4.77 (0.06) 60.15 (1.15) 3.95 (0.06) 4.88 (0.07) 35.33 (1.40)
 Unnamed (n = 1,307) 19.19 (0.60) 9.85 (0.46) 5.08 (0.06)* 4.74 (0.06) 60.52 (1.08) 4.09 (0.06) 4.95 (0.07) 30.68 (1.09)*
 Control (n = 213) 19.51 (1.29) 11.78 (1.20) 4.92 (0.14)* 4.49 (0.13) 62.42 (2.80) 3.80 (0.13) 4.99 (0.19) 33.76 (2.83)
Comparison theme
 Indication (n = 897) 18.19 (0.73) 10.15 (0.64) 5.19 (0.07) 4.76 (0.08) 60.36 (1.41) 4.05 (0.07) 4.93 (0.08) 32.48 (1.66)
 Dosing (n = 892) 18.42 (0.73) 10.07 (0.60) 5.17 (0.06) 4.78 (0.07) 59.22 (1.33) 3.99 (0.07) 4.89 (0.08) 32.77 (1.42)
 MOA (n = 825) 19.38 (0.79) 10.05 (0.58) 5.25 (0.07) 4.72 (0.08) 61.55 (1.30) 4.03 (0.07) 4.92 (0.09) 33.44 (1.49)
Comparison visual
 Visual (n = 1,302) 18.78 (0.65) 10.03 (0.49) 5.23 (0.06) 4.67 (0.07) 60.06 (1.20) 4.09 (0.06) 4.96 (0.07) 32.89 (1.17)
 No Visual (n = 1,312) 18.51 (0.58) 10.15 (0.51) 5.18 (0.05) 4.83 (0.06) 60.60 (1.03) 3.97 (0.05) 4.87 (0.06) 32.86 (1.31)
* Significantly different from the named comparison condition within each column. Benefit and risk recall are reported as percent correct, from 0 (none correct) to 100 (all correct). Comparative efficacy was assessed on a scale of 1 = much worse [than other drugs] to 7 = much better [than other drugs]. Efficacy magnitude was assessed on a scale of 1 = [would help] very little to 7 = [would help] a lot. Efficacy likelihood was assessed on a scale of 0 (Plevoral will work for none of the people taking it) to 100 (Plevoral will work for all of the people taking it). Comparative risk was assessed on a scale of 1 = much safer [than other drugs] to 7 = much riskier [than other drugs]. Risk magnitude was assessed on a scale of 1 = [side effects are] not at all serious to 7 = [side effects are] very serious. Risk likelihood was assessed on a scale of 0 (no people taking Plevoral will have side effects) to 100 (all people taking Plevoral will have side effects).

Perceived efficacy.

One-way ANOVAs with planned comparisons revealed that comparative efficacy differed significantly across the three comparison label conditions, F (2, 2,790) = 7.42, p < .001. Participants who saw the named competitor ad had significantly greater comparative efficacy perceptions than those who saw the control ad (d = .36, p = .01). The difference between the unnamed competitor ad and the control group was not statistically significant. In addition, a factorial between-subjects ANOVA (excluding the control group) found a main effect of comparison label on comparative efficacy (F [2, 2,782] = 11.28, p < .001), such that perceived comparative efficacy was greater for the named competitor groups than the unnamed competitor groups (d = .22). There were no significant effects for comparison theme or comparison visual.

Perceived risk.

A factorial between-subjects ANOVA (excluding the control group) found a main effect of comparison label on perceived likelihood of risk (F [2, 2,782] = 7.02, p = .008), with participants in the named competitor group reporting higher perceived likelihood of risk than participants in the unnamed competitor group (d = −.12). There were no significant effects for comparison theme or comparison visual.

Video ad conditions.

Table 5 displays the weighted means and standard errors for the dependent variables, separated by the different experimental groups.

Table 5.

Experiment 2, Video: Efficacy Perceptions, Risk Perceptions, and Recall: Weighted Means and Standard Errors

Condition | Benefit Recall (0–100) | Risk Recall (0–100) | Comparative Efficacy (1–7) | Efficacy Magnitude (1–7) | Efficacy Likelihood (0–100) | Comparative Risk (1–7) | Risk Magnitude (1–7) | Risk Likelihood (0–100)
(Recall columns are percent correct; the magnitude and likelihood measures are noncomparative.)
Comparison label
 Named (n = 1,186) 21.23 (0.61) 9.04 (0.49) 5.55 (0.06) 4.87 (0.06) 65.32 (1.18) 3.56 (0.07) 4.49 (0.08) 33.08 (1.33)
 Unnamed (n = 1,120) 20.62 (0.62) 9.02 (0.56) 5.14 (0.06)* 4.69 (0.07) 62.62 (1.46) 3.66 (0.06) 4.38 (0.08) 33.47 (1.47)
 Control (n = 184) 18.39 (1.38) 7.73 (1.19) 4.77 (0.12)* 4.39 (0.15)* 65.99 (3.46) 3.74 (0.11) 4.51 (0.14) 32.32 (3.11)
Comparison theme
 Indication (n = 752) 23.48 (0.68) ^ 9.09 (0.68) 5.41 (0.07)^ 4.73 (0.08) 62.02 (1.56) 3.52 (0.07) 4.43 (0.09) 34.68 (1.73)
 Dosing (n = 767) 19.24 (0.72) 8.54 (0.57) 5.13 (0.07) 4.81 (0.08) 64.58 (1.85) 3.75 (0.09) 4.31 (0.11) 33.22 (1.76)
 MOA (n = 787) 19.90 (0.83) 9.45 (0.67) 5.50 (0.07) ^ 4.81 (0.10) 65.57 (1.38) 3.57 (0.07) 4.56 (0.10) 31.77 (1.63)
Comparison visual
 Visual (n = 1,157) 21.23 (0.63) 8.17 (0.51) 5.39 (0.06) 4.73 (0.07) 63.80 (1.33) 3.57 (0.06) 4.47 (0.08) 33.07 (1.46)
 No Visual (n = 1,149) 20.65 (0.59) 9.82 (0.53) 5.31 (0.06) 4.84 (0.06) 64.20 (1.31) 3.64 (0.06) 4.40 (0.08) 33.45 (1.34)
* Significantly different from the named comparison condition within each column.
Significantly different from the unnamed comparison condition within each column.
^ Significantly different from the dosing condition within each column.
Significantly different from the MOA condition within each column.
Significantly different from the no visual condition within each column.
Benefit and risk recall are reported as percent correct, from 0 (none correct) to 100 (all correct). Comparative efficacy was assessed on a scale of 1 = much worse [than other drugs] to 7 = much better [than other drugs]. Efficacy magnitude was assessed on a scale of 1 = [would help] very little to 7 = [would help] a lot. Efficacy likelihood was assessed on a scale of 0 (Plevoral will work for none of the people taking it) to 100 (Plevoral will work for all of the people taking it). Comparative risk was assessed on a scale of 1 = much safer [than other drugs] to 7 = much riskier [than other drugs]. Risk magnitude was assessed on a scale of 1 = [side effects are] not at all serious to 7 = [side effects are] very serious. Risk likelihood was assessed on a scale of 0 (no people taking Plevoral will have side effects) to 100 (all people taking Plevoral will have side effects).

Perceived efficacy.

A one-way ANOVA with planned comparisons found that comparative efficacy differed significantly across the three comparison label conditions, F (2, 2,465) = 23.62, p < .001. Participants in the named (d = .73, p < .001) and unnamed (d = .35, p < .005) competitor groups had significantly greater comparative efficacy perceptions than participants in the control group. See Table 5.

A factorial between-subjects ANOVA (excluding the control group) found significant main effects for comparison label and comparison theme on participants’ perceptions of comparative efficacy (Comparison label: F [2, 2,459] = 6.85, p = .001; Comparison theme: F [2, 2,459] = 24.58, p < .001). The named competitor group reported greater comparative efficacy than the unnamed competitor group (d = .37). Planned comparisons between the comparison theme groups showed that comparative efficacy perceptions were higher for those in the indication group (d = .25, p = .004) and the MOA group (d = .32, p < .001) compared with the dosing group (Table 5).

A one-way ANOVA with planned comparisons also revealed that the magnitude of perceived efficacy differed significantly across the three comparison label conditions, F (2, 2,459) = 4.88, p = .01. Participants in the named competitor group, compared with participants in the control group, reported that Plevoral would help lower their cholesterol more (d = .36, p = .004). The difference between the unnamed competitor group and control group was not statistically significant (p = .08).

Perceived risk.

A factorial between-subjects ANOVA found that, although there were no significant main effects, there was a significant interaction effect between visuals and comparison theme on the perceived likelihood of risk, F (2, 2,459) = 4.56, p = .01. However, we found no significant differences at p < .017 when we examined three planned comparisons between visual versus no visual within each of the three comparison theme groups (indication, dosing, MOA).

Recall.

A factorial between-subjects ANOVA found a significant main effect of comparison theme on benefit recall (F [2, 2,459] = 10.66, p < .001). Planned comparisons revealed that the indication group had significantly greater benefit recall than both the dosing (d = .36, p < .001) and MOA groups (d = .29, p < .001). No significant difference in benefit recall was found between the MOA and dosing groups. With respect to risk recall, comparison visual had a significant main effect (F [2, 2,459] = 5.17, p = .02), such that people who saw the visual depiction of the comparison had significantly lower risk recall (d = −.15) than those who did not see a visual (see Table 5 for means).

Discussion

Experiment 2 differed from Experiment 1 in three ways. By examining 1) a different medical condition and prescription drug to treat it, 2) drug attributes other than efficacy (i.e., drug-label comparisons of indication, dosing, or MOA), and 3) the presence of visuals depicting these comparisons, Experiment 2 provides additional information on the comparisons in DTC ads that may influence consumers’ perceptions.

Although Experiment 2 did not manipulate efficacy comparisons directly, we still found differences in efficacy perceptions. As in Experiment 1, we found that, in print conditions, named competitors resulted in greater comparative perceived efficacy than unnamed competitors or no competitors at all. This was replicated in the video conditions, where unnamed and named competitors both resulted in greater comparative perceived efficacy than no competitors at all. These findings support our hypotheses and build on Experiment 1 because, in this case, differences in perceived efficacy reflect consumer inferences about efficacy from peripheral characteristics of the drug. This is consistent with previous work (Harris, 1981; Harris et al., 1983) that showed that consumers widely make inferences from text and attribute the inferences to the text itself.

Unlike Experiment 1, we found in Experiment 2 that named competitors in the print ad conditions led to higher perceived risk than unnamed competitors. Although our experimental designs were similar, they were not identical, as we did not manipulate direct efficacy comparisons in Experiment 2. Also, we used different medical conditions in each experiment for generalizability. Individuals with pain, which has noticeable and disruptive symptoms, may differ from individuals with high cholesterol, a silent condition. We used real-life examples for competitor drugs in the named conditions, and it is possible that participants had personal experiences with these drugs, causing differential patterns of responses in the two experiments. Crestor may have had more recent publicity and promotion than Celebrex, leading participants to hold particular opinions about a drug compared with it. Future research examining other medical conditions and other competitors would clarify these findings. In addition, future research should inquire about perceptions of the competitor drugs; because we did not in these studies, we cannot address participants’ assessments of these drugs or whether they affected our findings regarding the test drugs.

In video ads, the indication and MOA comparisons resulted in greater perceived comparative efficacy than the dosing comparison. Both indication and MOA comparisons provided an additional benefit over competitors that could be seen as related to the efficacy of the drug (what it does or how it works). Here, dosing comparisons represented a simple convenience claim. Viewers may think of this as merely a personal preference and thus not a characteristic that would affect how well the drug works.

Print ad conditions in both Experiments 1 and 2 revealed no differences in benefit or risk recall. For video ads in Experiment 2, benefit recall differed by comparison attribute: benefit recall was greater for participants in the indication group compared with the dosing and MOA groups. One possible explanation is that the weight-loss benefit highlighted in the indication groups was more salient than the mode of application or the mechanism of action. Salience could have led to increased attention and, subsequently, greater benefit recall (Petty & Cacioppo, 1986). Nonetheless, recall for the MOA was not consistent across outcome measures. That is, whereas participants in the MOA groups perceived greater comparative efficacy than those in the dosing conditions, they did not recall more benefits. Participants in the indication group both perceived greater comparative efficacy and recalled more of the benefits. The indication comparison is the drug attribute closest to efficacy, indicating what the drug can do as opposed to how it works or the convenience of taking it.

We also examined whether visuals depicting each comparison would influence our outcome measures and found one effect for visuals in video ads only: visual presence led to lower levels of risk recall. Because all comparisons focused on non-risk aspects of the drug, it is plausible that comparison visuals distracted viewers from the risk information. That we did not find a similar effect in print ads may be a function of the self-paced nature of the medium. Also, the visual manipulation in print ads was located in a quadrant of the ad and could be ignored if participants chose to. Visual depictions in the video ad were harder to ignore, as they appeared on the entire screen.

General Discussion

An emphasis on comparative effectiveness research, along with current indirect comparative DTC advertising, spurred our examination of whether comparative ads might affect consumers’ ability to make informed treatment decisions. Scant research has examined comparative prescription drug ads (e.g., Mitra et al., 2006). The current set of experiments addressed gaps in the literature and extended previous findings to the context of DTC ads. One of our strongest findings is that both efficacy and non-efficacy comparisons can result in greater perceived comparative efficacy than the same ads without comparisons. Previous research has examined this relationship in packaged-goods ads (Grewal et al., 1997); our work shows that these findings also apply to DTC ads, which have greater regulatory restrictions than packaged-goods ads. Although certain types of approved label-based, non-efficacy comparisons have been seen in DTC ads, this research suggests that consumers can construe even non-efficacy comparisons as reflecting how well the drug works. Moreover, we showed that risk perceptions can be affected by comparative claims, even when the claims do not concern risk issues. This supports previous research suggesting that claims can be misleading without being false (Mitra et al., 2006; Xie & Boush, 2011).

Our study showed, however, that perhaps not all comparisons affect the benefit-risk balance. We found that indication and MOA comparisons resulted in greater perceived efficacy than dosing comparisons. Comparisons of convenience or choice may not carry the same weight as characteristics more intimately related to how a drug works or what it can do. Thus, care should be taken to examine each comparison individually before generalizing the findings of this study. For example, we did not examine risk-related comparisons, such as “Drug X does not require blood tests.” Given our finding that introducing comparative benefit claims may influence beliefs about the risks of the drug, a direct comparison of risk-related information would reveal whether the opposite would also be true, as suggested in previous research (Bowman, 2002; Cohen et al., 2002; Davis, 2007; Hoek, 2008; Kees et al., 2008).

Previous research found that the majority of comparative ads use named claims (Kalro, Sivakumaran, & Marathe, 2010) and, as such, most comparative advertising research has focused on named comparative ads versus noncomparative ads. One study showed that when individuals think analytically about an ad, named comparisons result in greater perceived differences between the brands than unnamed comparisons (Kalro et al., 2013). Our findings were consistent with this study, showing that named comparisons are more influential than unnamed comparisons in DTC ads.

Consumers must consider a drug’s risks as well as its benefits to make informed decisions. We found that, in one case, comparisons led to lower perceived risk, whereas in another they led to higher perceived risk. In addition, adding visuals depicting benefit comparisons to video ads decreased risk recall. Thus, it appears that comparisons may affect the whole benefit-risk balance as consumers decide whether to discuss drugs with their healthcare professionals. Further research is required to disentangle the direction of influence over perceived risk.

Another contribution of this study is that we examined both print and video ads, whereas most DTC effectiveness studies have examined print stimuli alone. Although our perceived efficacy findings were consistent across print and video ads, the two modes differed in perceived risk, recall, and the effect (or non-effect) of visual depictions. Because DTC ads exist across a wide spectrum of media, it is crucial to determine whether effects differ depending on the medium. These findings highlight the importance of studying both static and dynamic message channels rather than assuming they will display the same effects.

This set of nationally representative, randomized experiments addressed methodological problems identified in previous studies, including the use of student samples, homemade ads, and measurement issues (Miniard et al., 1994; Rogers & Williams, 1989). Our studies used two nationally representative weighted samples to ensure valid and reliable population estimates of two distinct medical conditions. We also developed high-quality, realistic ads, and included both comparative and noncomparative measures of outcomes.

Most of our effects were small to medium. In general, effects in the video conditions (range of effect sizes = .15-.73) were larger than effects in the print conditions (range of effect sizes = .11-.40). Across studies, comparisons between the named competitor condition and the control condition were larger (range of effect sizes = .20-.73) than comparisons between the named and unnamed competitor conditions (range of effect sizes = .11-.37). The comparison theme had larger effects (range of effect sizes = .25-.36) than the comparison visual (d = .15 for the one significant effect).
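The effect sizes reported throughout are Cohen's d; a common formulation for two independent groups divides the mean difference by the pooled standard deviation, as in this sketch (the example groups are invented for illustration):

```python
import math

# Cohen's d for two independent groups using the pooled standard deviation,
# a common formulation of the effect sizes reported above; example groups
# are invented, not study data.
def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d([1, 2, 3], [3, 4, 5])  # -2.0 for these invented groups
```

Under the conventional benchmarks, |d| values near .2, .5, and .8 are typically read as small, medium, and large, which is the sense in which most of the effects above are "small to medium."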

There were some limitations to the study methods. First, we recruited participants who had been diagnosed with the target conditions so the ads would be more personally relevant to them; therefore, participants may have been more engaged than the typical consumer who sees a DTC ad. Second, we did not replicate the direct efficacy findings in Experiment 2, so we do not know whether the two samples would respond identically to those types of comparisons. Third, we only assessed perceptions and recall—not actual behaviors—so it is unclear whether the findings translate to the whole informed decision-making process. Fourth, because of a study administration error in Experiment 1, we could not compare the video control condition with the named and unnamed competitor conditions. Finally, in the second experiment, we examined participant responses to a cholesterol-lowering drug. Although this approximates existing, permissible DTC ads, we did not address the issue of overall cardiovascular outcomes, which are ultimately the goal of cholesterol-lowering treatments. Future research should examine those outcomes.

Overall, our findings support further investigation of the effects of comparative claims in DTC advertising. Our outcomes suggest that comparative claims in DTC ads could mislead consumers about a drug’s efficacy and risk. As shown here, comparisons of drug attributes may influence the perceived efficacy of the drug, even when the ads do not compare efficacy. Non-efficacy comparisons may even affect the perceived risk of the product. Further study is warranted to explore other types of efficacy and non-efficacy comparisons. Additionally, delineating how comparisons affect prescribing decisions by healthcare professionals and decisions to request or take prescription drugs by consumers is essential to ensure the safe use of prescription drugs. Overstatement of the efficacy and minimization of the risks in prescription drug ads are among the most common reasons for FDA’s untitled and warning letters to pharmaceutical companies regarding DTC advertising. The findings in the current studies suggest that comparative ads may imply efficacy even when including unrelated comparative claims, and these comparative claims may influence risk perceptions as well as perceived efficacy. Industry, regulators, and healthcare providers should be sensitive to the possibility that consumers and patients may be misled by these claims.

References

  1. Ang SH, & Leong SB (1994). Comparative advertising: superiority despite interference? Asia Pacific Journal of Management, 11(1), 33–46.
  2. Applications for FDA Approval to Market a New Drug, 21 C.F.R. §314.126 (2008). Retrieved from http://edocket.access.gpo.gov/cfr_2008/aprqtr/pdf/21cfr314.126.pdf
  3. Barone MJ, Rose RJ, Miniard PW, & Manning KC (1999). Enhancing the detection of misleading comparative advertising. Journal of Advertising Research, 39, 43–50.
  4. Beard FK (2013). A history of comparative advertising in the United States. Journalism & Communication Monographs, 15(3), 114–216.
  5. Bowman ML (2002). The perfidy of percentiles. Archives of Clinical Neuropsychology, 17(3), 295–303.
  6. Burke RJ (1988). Deception by implication: An experimental investigation. Journal of Consumer Research, 14, 483–494.
  7. Centers for Medicare and Medicaid Services. (2011). National health expenditure data fact sheet. Retrieved from http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NHE-Fact-Sheet.html
  8. Cohen DJ, Ferrell JM, & Johnson N (2002). What very small numbers mean. Journal of Experimental Psychology: General, 131(3), 424–442.
  9. Cohen J (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. doi: 10.1177/001316446002000104
  10. Cohen J (1969). Statistical power analysis for the behavioral sciences. San Diego, CA: Academic Press.
  11. Curry TJ, Jarosch J, & Pacholok S (2005). Are direct to consumer advertisements of prescription drugs educational?: Comparing 1992 to 2002. Journal of Drug Education, 35(3), 217–232.
  12. Davis JJ (2007). Consumers’ preferences for the communication of risk information in drug advertising. Health Affairs, 26(3), 863–870.
  13. Demirdjian ZS (1983). Sales effectiveness of comparative advertising: An experimental field investigation. Journal of Consumer Research, 10, 362–364.
  14. Droge C, & Darmon RY (1987). Associative positioning strategies through comparative advertising: Attribute versus overall similarity approaches. Journal of Marketing Research, 24, 377–388.
  15. Gorn GJ, & Weinberg CB (1984). The impact of comparative advertising on perception and attitude: Some positive findings. Journal of Consumer Research, 11, 719–727.
  16. Government Accountability Office. (2006). Prescription drugs: Improvements needed in FDA’s oversight of direct-to-consumer advertising (GAO-07–54). Retrieved from http://gao.gov/products/GAO-07-54
  17. Grewal D, Kavanoor S, Fern EF, Costley C, & Barnes J (1997). Comparative versus noncomparative advertising: A meta-analysis. The Journal of Marketing, 61(4), 1–15.
  18. Griffin JP, Godfrey BM, & Sherman RE (2012). Regulatory requirements of the Food and Drug Administration would preclude product claims based on observational research. Health Affairs, 31(10), 2188–2192.
  19. Grossbart S, Muehling DD, & Kangun N (1986). Verbal and visual references to competition in comparative advertising. Journal of Advertising Research, 15(1), 10–23.
  20. Harris RJ (1981). Inferences in information processing. In Bower GH (Ed.), The psychology of learning and motivation (Vol. 15, pp. 81–128). New York: Academic Press, Inc.
  21. Harris RJ, Dubitsky TM, & Bruno KJ (1983). Psycholinguistic studies of misleading advertising. In Harris RJ (Ed.), Information processing research in advertising (pp. 241–262). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
  22. Hauber AB, Mohamed AF, Johnson FR, & Falvey H (2009). Treatment preferences and medication adherence of people with Type 2 diabetes using oral glucose-lowering agents. Diabetic Medicine, 26(4), 416–424.
  23. Hoek J (2008). Ethical and practical implications of pharmaceutical direct-to-consumer advertising. International Journal of Nonprofit and Voluntary Sector Marketing, 13(1), 73–87.
  24. Iyer ES (1988). The influence of verbal content and relative newness on the effectiveness of comparative advertising. Journal of Advertising, 17(3), 15–21.
  25. Kalro AD, Sivakumaran B, & Marathe RR (2010). Comparative advertising in India: A content analysis of English print advertisements. Journal of International Consumer Marketing, 22(4), 377–394.
  26. Kalro AD, Sivakumaran B, & Marathe RR (2013). Direct or indirect comparative ads: The moderating role of information processing modes. Journal of Consumer Behaviour, 12(2), 133–147.
  27. Kalyanara G, & Phelan J (2013). The effect of direct to consumer advertising (DTCA) of prescription drugs on market share, sales, consumer welfare and health benefits. Academy of Health Care Management Journal, 9(1/2), 53.
  28. Kees J, Bone PF, Kozup J, & Ellen PS (2008). Barely or fairly balancing drug risks? Content and format effects in direct-to-consumer online prescription drug promotions. Psychology and Marketing, 25(7), 675–691.
  29. Kesselheim AS, & Avorn J (2012). The Food and Drug Administration has the legal basis to restrict promotion of flawed comparative effectiveness research. Health Affairs, 31(10), 2200–2205.
  30. Malaviya P, Kisielius J, & Sternthal B (1996). The effect of type of elaboration on advertisement processing and judgment. Journal of Marketing Research, 33, 410–421.
  31. Marois R, & Ivanoff J (2005). Capacity limits of information processing in the brain. Trends in Cognitive Sciences, 9(6), 296–305.
  32. Miniard PW, Barone MJ, Rose RL, & Manning KC (1994). A re-examination of the relative persuasiveness of comparative and noncomparative advertising. Advances in Consumer Research, 21(1), 299–303.
  33. Miniard PW, Rose RL, Barone MJ, & Manning KC (1993). On the need for relative measures when assessing comparative advertising effects. Journal of Advertising Research, 22, 41–57.
  34. Miniard PW, Rose RL, Manning KC, & Barone MJ (1998). Tracking the effects of comparative and noncomparative advertising with relative and nonrelative measures: A further examination of the framing correspondence hypothesis. Journal of Business Research, 41(2), 137–143.
  35. Mitra A, Swasy J, & Aikin K (2006). How do consumers interpret market leadership claims in direct-to-consumer advertising of prescription drugs? Advances in Consumer Research, 33, 381–387.
  36. Murphy J, & Amundsen M (1981). The communications-effectiveness of comparative advertising for a new brand on users of the dominant brand. Journal of Advertising Research, 10, 14–48.
  37. Pechmann C, & Ratneshwar S (1991). The use of comparative advertising for brand positioning: Association versus differentiation. Journal of Consumer Research, 18, 145–160.
  38. Pechmann C, & Stewart DW (1990). The effects of comparative advertising on attention, memory, and purchase intentions. Journal of Consumer Research, 17(2), 180–191.
  39. Petty RE, & Cacioppo JT (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123–192.
  40. Prasad VK (1976). Communications-effectiveness of comparative advertising: A laboratory analysis. Journal of Marketing Research, 13, 128–137.
  41. Priester JR, Godek J, Nayakankuppum DJ, & Park K (2004). Brand congruity and comparative advertising: When and why comparative advertisements lead to greater elaboration. Journal of Consumer Psychology, 14(1/2), 115–123.
  42. Rogers JC, & Williams TG (1989). Comparative advertising effectiveness: Practitioners’ perceptions versus academic research findings. Journal of Advertising Research, 29(5), 22–37.
  43. Rose RL, Miniard PW, Barone MJ, Manning KC, & Till BD (1993). When persuasion goes undetected: The case of comparative advertising. Journal of Marketing Research, 30(3), 315–330.
  44. Schwartz LM, Woloshin S, & Welch HG (2009). Using a drug facts box to communicate drug benefits and harms: Two randomized trials. Annals of Internal Medicine, 150(8), 516–527.
  45. Scott LM (1994). Images in advertising: The need for a theory of visual rhetoric. Journal of Consumer Research, 21(2), 252–273.
  46. Snyder R (1992). Comparative advertising and brand evaluation: Toward developing a categorization approach. Journal of Consumer Psychology, 1(1), 15–30.
  47. Xie G-X, & Boush DM (2011). How susceptible are consumers to deceptive advertising claims? A retrospective look at the experimental research literature. The Marketing Review, 11(3), 293–314.
