Author manuscript; available in PMC 2013 Oct 28.
Published in final edited form as: J Clin Psychopharmacol. 2009 Apr;29(2). doi: 10.1097/JCP.0b013e31819a9181

Study Design Affects Participant Expectations: A Survey

Bret Rutherford 1, Scott Rose 2, Joel Sneed 3, Steven Roose 4
PMCID: PMC3809916  NIHMSID: NIHMS511269  PMID: 19512982

Abstract

Introduction

Evidence suggests clinical trial participants have higher expectations of improvement when they know they are receiving active treatment vs. when they are aware they may receive placebo, but this has not been directly tested. The goal of this survey was to determine whether respondents report higher expectations of improvement in comparator vs. placebo-controlled clinical trials.

Method

A questionnaire describing two hypothetical clinical trials was distributed to undergraduates in an introductory psychology course. The questionnaire described one trial in which a medication was compared to placebo and another in which the same medication was compared to a second medication. Respondents rated their expectations of improvement should they participate in each trial, without knowing their specific treatment assignment. Questions measured the magnitude and likelihood of expected improvement on a 9-point Likert scale.

Results

Thirty-seven undergraduates (69% female; mean age 22.4±6.8 years) participated in the study. Respondents reported a significantly higher expected likelihood of improvement in a comparator trial compared to a placebo-controlled trial (7.2±2.1 vs. 5.3±1.6, t(36) = −4.96, p<.001). Similarly, they reported a significantly higher expected magnitude of improvement in a comparator trial compared to a placebo-controlled trial (7.2±1.9 vs. 4.9±1.4, t(35) = −6.74, p<.001).

Discussion

These results support the hypothesis that clinical trial design influences participant expectations of improvement. Study design may affect clinical outcomes and should be kept in mind when interpreting the results of antidepressant clinical trials.

Keywords: depression, expectancy, clinical trial design, placebo-controlled, comparator, antidepressant

INTRODUCTION

The placebo effect is a major component of the medication response observed in antidepressant clinical trials. A meta-analysis of 75 placebo-controlled antidepressant trials published between 1981 and 2000 found a mean medication response rate of 50%, compared to a mean placebo response rate of 30% (1). Another meta-analysis of published clinical trials of antidepressants reported that the placebo groups in these trials averaged 1.5 standard deviation units of improvement, which was 75% of the improvement shown in the antidepressant groups (2). Understanding the mechanism of the placebo effect is an important challenge for researchers studying depression, since high placebo response rates make it more difficult to detect a signal of efficacy for new antidepressants (3). Additionally, studying the placebo effect could lead to non-pharmacologic methods of optimizing antidepressant treatment and improving clinical outcome (4).

Patient expectations (their beliefs about how treatment will affect them) are hypothesized to be a major mechanism of the placebo effect (5). To date this hypothesis has not been directly tested in the treatment of depression, but circumstantial data from antidepressant clinical trials are consistent with this hypothesis. For example, higher patient expectations of improvement have been associated with greater likelihood of depression response and lower final depression scores in multiple clinical trials (6,7). Recent meta-analyses of antidepressant response rates in placebo-controlled (i.e., medication vs. placebo) vs. comparator (i.e., medication vs. medication) trials for depression found that the odds of responding to medication in comparator trials were significantly higher than the odds of responding in placebo-controlled trials for both adult and geriatric populations (8,9). Furthermore, in an analysis of 52 antidepressant trials, response to placebo was found to be higher in trials having more rather than fewer active treatment arms (i.e. with higher probability of the patient receiving active medication as opposed to placebo) (10). These studies suggest that antidepressant and placebo response rates are higher when patients believe they are receiving active treatment (i.e., have higher expectancy of therapeutic improvement) compared to when they are aware they may be receiving placebo.

One way to understand how clinical trial design may influence patient expectations is through the informed consent process, in which patients are told about the study design and the past effectiveness of the drugs to be used (11). The information patients receive about a drug modifies their expectations of its effects in a positive or negative direction (12). Studies have shown that placebos raised or lowered blood pressure and heart rate depending on patients’ knowledge of the effect of the active agent being studied (13). Patients given a muscle relaxant but told it was a stimulant had greater muscle tension than those told it was a relaxant (14), and patients inhaling a bronchoconstrictor but told it was a bronchodilator had less airway resistance and dyspnea than those told its true identity (15). These results indicate that subjects’ knowledge of the activity and presumed potency of the active drug influences their expectations of improvement, which in turn produce varied placebo responses (16).

While the studies reviewed suggest that antidepressant study design affects patient expectations and patient outcome, these hypotheses remain untested. The objective of this study was to determine whether study design affects participant expectations in a normal sample. Undergraduates were asked to complete a questionnaire comprising vignettes of two hypothetical research studies (one using a placebo-controlled design and one a comparator design), each followed by questions about their expectations of clinical improvement in that study. The main hypotheses were that ratings of expected improvement would be significantly higher for the comparator design than for the placebo-controlled design.

MATERIALS AND METHODS

Sample

Undergraduates enrolled in an introductory psychology course were invited to participate in this study, which was approved by the institutional review board at Queens College. Participants received course extra credit for their participation, in accordance with Queens College rules.

Measures

The questionnaire contained two vignettes describing hypothetical research studies. In the first vignette, a potential research participant has developed a rash and is invited to enroll in a placebo-controlled trial comparing an investigational rash treatment with pill placebo (Study I). In the second vignette, the potential research participant is instead offered enrollment in a comparator trial of the investigational (non-FDA-approved) rash treatment vs. an established (FDA-approved) rash treatment (Study II). Respondents were informed that the two treatments had similar side effects and apparent efficacy, though the established (FDA-approved) medication had been more extensively tested to date. The vignettes were constructed to simulate the epistemic conditions of actual placebo-controlled and comparator clinical trials.

The two critical dimensions of expectancy to be measured are the expected likelihood of therapeutic improvement and the expected magnitude of therapeutic improvement (17). Several rating scales for expectancy have been developed, but most have been used in only a small number of studies and have limited psychometric data available. A widely used scale that measures both of these dimensions of patient expectations is the Credibility and Expectancy Scale (CES) (18). Psychometric study of the CES has demonstrated that it yields two factors (credibility and expectancy) that are stable across different populations (19). It has been shown to have high internal consistency, with a Cronbach’s α of 0.79–0.90 for the expectancy factor, 0.81–0.86 for the credibility factor, and a standardized α of 0.84 for the CES composite score (19). Test-retest reliability over a one-week period was also found to be good, at 0.82 for expectancy and 0.75 for credibility (19). Versions of the CES have been used to measure treatment credibility and patient expectation in several psychotherapy and pharmacotherapy studies (20).
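
For reference, Cronbach’s α quantifies internal consistency. For a scale with k items, where \sigma^2_{Y_i} is the variance of item i and \sigma^2_X is the variance of the total scale score,

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_{X}}\right),

with values near 1 indicating that the items covary strongly and tap a common construct.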

A shortened form of the CES was used to measure respondent expectations in the current study, asking respondents the same two questions following each of the hypothetical research study vignettes: (1) How helpful do you think participating in Study (I or II) will be for your condition?; (2) How likely is it that you will experience significant improvement in your condition as a result of participating in Study (I or II)? Subjects rated their responses on a 9-point Likert scale ranging from “not at all helpful” to “very helpful” (Question 1) and from “not at all likely” to “very likely” (Question 2).

Procedure

After signing informed consent, respondents were told by the experimenter that there were two vignettes, each followed by two questions. Respondents then read instructions directing them to imagine that they were a subject enrolling in the hypothetical studies depicted in the readings, to consider what they would expect to happen as a result of their participation, and to limit their answers to the information contained in the survey.

Statistical Analysis

The primary hypotheses were tested by comparing the mean responses to questions 1 and 2 across the two hypothetical studies using paired-samples t tests. All significance tests were performed at the p<0.05 level.
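
As an illustration only, a minimal Python sketch of such a paired-samples t test using SciPy is shown below; the ratings are hypothetical values on the 9-point scale, not the study data.

    # Illustrative paired-samples t test (hypothetical ratings, not study data)
    from scipy import stats

    # Hypothetical 9-point Likert ratings from the same respondents
    study_i_ratings  = [5, 6, 4, 7, 5, 6, 5, 4]   # placebo-controlled vignette (Study I)
    study_ii_ratings = [7, 8, 6, 9, 7, 8, 6, 7]   # comparator vignette (Study II)

    # Paired-samples t test, evaluated against the p < 0.05 threshold
    t_stat, p_value = stats.ttest_rel(study_i_ratings, study_ii_ratings)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")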

RESULTS

Thirty-seven undergraduates (69% female), with a mean age of 22.4±6.8 years, participated in the study. Respondents reported a significantly higher expected likelihood of improvement in Study II compared to Study I (7.2±2.1 vs. 5.3±1.6, t(36) = −4.96, p<.001). Similarly, they reported a significantly higher expected magnitude of improvement in Study II compared to Study I (7.2±1.9 vs. 4.9±1.4, t(35) = −6.74, p<.001). These results indicate that respondents expected greater improvement in a comparator study design (in which they are assured of receiving medication) than in a placebo-controlled study design. The effect sizes for these comparisons were large (d = 1.0 for the question regarding expected likelihood of improvement and d = 1.4 for the question regarding expected magnitude of improvement).
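
The text does not state the formula used for d; assuming it is the mean difference divided by the pooled standard deviation of the two conditions, the reported values can be approximately reproduced from the summary statistics above, as in this hypothetical sketch.

    # Sketch of a standardized mean difference (Cohen's d) from reported means/SDs,
    # assuming d = (mean difference) / (pooled SD of the two conditions).
    import math

    def cohens_d(mean_a, sd_a, mean_b, sd_b):
        pooled_sd = math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)
        return (mean_b - mean_a) / pooled_sd

    print(round(cohens_d(5.3, 1.6, 7.2, 2.1), 1))  # expected likelihood -> 1.0
    print(round(cohens_d(4.9, 1.4, 7.2, 1.9), 1))  # expected magnitude  -> 1.4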

DISCUSSION

In this survey of participant expectations, respondents reported significantly higher expected likelihood and magnitude of clinical improvement in a comparator vs. a placebo-controlled study design. These experimental findings are consistent with recent meta-analyses of adult and geriatric antidepressant clinical trials that demonstrate higher response and remission rates to medication in comparator vs. placebo-controlled trials.

A primary significance of these results relates to the interpretation of clinical trials in which treatment cells may differ in expectancy. For example, the Treatment for Adolescents with Depression Study (TADS) randomized adolescents with Major Depression to CBT alone, fluoxetine alone, combined CBT and fluoxetine, and pill placebo (21). This study design is commonly used for comparing medication, psychotherapy, and combined treatments. However, the study compares blinded treatments (fluoxetine or pill placebo) to unblinded treatments (CBT and combined CBT and fluoxetine). In other words, patients in the CBT alone condition knew they were receiving psychotherapy (active treatment), while patients taking pills did not know whether they were receiving fluoxetine (active) or placebo (inactive). Similarly, patients in the combined cell knew they were receiving two active treatments rather than one (CBT alone) or possibly none (fluoxetine or pill placebo). The current results suggest that patients have higher expectations when they know they are receiving active treatment, which may significantly bias comparisons of unblinded treatments (i.e., open psychotherapy or combined treatment) to blinded treatment (i.e., placebo-controlled medication).

Data from this study may also inform what psychiatrists in clinical practice tell their patients to expect regarding their chances of improvement on a given medication. In discussing this question, a psychiatrist practicing evidence-based medicine is informed by research studies testing the proposed medication for depression. However, there are many types of studies to choose among when gathering evidence about the anticipated effectiveness of antidepressants (e.g., open, placebo-controlled, and comparator studies). Given that placebo is not administered in clinical practice, comparator trials and open studies may more closely approximate the clinical effectiveness of antidepressants and provide more accurate figures to reference when treating patients.

The results of this study need to be interpreted in the context of several limitations. First, this was a survey study of normal subjects, and the results cannot be directly generalized to depressed patients, whose illness itself may shape their expectancies about how treatment will affect them. Second, this survey study used hypothetical rash treatments rather than antidepressant agents, so again the results are not directly generalizable to antidepressant clinical trials. However, given that the majority of respondents likely had no current or past history of depression, it was judged better to use an illness that they may have had or could reasonably expect to experience at some point. Third, the two vignettes used in this study were not counterbalanced across study participants, and it is possible that subjects’ reactions to the first vignette influenced their responses to the second.

This survey study was intended to be a first step in assessing the impact of study design on participant expectations. The next step in investigating this issue is to randomize a single pool of subjects to placebo-controlled or comparator trial designs and prospectively measure expectancy and depression outcome. The current authors have such a clinical trial underway.

Contributor Information

Bret Rutherford, Department of Psychiatry, Columbia University, New York State Psychiatric Institute, New York, NY.

Scott Rose, Department of Psychology, Queens College, City University of New York, Queens, NY.

Joel Sneed, Department of Psychology, Queens College, City University of New York, Department of Psychiatry, Columbia University, New York State Psychiatric Institute, New York, NY.

Steven Roose, Department of Psychiatry, Columbia University, New York State Psychiatric Institute, New York, NY.

References

1. Walsh BT, Seidman SN, Sysko R, et al. Placebo Response in Studies of Major Depression: Variable, Substantial, and Growing. JAMA. 2002;287:1840–1847. doi: 10.1001/jama.287.14.1840.
2. Kirsch I, Sapirstein G. Listening to prozac but hearing placebo: A meta-analysis of antidepressant medication. Prev Treat. 1998. Posted at http://journals.apa.org/prevention/volumeI/pre0010002a.html.
3. Khan A, Detke M, Khan SRF, et al. Placebo Response and Antidepressant Clinical Trial Outcome. J Nerv Ment Dis. 2003;191:211–218. doi: 10.1097/01.NMD.0000061144.16176.38.
4. Andrews G. Placebo response in depression: bane of research, boon to therapy. Br J Psychiatry. 2001;178:192–194. doi: 10.1192/bjp.178.3.192.
5. Haour F. Mechanisms of the Placebo Effect and of Conditioning. Neuroimmunomodulation. 2005;12:195–200. doi: 10.1159/000085651.
6. Krell HV, Leuchter AF, Morgan M, et al. Subject Expectations of Treatment Effectiveness and Outcome of Treatment with an Experimental Antidepressant. J Clin Psychiatry. 2004;65:1174–1179. doi: 10.4088/jcp.v65n0904.
7. Meyer B, Pilkonis PA, Krupnick JL, et al. Treatment Expectancies, Patient Alliance, and Outcome: Further Analyses from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. J Consult Clin Psychol. 2002;70:1051–1055.
8. Rutherford BR, Sneed JR, Roose SP. Does Study Design Affect Outcome? The Effects of Placebo Control and Treatment Duration in Antidepressant Trials. Psychother Psychosom. In press. doi: 10.1159/000209348.
9. Sneed JR, Rutherford BR, Rindskopf D, et al. Design Makes a Difference: Antidepressant Response Rates in Placebo-controlled versus Comparator Trials in Late Life Depression. Am J Geriatr Psychiatry. 2008;16:65–73. doi: 10.1097/JGP.0b013e3181256b1d.
10. Khan A, Kolts RL, Thase ME, et al. Research Design Features and Patient Characteristics Associated with the Outcome of Antidepressant Clinical Trials. Am J Psychiatry. 2004;161:2045–2049. doi: 10.1176/appi.ajp.161.11.2045.
11. Swartzman LC, Burkell J. Expectations and the placebo effect in clinical drug trials: Why we should not turn a blind eye to unblinding, and other cautionary notes. Clin Pharmacol Ther. 1998;64:1–7. doi: 10.1016/S0009-9236(98)90016-9.
12. Barsky AJ, Saintfort R, Rogers MP, et al. Nonspecific Medication Side Effects and the Nocebo Phenomenon. JAMA. 2002;287:622–627. doi: 10.1001/jama.287.5.622.
13. Ross M, Olson JM. An Expectancy-Attribution Model of the Effects of Placebos. Psychol Rev. 1981;88:408–437.
14. Flaten MA, Simonsen T, Olsen H. Drug-related information generates placebo and nocebo responses that modify the drug response. Psychosom Med. 1999;61:250–255. doi: 10.1097/00006842-199903000-00018.
15. Luparello TJ, Leist N, Lourie CH, et al. The Interaction of Psychologic Stimuli and Pharmacologic Agents on Airway Reactivity in Asthmatic Subjects. Psychosom Med. 1970;32:509–513. doi: 10.1097/00006842-197009000-00009.
16. Benedetti F. How the Doctor’s Words Affect the Patient’s Brain. Eval Health Prof. 2002;25:369–386. doi: 10.1177/0163278702238051.
17. Kirsch I. Specifying Nonspecifics: Psychological Mechanisms of Placebo Effects. In: Harrington A, editor. The Placebo Effect: An Interdisciplinary Exploration. Cambridge, MA: Harvard University Press; 1997.
18. Borkovec TD, Nau SD. Credibility of Analogue Therapy Rationales. J Behav Ther Exp Psychiatry. 1972;3:257–260.
19. Devilly GJ, Borkovec TD. Psychometric properties of the credibility/expectancy questionnaire. J Behav Ther Exp Psychiatry. 2000;31:73–86. doi: 10.1016/s0005-7916(00)00012-4.
20. Borkovec TD, Costello E. Efficacy of Applied Relaxation and Cognitive-Behavioral Therapy in the Treatment of Generalized Anxiety Disorder. J Consult Clin Psychol. 1993;61:611–619. doi: 10.1037//0022-006x.61.4.611.
21. Treatment for Adolescents with Depression Study (TADS) Team. Fluoxetine, Cognitive-Behavioral Therapy, and Their Combination for Adolescents with Depression: Treatment for Adolescents with Depression Study (TADS) Randomized Controlled Trial. JAMA. 2004;292:807–820. doi: 10.1001/jama.292.7.807.
