Author manuscript; available in PMC: 2009 Dec 20.
Published in final edited form as: Adm Policy Ment Health. 2009 May 12;36(5):331–342. doi: 10.1007/s10488-009-0224-0

Preference in Random Assignment: Implications for the Interpretation of Randomized Trials

Cathaleene Macias 1, Paul B Gold 2, William A Hargreaves 3, Elliot Aronson 4, Leonard Bickman 5, Paul J Barreira 6, Danson R Jones 7, Charles F Rodican 8, William H Fisher 9
PMCID: PMC2796239  NIHMSID: NIHMS144451  PMID: 19434489

Abstract

Random assignment to a preferred experimental condition can increase service engagement and enhance outcomes, while assignment to a less-preferred condition can discourage service receipt and limit outcome attainment. We examined randomized trials for one prominent psychiatric rehabilitation intervention, supported employment, to gauge how often assignment preference might have complicated the interpretation of findings. Condition descriptions, and greater early attrition from services-as-usual comparison conditions, suggest that many study enrollees favored assignment to new rapid-job-placement supported employment, but no study took this possibility into account. Reviews of trials in other service fields are needed to determine whether this design problem is widespread.

Keywords: Research design, Program evaluation, Randomized controlled trial, Evidence-based practice, Supported employment


The validity of research in any field depends on the extent to which studies rule out alternative explanations for findings and provide meaningful explanations of how and why predicted outcomes were attained (e.g., Bickman 1987; Lewin 1943; Shadish et al. 2002; Trist and Sofer 1959). In mental health services research, participants’ expectations about the pros and cons of being randomly assigned to each experimental intervention can offer post hoc explanations for study findings that rival the explanations derived from study hypotheses. Unlike most drug studies, which can ‘blind’ participants to their condition assignment, studies that evaluate behavioral or psychosocial interventions typically tell each participant his or her experimental assignment soon after randomization. Being assigned to a non-preferred intervention can be disappointing, or even demoralizing (Shapiro et al. 2002), and can thereby reduce participants’ interest in services or motivation to pursue service goals (Cook and Campbell 1979; Shadish 2002). Conversely, if most participants randomly assigned to one experimental condition believe they are fortunate, that condition may have an unfair advantage in outcome comparisons.

Reasons for preferring assignment to a particular experimental condition can be idiosyncratic and diverse, but as long as each condition is assigned the same percentage of participants who are pleased or displeased with their condition assignment, then there will be no overall pattern of condition preferences that could explain differences in outcomes. The greater threat to a valid interpretation of findings occurs when most study enrollees share a general preference for random assignment to one particular condition. Greater preference for one experimental condition over another could stem from general impressions of relative service model effectiveness, or from information that is tangential, e.g., program location on a main bus route or in a safer area of town. Even if random assignment distributes service preferences in equal proportions across conditions, the less attractive experimental condition will receive a higher percentage of participants who are mismatched to their preference, and the more attractive condition will receive a higher percentage of participants matched to their preference. For example, if 60% of all study enrollees prefer condition A and 40% prefer condition B, then, with true equivalence across conditions, service A would have 60% pleased and 40% disappointed assignees, while service B would have 40% pleased and 60% disappointed assignees.
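The arithmetic of this 60/40 example can be made concrete with a small simulation. This is purely an illustrative sketch (the preference rates and sample size are hypothetical, not drawn from any study discussed here): randomization distributes *preferences* evenly across arms, yet the share of assignees *matched* to their preference still differs between arms.

```python
import random

random.seed(0)

# Hypothetical illustration of the 60/40 example: 60% of enrollees prefer
# condition A, 40% prefer condition B; assignment to A or B is 1:1 random.
N = 10_000
preferred = ['A' if random.random() < 0.60 else 'B' for _ in range(N)]
assigned = [random.choice('AB') for _ in range(N)]

def pleased_rate(arm):
    """Fraction of participants assigned to `arm` who preferred that arm."""
    in_arm = [p for p, a in zip(preferred, assigned) if a == arm]
    return sum(p == arm for p in in_arm) / len(in_arm)

print(f"pleased assignees in A: {pleased_rate('A'):.2f}")  # ~0.60
print(f"pleased assignees in B: {pleased_rate('B'):.2f}")  # ~0.40
```

Both arms receive the same 60/40 mix of preferences, so condition B necessarily starts with a larger share of disappointed assignees even though randomization worked perfectly.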

There is potential to engender a general preference for assignment to a particular experimental intervention whenever a study’s recruitment posters, information sheets, or consent documents depict one intervention as newer or seemingly better, even if no evidence yet supports a difference in intervention effectiveness. For instance, in a supported housing study, if a comparison condition is described as offering familiar ‘services-as-usual’ help with moving into supervised housing, participants might reasonably prefer assignment to a more innovative experimental intervention designed to help individuals find their own independent apartments.

Methodologists have proposed protocol adaptations to the typical randomized trial to measure and monitor the impact of participants’ intervention preferences on study enrollment and engagement in assigned experimental conditions (Braver and Smith 1996; Corrigan and Salzer 2003; Lambert and Wood 2000; Marcus 1997; Staines et al. 1999; TenHave et al. 2003). Nevertheless, few mental health service studies have adopted these design modifications, and even fewer have followed recommendations to measure, report, and, if necessary, statistically control for enrollees’ expressed preferences for assignment to a particular condition (Halpern 2002; King et al. 2005; Shapiro et al. 2002; Torgerson et al. 1996).

In this article, we begin by describing several ways that participants’ preferences for random assignment to a specific service intervention can complicate the interpretation of findings. We then review one field of services research to estimate the prevalence of some of these problems. Obstacles to a valid interpretation of findings include: (a) lower service engagement and/or greater early attrition from less-preferred conditions; (b) similarities among people who refuse or leave a non-preferred program and, hence, condition differences in the types of retained participants; even if all randomized participants do receive assigned services, those who preferred assignment to a certain condition may be unique in ways (e.g., functioning, motivation) that predict outcomes over and above the impact of services; (c) program designs that may ameliorate or intensify the effects of disappointment in service assignment; and (d) clashes between program characteristics (e.g., attendance requirements) and participants’ situational considerations (e.g., time constraints, residential location), such that participants assigned to a non-preferred condition may tend to encounter similar difficulties in attaining outcomes and may choose the same alternative activities. We now discuss each of these issues.

How Participants’ Service Preferences Can Influence Outcomes

Impact of Assignment Preference on Service Engagement and Retention

Research participants who are disappointed in their random assignment to a non-preferred experimental condition may refuse to participate, or else withdraw from assigned services or treatment early in the study (Hofmann et al. 1998; Kearney and Silverman 1998; Laengle et al. 2000; Macias et al. 2005; Shadish et al. 2000; Wahlbeck et al. 2001). If this occurs more often for one experimental condition than another, such differential early attrition can quickly transform a randomized controlled trial into a quasi-experiment (Corrigan and Salzer 2003; Essock et al. 2003; West and Sagarin 2000). Unless participants’ preferences for assignment to experimental interventions are measured prior to randomization, it will be impossible to distinguish the emotional impact on participants of being matched or mismatched to intervention preference from each intervention’s true ability to engage and retain its assigned participants. If participants who reject their service assignments tend to refuse research interviews, the least-preferred intervention may also have a disproportionately higher incidence of ‘false negatives’ (undetected positive outcomes), and this can further bias the interpretation of findings.

Researchers can statistically control for intervention preferences if these attitudes are measured prior to randomization and one intervention is not greatly preferred over another. Even if a study is unable to measure and statistically control participants’ pre-existing preferences for assignment to experimental conditions, statistically adjusting for differential attrition from assigned services can help to rule out disappointment or satisfaction with random assignment as an alternative explanation for findings. However, rather than statistically controlling (erasing) the impact of intervention preferences on service retention and outcomes, it may be far more informative to investigate whether preference in random assignment might have modified a program’s potential to engage and motivate participants (Sosin 2002). For instance, a statistically significant ‘program assignment-by-program preference’ interaction term in a regression analysis (Aguinis 2004; Aiken and West 1991) might reveal a demoralization effect (e.g., a combination of less effort, lower service satisfaction, poorer outcomes) for participants randomly assigned to a comparison condition that was not their preference. A more complex program-by-preference interaction analysis might reveal that an assertive program is better at engaging and retaining consumers who are disappointed in their service assignment, while a less assertive program, when it is able to hang onto its disappointed assignees, is better at helping them attain service goals (Delucchi and Bostrom 2004; Lachenbruch 2002). Ability to engage and retain participants is a prerequisite for effectiveness, but, in the same way that medication compliance is distinguished from medication efficacy in pharmaceutical trials, service retention should not be confused with the impact of services received (Little and Rubin 2000).
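The ‘program assignment-by-program preference’ interaction analysis described above can be sketched in a few lines. The data below are entirely simulated under an assumed demoralization effect (all variable names and effect sizes are hypothetical, not taken from any trial in this review); the point is only to show where the interaction term sits in the design matrix.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Hypothetical simulated data: 'assigned' = 1 if randomized to the focal
# program, 'preferred' = 1 if the participant preferred the focal program
# (measured pre-randomization). The simulated outcome rewards being
# matched to one's preference, mimicking a demoralization effect for
# mismatched assignees.
assigned = rng.integers(0, 2, n)
preferred = rng.integers(0, 2, n)
matched = (assigned == preferred).astype(float)
outcome = 0.5 * assigned + 1.0 * matched + rng.normal(0, 1, n)

# OLS with a program-assignment-by-program-preference interaction term.
X = np.column_stack([np.ones(n), assigned, preferred, assigned * preferred])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("interaction coefficient:", round(float(beta[3]), 2))
```

Because `matched` equals `1 - assigned - preferred + 2*assigned*preferred`, the fitted interaction coefficient recovers roughly twice the simulated matching effect; a significant coefficient of this kind is what would flag preference-dependent program effects in a real trial.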

Similarities Between People with the Same Preferences

Even if rates of early attrition are comparable across study conditions, experimental groups may differ in the types of people who decide to accept assigned services (Magidson 2000). If participants who reject a service intervention resemble one another in some way, then the intervention samples of service-active participants will likely differ on these same dimensions.

As yet, we know very little about the effectiveness of different types of community interventions for engaging various types of consumers (Cook 1999a, b; Mark et al. 1992), but mobile services and programs that provide assertive community outreach appear to have stronger engagement and retention, presumably because staff schedule and initiate most service contacts on a routine basis (McGrew et al. 2003). If these program characteristics match participants’ reasons for preferring one experimental condition over another, then a bias can exist whether or not intervention preference is balanced across conditions. For instance, consumers who are physically disabled, older, or agoraphobic may prefer home-based service delivery and are likely to be disappointed if assigned to a program that requires regular attendance. Greater retention of these more disabled individuals could put a mobile intervention at a disadvantage in a direct evaluation of service outcomes, like employment, that favor able-bodied, younger, or less anxious individuals. On the other hand, in rehabilitation fields like supported housing, education, or employment that depend strongly on consumer initiative and self-determination, higher functioning or better educated consumers may drop out of control conditions because they are not offered needed opportunities soon enough (Shadish 2002). This was evident in a recent study of supported housing (McHugo et al. 2004), which reported a higher proportion of ‘shelter or street’ homeless participants in the control condition relative to a focal supported housing condition, presumably because participants who were more familiar with local services (e.g., those temporarily homeless following eviction or hospital discharge) considered the control condition services inadequate and sought housing on their own outside the research project.

Service model descriptions and intervention theories suggest many interactions between program characteristics and participant preferences that could be tested as research hypotheses if proposed prior to data analysis. Unfortunately, such hypotheses are rarely formulated and tested.

It is also rare for a randomized trial to compare experimental interventions on sample characteristics at any point later than baseline, after every participant has had an opportunity to accept or reject his or her experimental assignment. Such comparisons would allow sample differences that emerge early in the project to be statistically controlled in outcome analyses.

Interaction Between Responses to Service Assignment and Service Characteristics

A more subtle threat to research validity exists whenever participants disappointed in their intervention assignment do not drop out of services, but instead remain half-heartedly engaged (Corrigan and Salzer 2003). Participants randomized to a preferred intervention are likely to be pleased and enthusiastic, ready to engage with service providers, while those randomized to a non-preferred intervention are more likely to be disappointed and less motivated to succeed. However, the strength of participant satisfaction or disappointment in service assignment can vary greatly depending on service program characteristics (Brown et al. 2002; Calsyn et al. 2000; Grilo et al. 1998; Macias et al. 2005; Meyer et al. 2002). For instance, in a randomized comparison of assertive community treatment (PACT) to a certified clubhouse (Macias et al. 2009), we found that being randomly assigned to the less preferred program decreased service engagement more often in the clubhouse condition than in PACT. However, clubhouse members who had not wanted this service assignment, but nevertheless used clubhouse services to find a job, ended up employed longer and were more satisfied with services than other study enrollees. Study hypotheses based on program differences in staff assertiveness (PACT) and consumer self-determination (clubhouse) predicted this rare three-way interaction prior to data collection, and offer a theory-based (dissonance theory; Aronson 1999; Festinger 1957) explanation of the complex finding. Presumably, clubhouse members not wanting assignment to this service needed to rationalize their voluntary participation in a non-preferred program by viewing the clubhouse as a means-to-an-end. They tried harder than usual to get a job and stay employed, and gave the clubhouse some credit for their personal success. 
By contrast, PACT participants who had not wanted this service assignment could credit assertive program staff for keeping them involved, so they experienced less cognitive dissonance and had less need to justify their continued receipt of a non-preferred service. Whether being assigned to a non-preferred program turns out to have negative or positive consequences can depend on a complex interplay between participant motivation and program characteristics. The generation of useful hypotheses for any mental health service trial depends on thoughtful reflection on experimental program differences, as well as familiarity with research in disciplines that study human motivation, such as psychiatry, social psychology, and advertising (Krause and Howard 2003).

Alternative Outcomes Related to Service Preferences

If participants who prefer a certain service condition share similar characteristics, they may also share similar life circumstances and make similar life choices. Individuals who have the same personal responsibilities or physical limitations may prefer not to be assigned to a particular intervention because they cannot fully comply with the requirements for participation, even if they try to do so. For instance, some research participants may have difficulty with regular program attendance because they have competing time commitments, such as caring for an infant or seriously ill relative, or attending school to pursue work credentials (Collins et al. 2000; Mowbray et al. 1999; Wolf et al. 2001). These productive alternative activities could also compete with the research study’s targeted outcomes, and be misinterpreted as outcome ‘failures.’ For instance, in supported employment trials, unemployment should not be considered a negative outcome if the individual is attending college or pursuing job-related training, or if she has chosen to opt out of the job market for a while to take care of small children or an ill or handicapped relative. These alternative pursuits will be coded simply as ‘unemployed,’ and interpreted as program failure, unless they are tracked and reported as explanations for why work was not obtained. For this reason, it is important to examine relationships between participant circumstances and service preferences at the outset of a study to identify what additional life events and occupations might need to be documented to fully explain intervention outcome differences.

Scope of the Assignment Preference Problem

Regardless of the reason for research participant preference in random assignment, condition differences in service attractiveness can be statistically controlled if (a) preference is measured prior to randomization and (b) there is sufficient variability in preferences so that the vast majority of study enrollees do not prefer the same intervention. Unfortunately, most randomized service trials have neither measured pre-randomization service preference nor taken it into account when comparing intervention outcomes. Therefore, it is important to assess whether undetected participant preference in random assignment might have existed in published randomized trials, and, if so, whether it might have compromised the interpretation of findings.

As a case example, we review the empirical support for one evidence-based practice, supported employment for adults with severe mental illness, to obtain a qualitative estimate of the extent to which unmeasured service preference for a focal intervention might offer an alternative explanation for published findings. Supported employment offers an ideal starting point for our inquiry given its extensive body of research, which includes a $20 million multi-site randomized study (EIDP, Cook et al. 2002), and consensus among psychiatric rehabilitation stakeholders that supported employment is an evidence-based practice ready for dissemination and implementation (Bond et al. 2001). Consumer receptivity and participation in supported employment have been studied in depth through ethnographies (Alverson et al. 1998; Alverson et al. 1995; Quimby et al. 2001), structured interviews (McQuilken et al. 2003; Secker et al. 2002), and personal essays (Honey 2000), and these publications suggest that most consumers know what they need and should expect from a quality vocational program. For this reason, consumer service preferences should be a salient consideration in the design of supported employment research.

Sample of Randomized Trials of Supported Employment

The evidence base for supported employment effectiveness consists of a series of randomized controlled studies of the Individual Placement and Support (IPS) service model (Bond et al. 1997, 2001). One reason research on supported employment has been restricted to a single service delivery model is the ready availability of standardized IPS training and fidelity measures (Bond et al. 2002; McGrew and Griss 2005). As a result of a substantial body of research evidence that IPS produces good employment outcomes, this service model has become synonymous with ‘supported employment’ in much of the psychiatric rehabilitation literature (Bond et al. 1997; Crowther et al. 2001; Drake et al. 2003), and many state departments of mental health in the United States now endorse a definition of supported employment as Individual Placement and Support (IPS).

Table 1 presents a recently published list of all randomized controlled trials of supported employment programs recognized as having high fidelity to Individual Placement and Support (IPS) by the designers of this service delivery model (Bond et al. 2008). Every study has the IPS model as its focal intervention, and IPS experts provided staff training and verified the fidelity of each focal intervention using a supported employment (IPS) fidelity scale (Bond et al. 2008). Research study eligibility was generally limited to unemployed individuals with severe mental illness who had an expressed interest in finding a mainstream job. Most study samples had a mean age of about 40, except that the Twamley et al. (2008) sample was older (M = 50 years) and the Killackey et al. (2008) sample was younger (M = 21 years). Except for the study by Lehman et al. (2002), all studies discouraged enrollment of participants who had major physical limitations or substance use problems.

Table 1.

Randomized trials of high fidelity IPS supported employment: indicators of possible participant preference in condition assignment

For each study: location; comparison condition(s) with published descriptions; vocational service retention (a); and research study retention (b). E = IPS (experimental) condition; C = comparison condition.

Drake et al. 1996 (New Hampshire, USA)
  Comparison condition: job skills training; Boston ‘choose-get-keep’ model, ‘pre-employment skills training in a group format’
  Voc service retention (2 months): E 100%, C 62% √
  Research study retention (18 months): E 99%, C 97%

Drake et al. 1999 (Washington, DC, USA)
  Comparison condition: sheltered workshop; ‘several well-established agencies,’ ‘primarily paid work adjustment training in a sheltered workshop’
  Voc service retention (2 months): E 95%, C 84%
  Research study retention (18 months): 99% total sample

Lehman et al. 2002 (Maryland, USA)
  Comparison condition: psychosocial rehabilitation program; ‘in-house skill training, sheltered work, factory enclaves,’ ‘socialization, education, housing’
  Voc service retention (any voc service): E 93%, C 33% √
  Research study retention (24 months): E 74%, C 60%

Mueser et al. 2004 (Connecticut, USA)
  Comparison conditions (multiple sites): 1. brokered SE, ‘standard vocational services’; 2. psychosocial rehabilitation, typical ‘PSR center’ providing ‘social, recreational, educational, & vocational’ services, e.g., skills training, program-owned jobs
  Voc service retention (a few weeks): E 90%, C 50% √
  Research study retention (24 months): E 96%, C 98%

Gold et al. 2006 (South Carolina, USA)
  Comparison condition: sheltered workshop; ‘traditional vocational rehabilitation,’ ‘staff-supervised set-aside jobs’
  Voc service retention (6 months): E 86%, C 83%
  Research study retention (24 months): E 82%, C 70%

Latimer et al. 2006 (Quebec, Canada)
  Comparison condition: traditional vocational services; ‘sheltered workshop, creative workshops, client-run boutique and horticulture,’ ‘job-finding skills training,’ government-sponsored set-aside jobs
  Voc service retention (6 months): E 91%, C 30% √
  Research study retention (12 months): E 79%, C 89%

Bond et al. 2007 (Indiana, USA)
  Comparison condition: ‘diversified placement’ at Thresholds, Inc.; ‘existing Thresholds services,’ ‘prevocational work crews,’ ‘groups,’ temporary set-aside work
  Voc service retention (6 months): E 82%, C 65% √
  Research study retention (24 months): 97% total sample

Burns et al. 2007 (six nations, Europe)
  Comparison condition: traditional, ‘typical and dominant’ voc rehab service; daily ‘structured training combating deficits,’ ‘time structuring,’ and computer skills, usually provided in a ‘day centre’
  Voc service retention (any voc service): E 100%, C 76% √
  Research study retention (18 months): E 100%, C 100%

Wong et al. 2008 (Hong Kong, China)
  Comparison condition: stepwise conventional voc services; ‘Occupational Therapy Department of local hospital,’ ‘work groups in a simulated environment’
  Voc service retention (18 months): E 100%, C 100%
  Research study retention (18 months): E 100%, C 98%

Twamley et al. 2008 (California, USA)
  Comparison condition: conventional voc rehab referrals; Dept of Rehab referral to ‘job readiness coaching’ and ‘prevocational classes’
  Voc service retention (any voc service): E 100%, C 41% √
  Research study retention (12 months): E 79%, C 77%

Killackey et al. 2008 (Victoria, Australia)
  Comparison condition: traditional vocational services; ‘treatment-as-usual’ referral to voc agency with ‘vocationally oriented group programme’
  Voc service retention (6 months): E 95%, C 76% √
  Research study retention (6 months): E 100%, C 100%
As reported in the IPS review article by Bond et al. (2008), or in the original study publications.

a Checkmarks (√) indicate differential attrition, defined as a difference of 15 percentage points or more in service retention favoring IPS over the control condition.

b Percentage of study participants who had employment data.

Possible Indicators of Differential Service Preference

Table 1 lists verbatim service descriptions of the comparison condition in each of the eleven original study publications, along with condition labels derived from the Bond et al. (2008) review of these same randomized trials. Although we do not know the language used to describe the service interventions in recruitment flyers or induction meetings, we assumed there was a strong possibility that most study enrollees would prefer assignment to a new IPS program whenever the comparison condition was an existing sheltered workshop, traditional psychosocial rehabilitation program, or conventional vocational rehabilitation that had been routinely offered by the state or a local authority over several previous years. Since all study enrollees had an expressed interest in obtaining competitive work, we also assumed the possibility of greater preference for IPS if the comparison condition were designed primarily to provide non-competitive jobs, or if program activities delayed entry into competitive employment. Most studies (8 of 11) reported mandatory attendance of all study applicants at one or more research project induction groups in which the experimental conditions were described and questions answered (Drake et al. 1994).

Next, we documented whether each study reported greater early service attrition, or lower service engagement, for its comparison condition. We report the percentage of study enrollees who were ever active in assigned services at the earliest post-randomization point reported in the original publication or in the summary review article by Bond et al. (2008). We chose the earliest report period so that it would be reasonable to attribute low service contact to disappointment in service assignment. Early service attrition can also be attributed to service ineffectiveness (e.g., poor outreach, slow development of staff-client relationships, or lack of immediate efforts to help participants get a job), so we assume that lower engagement in comparison services is a probable, but not conclusive, indication that a comparison condition was generally less appealing than IPS. Our assumption that disappointment in service assignment is a reasonable explanation for early service attrition is based on a demonstrated temporal relationship between random assignment to a non-preferred intervention and subsequently low rates of service engagement within two very different supported employment interventions that had comparable employment rates (Macias et al. 2005).

We also provide research study retention rates for each condition at the latest measurement point as a check on the possibility that loss of participants from services was attributable to the same causes that prevented participation in research interviews and/or the tracking of study outcomes. If research study retention rates at a later point in time are as good or better than service intervention retention rates at an earlier time point, we will assume that factors that typically restrict or enhance research study participation (e.g., program differences in outcome tracking, deaths, hospitalizations, residential mobility) do not account for early differential attrition from experimental and control conditions.

We will consider a study to be at high risk for misinterpretation of findings if the condition labels or descriptions were less favorable for the comparison condition(s), and if there is greater early attrition from comparison services in spite of high research retention.

Review Findings

Descriptions of Comparison Conditions

The comparison condition for every study listed in Table 1 was a pre-existing conventional or traditional vocational rehabilitation service that would have been familiar to many participants and did not prioritize rapid placement into mainstream employment. By contrast, each IPS program was a new intervention introduced to the local service system through the research project that was designed to offer fast entry into mainstream work. Although no study recorded participants’ service assignment preference prior to research enrollment or randomization, we might reasonably assume that, in some studies, satisfaction with service assignment to IPS, or disappointment in assignment to the non-supported employment comparison condition, contributed to differences in mainstream employment rates between experimental conditions.

Differential Early Attrition/Retention

Six of the eleven studies reported a 20% or greater advantage in service retention for the focal IPS intervention within the first 8 weeks following randomization. Two other studies that assessed service retention at the 6-month point in the research project reported differences of 17% and 19% in favor of IPS. Only the South Carolina and Hong Kong studies (Gold et al. 2006; Wong et al. 2008) reported comparably high rates of service retention across experimental interventions, possibly because both studies required all participants to be active in a local mental health program at the time of research study enrollment.
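The differential-attrition criterion from Table 1 can be applied mechanically to the published retention figures. The snippet below simply transcribes the early service retention percentages for the IPS (E) and comparison (C) arms from Table 1 and flags gaps of 15 percentage points or more; it is a bookkeeping check, not a reanalysis of any study's data.

```python
# Early vocational service retention (%) for IPS (E) vs. comparison (C),
# transcribed from Table 1 of this review.
retention = {
    "Drake et al. 1996":     (100, 62),
    "Drake et al. 1999":     (95, 84),
    "Lehman et al. 2002":    (93, 33),
    "Mueser et al. 2004":    (90, 50),
    "Gold et al. 2006":      (86, 83),
    "Latimer et al. 2006":   (91, 30),
    "Bond et al. 2007":      (82, 65),
    "Burns et al. 2007":     (100, 76),
    "Wong et al. 2008":      (100, 100),
    "Twamley et al. 2008":   (100, 41),
    "Killackey et al. 2008": (95, 76),
}

# Flag studies meeting the 15-point differential-attrition criterion.
flagged = [study for study, (e, c) in retention.items() if e - c >= 15]
print(f"{len(flagged)} of {len(retention)} studies show a gap of 15 points or more")
# → 8 of 11 studies show a gap of 15 points or more
```

The three unflagged studies (Drake et al. 1999; Gold et al. 2006; Wong et al. 2008) are exactly those with comparable retention across arms, consistent with the count of eight differentially attriting trials discussed below.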

Overall, the majority of participants remained active in each research study for the duration of the trial, with comparable research retention across study conditions. This comparability suggests that factors known to increase research attrition (e.g., residential mobility, chronic illness) cannot explain early differential attrition from services.

IPS interventions may have had better service retention rates in eight of these eleven randomized trials because IPS had more assertive outreach, provided more useful services, or IPS staff collaborated more closely with clinicians than staff in the comparison conditions (Bond et al. 2008; Gold et al. 2006; McGurk et al. 2007). However, greater intensity or quality of IPS services cannot reasonably account for the very low service retention rates for most comparison conditions relative to research project retention, so disappointment in assignment remains a credible additional explanation for greater early attrition from comparison services.

Only the South Carolina study statistically controlled for variation in participant exposure to vocational services, which might be considered a proxy for the effects of differential attrition attributable to service preference. No study reported whether early attrition resulted in the loss of different types of people from each study condition, and every study compared study conditions on participant characteristics only at baseline.

Discussion

Our review of research in one dominant field of adult psychiatric rehabilitation reveals that every randomized controlled trial of high-fidelity supported employment had a ‘services-as-usual’ comparison condition that might have predisposed work-interested participants to prefer random assignment to the new ‘rapid-job-placement’ IPS intervention. We cannot be certain that IPS was preferred by most participants over comparison conditions in any of these studies because no study measured participants’ pre-randomization service preferences or satisfaction with condition assignment. However, neither does any original publication offer evidence that would refute our assumption of greater preference for IPS. Eight of these 11 studies reported 15% or greater service attrition from the comparison condition early in the project that could reflect disappointment in service assignment, but no study reporting early differential attrition statistically controlled for exposure to services, examined how attrition might have changed sample characteristics, or distinguished between service retention and outcome attainment in data analyses.

We cannot know whether the findings of any of these eleven studies would have differed if service preference, service receipt, or the effects of early attrition on sample characteristics had been measured and, given sufficient variability in these measures, statistically controlled. Moreover, design factors other than the program descriptions provided in study advertisements, research induction sessions, or consent documents might have engendered a general preference for assignment to IPS. For instance, in the Bond et al. (2007) study, IPS services were located at the same health center that provided most participants’ clinical care, while comparison services were off-site, so condition differences in service convenience could also explain the better retention rates and outcomes for IPS. Regardless, the published labels and descriptions of comparison interventions presented in Table 1, together with early condition differences in service retention rates, suggest that outcome differences consistently favoring IPS might be partially explained by corresponding differences in participant expectations about services and, ultimately, satisfaction or disappointment in service assignment. If the same research design problems are prevalent in other fields of mental health services research, we need to consider what widespread impact these alternative explanations may have had on the interpretation of research findings.

Variability in Impact of Participant Preferences on Outcomes

Unmeasured participant preference in random assignment may not pose the same threat in other service trials, even if informed consent procedures are similar to those used in these supported employment trials, and even if service descriptions inadvertently induce a general preference for one intervention over another. The direct impact of service preference on outcomes may depend a great deal on whether the primary study outcome is measured subjectively or objectively, and on the type of intervention under evaluation, including its frequency, intensity, or duration (Torgerson and Moffett 2005). Moreover, if study outcomes do not depend on participant attitudes or motivation, then disappointment in service assignment may have no impact on outcomes at all.

A mismatch to service preference is likely to have the strongest impact on study outcomes whenever participants are expected to improve their own lives in observable ways that demand strong commitment and self-determination, as is the case for supported employment. By contrast, the impact of a mismatch to service preference is probably least discernible when participation is passive or condition assignment remains unknown, as is the case in most drug and medical treatment trials (King et al. 2005; Leykin et al. 2007). Whether disappointment in service assignment reduces or enhances outcomes may also depend on prevailing attitudes toward cooperation with service professionals (Nichols and Maner 2008) or perceived pressure to participate from program staff (Macias et al. 2009). However, the impact of service preference on outcomes should almost always be strong when the preference rests on expectations of relative efficacy: even medication trials have shown better outcomes when participants believe a drug will be efficacious (Krell et al. 2004), and worse outcomes when they suspect a drug is a placebo (Sneed et al. 2008).

Research reviews are needed to estimate the potential impact of unmeasured service preference in other service fields, and to identify moderating variables that deserve further study. Until the relative threat of participant service preference can be determined for a specific field, pre-randomization service preference should be measured routinely in every randomized controlled trial; if there is sufficient variability in measured preference, condition differences in preference should be statistically controlled, and tests of interaction effects conducted to identify moderating variables. Examples of statistical control for service preference in logistic regression and event history analysis can be found in reports on a randomized trial that compared two supported employment interventions (Macias et al. 2005; Macias et al. 2006). A third publication from this same trial illustrates a theory-driven test of moderation effects (Macias et al. 2009). However, whenever one experimental condition is greatly preferred over another, no statistical remedy will allow an unbiased comparison of outcomes.
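As a minimal sketch of this kind of analysis, the following fits a logistic regression of service retention on condition assignment, a pre-randomization preference-match indicator, and their interaction (the interaction coefficient is the moderation test). All data are simulated and all variable names are hypothetical; a real trial would use a standard statistics package rather than this hand-rolled Newton-Raphson fit:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Simulated trial: condition (0 = comparison, 1 = IPS) and whether the
# participant was randomly assigned his or her preferred service.
condition = rng.integers(0, 2, n)
pref_match = rng.integers(0, 2, n)

# Generate retention so that preference match truly improves it and
# moderates the condition effect (true betas: -0.5, 0.4, 1.0, 0.5).
true_logit = (-0.5 + 0.4 * condition + 1.0 * pref_match
              + 0.5 * condition * pref_match)
retained = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Design matrix: intercept, condition, preference match, interaction.
X = np.column_stack([np.ones(n), condition, pref_match,
                     condition * pref_match])
y = retained.astype(float)

# Newton-Raphson (IRLS) fit of the logistic model.
beta = np.zeros(4)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    gradient = X.T @ (y - p)
    hessian = X.T @ (X * (p * (1.0 - p))[:, None])
    beta = beta + np.linalg.solve(hessian, gradient)

# Estimates should land near the true coefficients, within sampling error.
print(np.round(beta, 2))
```

A nonzero interaction coefficient would indicate that the effect of condition assignment on retention depends on whether the assignment matched the participant’s preference, which is exactly the moderation the text recommends testing.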

New Directions for Employment Research

The body of research on supported employment (SE) offers compelling evidence that most adults with severe mental illness do not find prevocational training or standard vocational rehabilitation attractive routes to mainstream employment (Cook 1999a, b; Noble et al. 1997). It may be time to relinquish ‘SE vs. no SE’ research designs that evoke preference for assignment to SE and move on to compare different ways of delivering the same high quality SE job-hunting services and on-the-job supports (Bickman 2002; Lavori 2000). Comparisons of alternative modalities of the same service typically provide less striking, statistically weaker contrasts in outcomes, but they preserve the ethical principle of equipoise and help to ensure that all participants receive adequate care and comparable opportunities for life improvement (Lavori et al. 2001; Lilford and Jackson 1995; Schwartz and Sprangers 1999).

We would learn more about why supported employment is effective, and what aspects of SE are most attractive to prospective research participants, if studies provided more detailed descriptions of service implementation, so that the same key concepts (e.g., rapid job placement, service integration, frequent contact) could be compared across separate studies and in meta-analyses (Campbell and Fiske 1959; Sechrest et al. 2000; TenHave et al. 2003). Such studies would also help to justify specificity in fidelity measurement during dissemination and implementation of evidence-based practices (Rosenheck 2001; West et al. 2002). It would be especially advantageous to compare ways to increase access to mainstream work in specific service environments, since the heterogeneity in IPS employment rates, internationally and across the USA, suggests that social, political, economic, and organizational factors are far stronger predictors of work attainment for individuals with disabilities than is receipt of employment services, or even disability itself.

Conclusions

The randomized controlled trial is still the gold standard of research designs (Cook 1999a, b), and randomization greatly strengthens causal inference (Abelson 1995; Beveridge 1950). However, cause-effect inference depends on the measurement of all plausibly potent causal factors, including study participants’ attitudes toward their assigned interventions. Ironically, while consumer advocates champion the individual’s right to choose services, researchers rarely examine the contribution of consumer self-direction to outcomes considered indicative of service effectiveness. It may well be a legitimate responsibility of institutional review boards to assess the potential impact of study designs and research enrollment documents on participants’ preferences in random assignment and, hence, their eventual well-being as research participants and role in determining study outcomes (Adair et al. 1983).

Our review of one prominent field of mental health services research suggests a general need to reexamine published randomized controlled trials to gauge the extent to which research protocols or descriptions of experimental conditions might have predisposed participants to prefer assignment to one particular condition over another, and whether participant responses to these research design elements might have moderated, or even mediated, service effectiveness.

Acknowledgments

Work on this article was funded by National Institute of Mental Health grants to the first and second authors (MH62628; MH01903). We are indebted to Ann Hohmann, Ph.D. for her supportive monitoring of the NIMH research grant that fostered this interdisciplinary collaboration, and to anonymous reviewers who offered invaluable insights during manuscript preparation.

Contributor Information

Cathaleene Macias, Community Intervention Research, McLean Hospital, Belmont, MA 02478, USA, cmacias@mclean.harvard.edu.

Paul B. Gold, Department of Counseling and Personnel Services, University of Maryland, College Park, MD 20742, USA, pgold@umd.edu

William A. Hargreaves, Department of Psychiatry, University of California, San Francisco, CA, USA, billharg@comcast.net

Elliot Aronson, Department of Psychology, University of California, Santa Cruz, CA, USA, elliot@CATS.ucsc.edu.

Leonard Bickman, Center for Evaluation and Program Improvement, Vanderbilt University, Nashville, TN, USA, leonard.bickman@vanderbilt.edu.

Paul J. Barreira, Harvard University Health Services, Harvard University, Boston, MA, USA, pbarreira@uhs.harvard.edu

Danson R. Jones, Institutional Research, Wharton County Junior College, Wharton, TX 77488, USA, jonesd@wcjc.edu

Charles F. Rodican, Community Intervention Research, McLean Hospital, Belmont, MA 02478, USA, crodican@mclean.harvard.edu

William H. Fisher, Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA, bill.fisher@umassmed.edu

References

1. Abelson RP. Statistics as principled argument. Hillsdale, NJ: Lawrence Erlbaum; 1995.
2. Adair JG, Lindsay RCL, Carlopio J. Social artifact research and ethical regulation: Their impact on the teaching of experimental methods. Teaching of Psychology. 1983;10:159–162. doi: 10.1207/s15328023top1003_10.
3. Aguinis H. Regression analysis for categorical moderators. New York: Guilford Press; 2004.
4. Aiken LS, West SG. Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage; 1991.
5. Alverson H, Alverson M, Drake RE, Becker DR. Social correlates of competitive employment among people with severe mental illness. Psychosocial Rehabilitation Journal. 1998;22(1):34–40.
6. Alverson M, Becker DR, Drake RE. An ethnographic study of coping strategies used by people with severe mental illness participating in supported employment. Psychosocial Rehabilitation Journal. 1995;18(4):115–127.
7. Aronson E. The power of self-persuasion. The American Psychologist. 1999;54(11):873–875. doi: 10.1037/h0088188.
8. Beveridge WIB. The art of scientific investigation. New York: Vintage Books; 1950.
9. Bickman L. The functions of program theory. In: Bickman L, editor. Using program theory in evaluation. San Francisco: Jossey-Bass; 1987.
10. Bickman L. The death of treatment as usual: An excellent first step on a long road. Clinical Psychology: Science and Practice. 2002;9(2):195–199. doi: 10.1093/clipsy/9.2.195.
11. Bond GR, Becker DR, Drake RE, Rapp C, Meisler N, Lehman AF. Implementing supported employment as an evidence-based practice. Psychiatric Services. 2001a;52(3):313–322. doi: 10.1176/appi.ps.52.3.313.
12. Bond GR, Becker DR, Drake RE, Vogler K. A fidelity scale for the individual placement and support model of supported employment. Rehabilitation Counseling Bulletin. 1997a;40(4):265–284.
13. Bond GR, Campbell K, Evans LJ, Gervey R, Pascaris A, Tice S, et al. A scale to measure quality of supported employment for persons with severe mental illness. Journal of Vocational Rehabilitation. 2002;17(4):239–250.
14. Bond GR, Drake R, Becker D. An update on randomized controlled trials of evidence-based supported employment. Psychiatric Rehabilitation Journal. 2008a;31(4):280–290. doi: 10.2975/31.4.2008.280.290.
15. Bond GR, Drake RE, Mueser KT, Becker DR. An update on supported employment for people with severe mental illness. Psychiatric Services. 1997b;48(3):335–346. doi: 10.1176/ps.48.3.335.
16. Bond GR, McHugo GH, Becker D, Rapp CA, Whitley R. Fidelity of supported employment: Lessons learned from the national evidence-based practice project. Psychiatric Rehabilitation Journal. 2008b;31(4):300–305. doi: 10.2975/31.4.2008.300.305.
17. Bond GR, Salyers MP, Roudebush RL, Dincin J, Drake RE, Becker DR, et al. A randomized controlled trial comparing two vocational models for persons with severe mental illness. Journal of Consulting and Clinical Psychology. 2007;75(6):968–982. doi: 10.1037/0022-006X.75.6.968.
18. Bond GR, Vogler K, Resnick SG, Evans L, Drake R, Becker D. Dimensions of supported employment: Factor structure of the IPS fidelity scale. Journal of Mental Health. 2001b;10(4):383–393. doi: 10.1080/09638230120041146.
19. Braver SL, Smith MC. Maximizing both external and internal validity in longitudinal true experiments with voluntary treatments: The ‘combined modified’ design. Evaluation and Program Planning. 1996;19:287–300. doi: 10.1016/S0149-7189(96)00029-8.
20. Brown TG, Seraganian P, Tremblay J, Annis H. Matching substance abuse aftercare treatments to client characteristics. Addictive Behaviors. 2002;27:585–604. doi: 10.1016/S0306-4603(01)00195-2.
21. Burns T, Catty J, Becker T, Drake RE, Fioritti A, Knapp M, et al. The effectiveness of supported employment for people with severe mental illness: A randomised controlled trial. Lancet. 2007;370:1146–1152. doi: 10.1016/S0140-6736(07)61516-5.
22. Calsyn R, Winter J, Morse G. Do consumers who have a choice of treatment have better outcomes? Community Mental Health Journal. 2000;36(2):149–160. doi: 10.1023/A:1001890210218.
23. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin. 1959;56:81–105. doi: 10.1037/h0046016.
24. Collins ME, Mowbray C, Bybee D. Characteristics predicting successful outcomes of participants with severe mental illness in supported education. Psychiatric Services. 2000;51(6):774–780. doi: 10.1176/appi.ps.51.6.774.
25. Cook JA. Understanding the failure of vocational rehabilitation: What do we need to know and how can we learn it? Journal of Disability Policy Studies. 1999a;10(1):127–132.
26. Cook TD. Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation in American schools of education. Paper presented at the Harvard Faculty Seminar on Experiments in Education; Cambridge, MA. 1999b.
27. Cook TD, Campbell DT. Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin; 1979.
28. Cook JA, Carey MA, Razzano L, Burke J, Blyler CR. The pioneer: The employment intervention demonstration program. New Directions for Evaluation. 2002;94:31–44. doi: 10.1002/ev.49.
29. Corrigan PW, Salzer MS. The conflict between random assignment and treatment preference: Implications for internal validity. Evaluation and Program Planning. 2003;26:109–121. doi: 10.1016/S0149-7189(03)00014-4.
30. Crowther RE, Marshall M, Bond GR, Huxley P. Helping people with severe mental illness to obtain work: Systematic review. BMJ: British Medical Journal. 2001;322(7280):204–208. doi: 10.1136/bmj.322.7280.204.
31. Delucchi KL, Bostrom A. Methods for analysis of skewed data distributions in psychiatric clinical studies: Working with many zero values. The American Journal of Psychiatry. 2004;161(7):1159–1168. doi: 10.1176/appi.ajp.161.7.1159.
32. Drake RE, Becker DR, Anthony WA. A research induction group for clients entering a mental health services research project. Hospital & Community Psychiatry. 1994;45(5):487–489. doi: 10.1176/ps.45.5.487.
33. Drake RE, Becker D, Bond GR. Recent research on vocational rehabilitation for persons with severe mental illness. Current Opinion in Psychiatry. 2003;16:451–455. doi: 10.1097/00001504-200307000-00012.
34. Drake RE, McHugo GJ, Bebout RR, Becker DR, Harris M, Bond GR, et al. A randomized clinical trial of supported employment for inner-city patients with severe mental disorders. Archives of General Psychiatry. 1999;56:627–633. doi: 10.1001/archpsyc.56.7.627.
35. Drake RE, McHugo GJ, Becker D, Anthony WA, Clark RE. The New Hampshire study of supported employment for people with severe mental illness. Journal of Consulting and Clinical Psychology. 1996;64(2):391–399. doi: 10.1037/0022-006X.64.2.391.
36. Essock SM, Drake R, Frank RG, McGuire TG. Randomized controlled trials in evidence-based mental health care: Getting the right answer to the right question. Schizophrenia Bulletin. 2003;29(1):115–123. doi: 10.1093/oxfordjournals.schbul.a006981.
37. Festinger L. A theory of cognitive dissonance. Stanford, CA: Stanford University Press; 1957.
38. Gold PB, Meisler N, Santos AB, Carnemolla MA, Williams OH, Keleher J. Randomized trial of supported employment integrated with assertive community treatment for rural adults with severe mental illness. Schizophrenia Bulletin. 2006;32(2):378–395. doi: 10.1093/schbul/sbi056.
39. Grilo CM, Money R, Barlow DH, Goddard AW, Gorman JM, Hofmann SG, et al. Pretreatment patient factors predicting attrition from a multicenter randomized controlled treatment study for panic disorder. Comprehensive Psychiatry. 1998;39(6):323–332. doi: 10.1016/S0010-440X(98)90043-8.
40. Halpern SD. Prospective preference assessment: A method to enhance the ethics and efficiency of randomized controlled trials. Controlled Clinical Trials. 2002;23:274–288. doi: 10.1016/S0197-2456(02)00191-5.
41. Hofmann SG, Barlow DH, Papp LA, Detweiler MF, Ray SE, Shear MK, et al. Pretreatment attrition in a comparative treatment outcome study on panic disorder. The American Journal of Psychiatry. 1998;155(1):43–47. doi: 10.1176/ajp.155.1.43.
42. Honey A. Psychiatric vocational rehabilitation: Where are the customers’ views? Psychiatric Rehabilitation Journal. 2000;23(3):270–279. doi: 10.1037/h0095092.
43. Kearney C, Silverman W. A critical review of pharmacotherapy for youth with anxiety disorders: Things are not as they seem. Journal of Anxiety Disorders. 1998;12(2):83–102. doi: 10.1016/S0887-6185(98)00005-X.
44. Killackey E, Jackson HJ, McGorry PD. Vocational intervention in first-episode psychosis: Individual placement and support versus treatment as usual. The British Journal of Psychiatry. 2008;193:114–120. doi: 10.1192/bjp.bp.107.043109.
45. King M, Nazareth I, Lampe F, Bower P, Chandler M, Morou M, et al. Impact of participant and physician intervention preferences on randomized trials: A systematic review. Journal of the American Medical Association. 2005;293(9):1089–1099. doi: 10.1001/jama.293.9.1089.
46. Krause MS, Howard KI. What random assignment does and does not do. Journal of Clinical Psychology. 2003;59:751–766. doi: 10.1002/jclp.10170.
47. Krell HV, Leuchter AF, Morgan M, Cook IA, Abrams M. Subject expectations of treatment effectiveness and outcome of treatment with an experimental antidepressant. The Journal of Clinical Psychiatry. 2004;65(9):1174–1179. doi: 10.4088/jcp.v65n0904.
48. Lachenbruch PA. Analysis of data with excess zeros. Statistical Methods in Medical Research. 2002;11:297–302. doi: 10.1191/0962280202sm289ra.
49. Laengle G, Welte W, Roesger U, Guenthner A, U’Ren R. Chronic psychiatric patients without psychiatric care: A pilot study. Social Psychiatry and Psychiatric Epidemiology. 2000;35(10):457–462. doi: 10.1007/s001270050264.
50. Lambert MF, Wood J. Incorporating patient preferences into randomized trials. Journal of Clinical Epidemiology. 2000;53:163–166. doi: 10.1016/S0895-4356(99)00146-8.
51. Latimer EA, LeCompte MD, Becker DR, Drake RE, Duclos I, Piat M, et al. Generalizability of the individual placement and support model of supported employment: Results of a Canadian randomised controlled trial. The British Journal of Psychiatry. 2006;189:65–73. doi: 10.1192/bjp.bp.105.012641.
52. Lavori PW. Placebo control groups in randomized treatment trials: A statistician’s perspective. Biological Psychiatry. 2000;47:717–723. doi: 10.1016/S0006-3223(00)00838-6.
53. Lavori PW, Rush AJ, Wisniewski SR, Alpert J, Fava M, Kupfer DJ, et al. Strengthening clinical effectiveness trials: Equipoise-stratified randomization. Biological Psychiatry. 2001;50(10):792–801. doi: 10.1016/S0006-3223(01)01223-9.
54. Lehman AF, Goldberg RW, Dixon LB, McNary S, Postrado L, Hackman A, et al. Improving employment outcomes for persons with severe mental illnesses. Archives of General Psychiatry. 2002;59(2):165–172. doi: 10.1001/archpsyc.59.2.165.
55. Lewin K. Forces behind food habits and methods of change. Bulletin of the National Research Council. 1943;108:35–65.
56. Leykin Y, DeRubeis RJ, Gallop R, Amsterdam JD, Shelton RC, Hollon SD. The relation of patients’ treatment preferences to outcome in a randomized clinical trial. Behavior Therapy. 2007;38:209–217. doi: 10.1016/j.beth.2006.08.002.
57. Lilford R, Jackson J. Equipoise and the ethics of randomisation. Journal of the Royal Society of Medicine. 1995;88:552–559.
58. Little RJ, Rubin DB. Causal effects in clinical and epidemiological studies via potential outcomes: Concepts and analytical approaches. Annual Review of Public Health. 2000;21:121–145. doi: 10.1146/annurev.publhealth.21.1.121.
59. Macias C, Aronson E, Hargreaves W, Weary G, Barreira P, Harvey JH, et al. Transforming dissatisfaction with services into self-determination: A social psychological perspective on community program effectiveness. Journal of Applied Social Psychology. 2009;39(7). doi: 10.1111/j.1559-1816.2009.00506.x.
60. Macias C, Barreira P, Hargreaves W, Bickman L, Fisher WH, Aronson E. Impact of referral source and study applicants’ preference for randomly assigned service on research enrollment, service engagement, and evaluative outcomes. The American Journal of Psychiatry. 2005;162(4):781–787. doi: 10.1176/appi.ajp.162.4.781.
61. Macias C, Rodican CF, Hargreaves WA, Jones DR, Barreira PJ, Wang Q. Supported employment outcomes of a randomized controlled trial of assertive community treatment and clubhouse models. Psychiatric Services. 2006;57(10):1406–1415. doi: 10.1176/appi.ps.57.10.1406.
62. Magidson J. On models used to adjust for preexisting differences. In: Bickman L, editor. Research design. Vol. 2. Thousand Oaks, CA: Sage; 2000.
63. Marcus SM. Assessing non-consent bias with parallel randomized and nonrandomized clinical trials. Journal of Clinical Epidemiology. 1997;50(7):823–828. doi: 10.1016/S0895-4356(97)00068-1.
64. Mark MM, Hofmann DA, Reichardt CS. Testing theories in theory-driven evaluations: Tests of moderation in all things. In: Chen H, Rossi PH, editors. Using theory to improve program and policy evaluations. New York: Greenwood Press; 1992.
65. McGrew JH, Griss ME. Concurrent and predictive validity of two scales to assess the fidelity of implementation of supported employment. Psychiatric Rehabilitation Journal. 2005;29(1):41–47. doi: 10.2975/29.2005.41.47.
66. McGrew JH, Pescosolido BA, Wright E. Case managers’ perspectives on critical ingredients of Assertive Community Treatment and on its implementation. Psychiatric Services. 2003;54(3):370–376. doi: 10.1176/appi.ps.54.3.370.
67. McGurk S, Mueser K, Feldman K, Wolfe R, Pascaris A. Cognitive training for supported employment: 2–3 year outcomes of a randomized controlled trial. The American Journal of Psychiatry. 2007;164:437–441. doi: 10.1176/appi.ajp.164.3.437.
68. McHugo GJ, Bebout RR, Harris M, Cleghorn S, Herring G, Xie H, et al. A randomized controlled trial of integrated versus parallel housing services for homeless adults with severe mental illness. Schizophrenia Bulletin. 2004;30(4):969–982. doi: 10.1093/oxfordjournals.schbul.a007146.
69. McQuilken M, Zahniser JH, Novak J, Starks RD, Olmos A, Bond GR. The work project survey: Consumer perspectives on work. Journal of Vocational Rehabilitation. 2003;18:59–68.
70. Meyer B, Pilkonis PA, Krupnick JL, Egan MK, Simmens SJ, Sotsky SM. Treatment expectancies, patient alliance and outcome: Further analyses from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology. 2002;70(4):1051–1055. doi: 10.1037/0022-006X.70.4.1051.
71. Mowbray CT, Collins M, Bybee D. Supported education for individuals with psychiatric disabilities: Long-term outcomes from an experimental study. Social Work Research. 1999;23(2):89–100.
72. Mueser KT, Clark RE, Haines M, Drake RE, McHugo GJ, Bond GR, et al. The Hartford study of supported employment for persons with severe mental illness. Journal of Consulting and Clinical Psychology. 2004;72(3):479–490. doi: 10.1037/0022-006X.72.3.479.
73. Nichols AL, Maner JK. The good-subject effect: Investigating participant demand characteristics. The Journal of General Psychology. 2008;135(2):151–165. doi: 10.3200/GENP.135.2.151-166.
74. Noble JH, Honberg RS, Hall LL, Flynn LM. A legacy of failure: The inability of the federal-state vocational rehabilitation system to serve people with severe mental illness. Arlington, VA: National Alliance for the Mentally Ill; 1997.
75. Quimby E, Drake R, Becker D. Ethnographic findings from the Washington, DC vocational services study. Psychiatric Rehabilitation Journal. 2001;24(4):368–374. doi: 10.1037/h0095068.
76. Rosenheck RA. Organizational process: A missing link between research and practice. Psychiatric Services. 2001;52(12):1607–1612. doi: 10.1176/appi.ps.52.12.1607.
77. Schwartz CE, Sprangers M. Methodological approaches for assessing response shift in longitudinal quality of life research. Social Science & Medicine. 1999;48:1531–1548. doi: 10.1016/S0277-9536(99)00047-7.
78. Sechrest L, Davis M, Stickle T, McKnight P. Understanding ‘method’ variance. In: Bickman L, editor. Research design. Thousand Oaks, CA: Sage; 2000.
79. Secker J, Membrey H, Grove B, Seebohm P. Recovering from illness or recovering your life? Implications of clinical versus social models of recovery from mental health problems for employment support services. Disability & Society. 2002;17(4):403–418. doi: 10.1080/09687590220140340.
80. Shadish WR. Revisiting field experimentation: Field notes for the future. Psychological Methods. 2002;7(1):3–18. doi: 10.1037/1082-989X.7.1.3.
81. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. New York: Houghton Mifflin; 2002.
82. Shadish WR, Matt GE, Navarro AM, Phillips G. The effects of psychological therapies under clinically representative conditions: A meta-analysis. Psychological Bulletin. 2000;126(4):512–529. doi: 10.1037/0033-2909.126.4.512.
83. Shapiro SL, Figueredo AJ, Caspi O, Schwartz GE, Bootzin RR, Lopez AM, et al. Going quasi: The premature disclosure effect in a randomized clinical trial. Journal of Behavioral Medicine. 2002;25(6):605–621. doi: 10.1023/A:1020693417427.
84. Sneed JR, Rutherford BR, Rindskopf D, Lane DT, Sackeim HA, Roose SP. Design makes a difference: A meta-analysis of antidepressant response rates in placebo-controlled versus comparator trials in late-life depression. The American Journal of Geriatric Psychiatry. 2008;16:65–73. doi: 10.1097/JGP.0b013e3181256b1d.
85. Sosin MR. Outcomes and sample selection: The case of a homelessness and substance abuse intervention. The British Journal of Mathematical and Statistical Psychology. 2002;55(1):63–92. doi: 10.1348/000711002159707.
86. Staines G, McKendrick K, Perlis T, Sacks S, DeLeon G. Sequential assignment and treatment as usual: Alternatives to standard experimental designs in field studies of treatment efficacy. Evaluation Review. 1999;23(1):47–76. doi: 10.1177/0193841X9902300103.
87. TenHave T, Coyne J, Salzer M, Katz I. Research to improve the quality of care for depression: Alternatives to the simple randomized clinical trial. General Hospital Psychiatry. 2003;25:115–123. doi: 10.1016/S0163-8343(02)00275-X.
88. Torgerson D, Klaber Moffett JA, Russell IT. Including patient preferences in randomized clinical trials. Journal of Health Services Research & Policy. 1996;1:194–197. doi: 10.1177/135581969600100403.
89. Torgerson D, Moffett JK. Patient preference and validity of randomized controlled trials: Letter to the editor. Journal of the American Medical Association. 2005;294(1):41. doi: 10.1001/jama.294.1.41-b.
90. Trist E, Sofer C. Exploration in group relations. Leicester: Leicester University Press; 1959.
91. Twamley EW, Narvaez JM, Becker DR, Bartels SJ, Jeste DV. Supported employment for middle-aged and older people with schizophrenia. American Journal of Psychiatric Rehabilitation. 2008;11(1):76–89. doi: 10.1080/15487760701853326.
92. Wahlbeck K, Tuunainen A, Ahokas A, Leucht S. Dropout rates in randomised antipsychotic drug trials. Psychopharmacology. 2001;155(3):230–233. doi: 10.1007/s002130100711.
93. West SG, Aiken LS, Todd M. Probing the effects of individual components in multiple component prevention programs. In: Revenson TA, D’Agostino RB, editors. Ecological research to promote social change: Methodological advances from community psychology. New York, NY: Kluwer; 2002.
94. West SG, Sagarin BJ. Participant selection and loss in randomized experiments. In: Bickman L, editor. Research design: Donald Campbell’s legacy. Vol. II. Thousand Oaks, CA: Sage; 2000. pp. 117–154.
95. Wolf J, Coba C, Cirella M. Education as psychosocial rehabilitation: Supported education program partnerships with mental health and behavioral healthcare certificate programs. Psychiatric Rehabilitation Skills. 2001;5(3):455–476.
96. Wong KK, Chiu R, Tang B, Mak D, Liu J, Chiu SN. A randomized controlled trial of a supported employment program for persons with long-term mental illness in Hong Kong. Psychiatric Services. 2008;59(1):84–90. doi: 10.1176/appi.ps.59.1.84.