Author manuscript; available in PMC: 2015 Nov 6.
Published in final edited form as: Adm Policy Ment Health. 2010 Sep;37(5):427–432. doi: 10.1007/s10488-009-0258-3

Viability of Using Employment Rates from Randomized Trials as Benchmarks for Supported Employment Program Performance

Paul B Gold 1, Cathaleene Macias 2, Paul J Barreira 3,4, Miriam Tepper 5,6,7, Jana Frey 8
PMCID: PMC4636006  NIHMSID: NIHMS734050  PMID: 20013044

Abstract

Cumulative employment rates published by randomized trials are based on each enrollee's pre-planned 18–24 months of study participation. By contrast, community programs typically report employment rates for clients active in services during a calendar quarter. Using data from three supported employment programs in randomized trials, we show that trial cumulative employment rates are about twice as large as quarterly employment rates for the same program. We therefore recommend that administrators, service networks, and mental health authorities begin to publish quarterly employment rates, and quarterly median earnings, so that policymakers can set realistic performance expectations for supported employment programs.

Keywords: Supported employment, Performance contracting, Continuous quality assurance, Services, Research design

Introduction

Program administrators and departments of mental health need benchmark employment rates to set realistic standards for supported employment (SE) programs. Most community programs routinely calculate their employment rate as the ratio of employed clients to all clients active in services during a calendar quarter. By contrast, the employment rates published by SE research trials are typically calculated as the cumulative count of participants randomized to each service intervention who were competitively employed at any time during each participant's predetermined 18–24-month period of study participation, regardless of level of program engagement. The practical question is whether this procedural difference makes the cumulative competitive employment rates achieved by adults with severe mental illness in SE randomized trials, which range from 50 to 60% (Bond et al. 2008; Cook et al. 2005; Twamley et al. 2003), inappropriate benchmarks for evaluating community programs' quarterly rates of competitive employment.

To address this question, we conducted a secondary analysis of data collected in two randomized trials of three supported employment programs to illustrate the magnitude of differences between each program's cumulative 24-month "participant follow-up" employment rate and its calendar quarterly rate. We then compared each program's quarterly employment rates to the very few quarterly employment rates published by public agencies and naturalistic research studies to identify a range in quarterly employment rates that might serve as a starting point for setting performance benchmarks for community supported employment programs.

Methods

Supported Employment Research Sites

The South Carolina (Gold et al. 2006) and Massachusetts (Macias et al. 2006) sites of the eight-site Employment Intervention Demonstration Program (EIDP; Cook et al. 2005) provided data for the employment rate computations. The South Carolina site evaluated an Individual Placement and Support (IPS; Becker and Drake 2003) program integrated with an assertive community treatment team. The Massachusetts project evaluated a Program of Assertive Community Treatment (PACT) modeled on the original vocationally-integrated PACT in Madison, Wisconsin (Frey 1994), and a Fountain House-based clubhouse certified by the International Center for Clubhouse Development (Propst 1992). All three programs achieved high fidelity to their service models and offered standard supported employment services (e.g., rapid job search, on-the-job training, entitlement counseling). All research procedures were approved by the institutional review boards of the Medical University of South Carolina and McLean Hospital in Massachusetts.

Employment Rate Calculations

Each of these three programs enrolled five to ten participants per month, beginning in Spring 1996 and ending in Spring 1998, with a 24-month participation window for each participant. For this reason, there was no calendar month during the four-year project period when all research participants were "in-study." The first quarter (January–March) of 1998 had the largest proportion of participants "in-study," so we calculated employment rates for this calendar quarter, defining the denominator as the number of research participants active in their assigned program, and the numerator as the number of these program-active participants who held a competitive job. We defined "active" as three or more service contacts during the quarter, totaling at least one hour of service. This threshold eliminated brief reach-out calls and home visits initiated by staff to engage service-resistant clients. Other definitions of active status may be appropriate for other programs, but we chose this definition so that an assertive mobile program, like PACT, would not be disadvantaged in the calculation of employment rates by a large denominator that reflects the program's success in engaging and retaining assigned research participants.
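To make this calculation concrete, the following is a minimal sketch in Python; the record layout and field names are illustrative assumptions, not the studies' actual data structures.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ServiceContact:
    client_id: str
    minutes: int  # duration of one staff-client contact

@dataclass
class Client:
    client_id: str
    employed_in_quarter: bool  # held a competitive job at any point in the quarter

def is_active(contacts: List[ServiceContact]) -> bool:
    # "Active": three or more contacts during the quarter, totaling >= 1 hour,
    # which screens out brief staff-initiated reach-out calls.
    return len(contacts) >= 3 and sum(c.minutes for c in contacts) >= 60

def quarterly_employment_rate(clients: List[Client],
                              contacts: Dict[str, List[ServiceContact]]) -> float:
    # Denominator: clients active in the program during the calendar quarter.
    active = [c for c in clients if is_active(contacts.get(c.client_id, []))]
    # Numerator: active clients who held a competitive job during that quarter.
    employed = sum(1 for c in active if c.employed_in_quarter)
    return employed / len(active) if active else 0.0
```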

To ensure that research findings would generalize to the specific service models under study, the two EIDP research trials adopted the eligibility criteria of the IPS, PACT, and clubhouse programs that served as focal interventions. As stipulated by IPS standards, the South Carolina project enrolled only individuals who were both interested in getting a job and able to work. As stipulated by both PACT and clubhouse standards, the Massachusetts project enrolled anyone with a severe mental illness, regardless of work interest or ability to work. During baseline interviews, about a third of the PACT and clubhouse study enrollees expressed disinterest or uncertainty about working, or were unable to work due to severe psychiatric symptoms, physical health limitations, or extenuating circumstances (e.g., hospitalization; custodial care of a relative). To take into account these eligibility differences between the IPS project and the PACT-clubhouse project, we first report each program's cumulative employment rates for only those individuals who expressed work interest at the time they enrolled in the research project. We next report cumulative employment rates for all research participants randomly assigned to each program, which in PACT and clubhouse, but not IPS, included some individuals with no interest in working. Quarterly employment rates were then calculated for all research participants active in each program during the calendar quarter under study, regardless of initial interest in work. The inclusion of all active clients, and a fiscal quarter timeframe, was based on a 24-state survey of mental health center performance contracting practices conducted in 1998 as a preliminary step in the identification of clubhouse performance standards (Macias et al. 2001).
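The cumulative rates for the two enrollment cohorts can be expressed with the same kind of illustrative structures; again, this is a sketch under assumed field names rather than the projects' actual code, and the quarterly rate in the previous sketch supplies the third figure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Participant:
    work_interested_at_enrollment: bool
    ever_worked_competitively: bool  # any competitive job in the 24-month window

def cumulative_rate(participants: List[Participant],
                    work_interested_only: bool = False) -> float:
    # Cohort 1 (work_interested_only=True): only participants expressing
    # work interest at enrollment. Cohort 2: all randomized participants.
    cohort = [p for p in participants
              if p.work_interested_at_enrollment or not work_interested_only]
    worked = sum(1 for p in cohort if p.ever_worked_competitively)
    return worked / len(cohort) if cohort else 0.0
```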

We used the US Department of Labor definition of competitive employment: (a) any individually-held job, (b) located in a mainstream, integrated setting, that (c) paid minimum wage or higher (Department of Labor 1998). For all three programs, competitive employment included both permanent and temporary positions, as well as self-employment. For the clubhouse program, we counted transitional employment (TE) jobs because all of these jobs met the US Department of Labor criteria for competitive employment, and the work duration for TE in this research trial was comparable to the duration of other types of mainstream jobs (Macias et al. 2006). It was also essential to include TE jobs in the calculation of quarterly employment rates because most clubhouse performance contracts with funding agents require their inclusion (Macias et al. 2001). We set no minimum job duration for counting a job as an SE outcome (i.e., a job could last only 1 day, or might start on the last day of the quarter and extend into the next quarter) because most randomized trials and naturalistic studies have not set work duration thresholds. All employment rates were calculated after the completion of the research projects, so these study outcomes were not available to the service programs while the projects were being conducted.
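As a rough sketch, the three Department of Labor criteria can be encoded as a simple predicate; the field names and the minimum-wage constant are illustrative assumptions (the figure shown is the 1998 US federal minimum wage, but a real implementation would use the applicable jurisdiction and year).

```python
from dataclasses import dataclass

@dataclass
class Job:
    individually_held: bool   # (a) held by one person, not a group placement
    integrated_setting: bool  # (b) mainstream, integrated work setting
    hourly_wage: float        # (c) pay rate in dollars per hour

MINIMUM_WAGE = 5.15  # illustrative: US federal minimum wage in 1998

def is_competitive(job: Job) -> bool:
    # A job counts as competitive only if it meets all three DOL criteria.
    return (job.individually_held
            and job.integrated_setting
            and job.hourly_wage >= MINIMUM_WAGE)
```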

Results

The 24-month cumulative competitive employment rates for research participants expressing interest in working at the time of enrollment are 2–2.5 times as large as quarterly employment rates for the IPS, PACT, and clubhouse programs (Table 1: 64, 64, 59 versus 28, 30, 26%, respectively). However, these research study quarterly rates are 5–9 percentage points higher than quarterly employment rates reported by naturalistic studies, including the mean 21% rate (range: 15–35%) for any type of paid work reported for ten mental health centers in Vermont for the first quarter of 2000 (Pandiani and Carroll 2008), and the 21% rate for any paid work reported for a large sample of mental health service recipients in Washington State for the first quarter of 2001 (Hannah and Hall 2006).

Table 1.

Competitive employment rates for three supported employment programs: calculations based on varying timeframes and sample criteria

| Program | 24-month window: work-interested participants only^b (n/N, %) | 24-month window: all participants^c (n/N, %) | Calendar quarter^a: participants active in services (n/N, %) |
|---|---|---|---|
| IPS + ACT^d (South Carolina) | 42/66 (64%) | —^e | 15/53 (28%) |
| Vocational PACT^f (Massachusetts) | 41/64 (64%) | 49/86 (57%) | 20/67 (30%) |
| Certified clubhouse^g (Massachusetts) | 34/58 (59%) | 43/89 (48%) | 18/69 (26%) |

^a First quarter (January–March) of 1998; all participants with at least three service contacts and one total hour of service

^b Work interested = participants stating an interest in work at time of study enrollment

^c All participants = all randomized participants regardless of work interest at time of enrollment

^d Individual Placement and Support (IPS) program of supported employment integrated with an assertive community treatment program. South Carolina project: Employment Intervention Demonstration Program

^e — = not applicable; the IPS program excluded individuals expressing no work interest

^f Vocationally-integrated Program of Assertive Community Treatment. Massachusetts project: Employment Intervention Demonstration Program

^g Clubhouse program certified by the International Center for Clubhouse Development. Massachusetts project: Employment Intervention Demonstration Program

The IPS program's 28% quarterly competitive employment rate falls in the middle of the 23–35% range of quarterly rates reported for enrollees in eight high-fidelity New Hampshire IPS programs during the last quarter of 1995 (Drake et al. 1998). However, both the PACT and the clubhouse programs in our research study had lower quarterly employment rates than high-fidelity PACT and clubhouse community programs. The 26% quarterly competitive employment rate for the subset of clubhouse members who took part in the research trial is lower than the 39% quarterly rate for this same program's entire active membership, and lower than the average 37% quarterly rate reported for a sample of US certified clubhouses (N = 71) in 1998 (Macias et al. 2001). There are no published quarterly employment rates for other vocationally-integrated PACT programs, but our PACT research program's 30% quarterly competitive employment rate is lower than the 37% rate for the same first quarter of 1998 for the original vocationally-integrated PACT program in Madison, Wisconsin (Dr. Jana Frey, program director, personal communication, 2009).

Discussion

In these randomized trials, cumulative employment rates (i.e., whether or not a participant worked competitively during the 24-month research window) were approximately twice as large as quarterly employment rates (64, 64, and 59% versus 28, 30, and 26%, respectively), suggesting that the 50–60% employment rates cited by review articles are not attainable goals for most community programs' real-time quarterly performance. An average quarterly competitive employment rate of 25–35% seems a more reasonable provisional benchmark for SE program performance, but this assumption requires verification and greater specificity based on particular service models' eligibility criteria (e.g., whether a program screens for work interest).

Benchmarks for Particular Types of Programs

A range of reasonably attainable employment rates (e.g., 25–35%) may be useful to program administrators, but model-specific SE standards are essential to the design of performance contracts. Specialized IPS employment programs that accept only individuals able and willing to work may be expected to adopt higher quarterly performance benchmarks than multi-service programs, like PACT and clubhouse, that accept clients too ill or disabled to work and those with no initial interest in employment. Generic benchmarks that restrict the calculation of employment rates to only those clients who expressed work interest at service enrollment are not practical for quarterly reporting purposes because work interest is a time-varying attribute that requires repeated measurement. That is, some clients who are reluctant to work at one point in time may change their minds when they feel better, gain confidence, or learn more about how earned income affects their social security benefits. In our randomized trial of PACT and clubhouse programs (Macias et al. 2006), a fourth of the project enrollees who had expressed no interest in employment at enrollment eventually changed their minds and worked competitive jobs for a median duration of 6 months. If our PACT and clubhouse quarterly employment rates (Table 1) were based only on participants who had been interested in work at study enrollment, eight employed clients would have been ignored. Model-specific employment rates that count every employed client as a ratio of all active clients offer more realistic assessments of a multi-service program's point-in-time SE performance in routine public settings.

It is also essential to operationally define “active service status” in keeping with program characteristics. For all three SE programs in this study, we defined the denominator of the employment rate as the number of research participants who were “actively receiving services”—a minimum of three provider contacts adding up to at least one hour of service. In practice, however, each of these three service models, IPS, PACT, and certified clubhouse, has its own expectations for the optimal frequency and intensity of service contact over a given period of time. Because the three service models' operating standards differ, a client receiving a particular level of service might be considered active in one program, but inactive in another. For example, certified clubhouses allow members to decide how and when to use the services they offer, so even a small number of brief service contacts during a given quarter might qualify a clubhouse member as “active.” In contrast, programs strongly emphasizing assertive outreach, such as IPS and PACT, set relatively high thresholds for minimum client contact. Therefore, if one used a clubhouse definition of “active” for counting the number of clients active in PACT or IPS services, then counts of active clients for the latter two programs might be inflated, resulting in relatively large denominators and relatively low employment rates. In short, using a single definition of “active status” may yield employment rates that are no more comparable than rates calculated in keeping with each service model's operating standards, and definitions of “active status” based on program standards are far more meaningful to administrators and service planners.
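One way to operationalize model-specific active status is to parameterize the thresholds by service model, as in the sketch below; the threshold values shown are invented for illustration and are not published program standards.

```python
# Model-specific "active status" thresholds per calendar quarter. The numbers
# here are assumptions for the sketch; real thresholds would come from each
# service model's own operating standards.
ACTIVE_THRESHOLDS = {
    "clubhouse": {"min_contacts": 1, "min_minutes": 0},    # member-driven attendance
    "PACT":      {"min_contacts": 6, "min_minutes": 180},  # assertive outreach model
    "IPS":       {"min_contacts": 4, "min_minutes": 120},  # employment-focused contact
}

def is_active_for_model(model: str, n_contacts: int, total_minutes: int) -> bool:
    # A client counts toward the denominator only if they meet their own
    # program model's contact thresholds for the quarter.
    t = ACTIVE_THRESHOLDS[model]
    return n_contacts >= t["min_contacts"] and total_minutes >= t["min_minutes"]
```

Applying each model's own thresholds keeps denominators comparable in meaning, even when the raw client counts differ across programs.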

Quarterly Employment Rates for Community Programs Versus Interventions in Randomized Trials

It would be interesting to know what factors account for the lower quarterly employment rates reported for our research trial PACT and clubhouse programs versus quarterly rates recorded for these same service models outside randomized trials. Hannah and Hall (2006) note that a key difference between naturalistic studies and SE trials is that research trials usually exclude individuals who are already employed at the time of eligibility screening, whereas a naturalistic study would include these same individuals, some of whom may be able to work long-term without assistance because they are better educated or higher functioning. However, the exclusion of employed individuals by research trials cannot completely account for the lower quarterly employment rates for the PACT and clubhouse programs in our research trial because the calendar quarter we selected for calculating quarterly employment rates occurred at a point in the study when many participants had been employed for months or years. Neither would differences in how employment was measured provide a credible explanation because the community programs we used for comparison purposes (71 certified clubhouses; PACT in Madison, WI) counted only jobs that met the U.S. Department of Labor definition of competitive employment.

A plausible explanation for the lower quarterly rates of the PACT and clubhouse programs in our Massachusetts research trial, compared to PACT and clubhouse community programs, is that random assignment itself may have lowered the work motivation of some study participants. In a previous report of this trial (Macias et al. 2005), we demonstrated that participants who expressed a clear preference for one of these programs over the other, but were randomly assigned to the less preferred program, were not as likely as other participants to pursue SE services or work a competitive job. The negative impact of being assigned to a non-preferred program could also account for the higher 39% quarterly employment rate for the full membership of this Massachusetts clubhouse, relative to the 26% quarterly employment rate achieved by the subset of clubhouse members who were in the research trial, because those who were randomly assigned to the clubhouse in spite of their preference for PACT represented a smaller proportion of the full membership than the research subsample. These findings suggest that SE programs that participate in randomized trials will tend to have lower than usual employment rates, unless the comparison condition is substantially less appealing than the study's SE program (Macias et al. 2009).

Making Employment Outcomes More Meaningful

Reports and publications would be more useful if the vague term "employment rate" were replaced with more precise language, e.g., "percent of research participants randomly assigned to each experimental condition who were employed during their time in the study" or "percent of active program participants who were employed during a calendar quarter." We concur with Browne et al. (2009) that employment rates are also more meaningful when accompanied by other vital information, such as pay rate, hours worked per week, and work duration, which together determine earnings. For instance, it would be helpful for administrators and funding agents to know whether programs that encourage clients to accept less demanding jobs (e.g., less than 10 hours per week) have higher quarterly employment rates because their clients are able to stay employed longer. Also, routinely reporting "mean days to commencement of first job" (Browne et al. 2009) could set SE benchmarks for "rapid job placement" and provide insights into whether a strong reliance on readily-available work options (e.g., temporary work agencies, seasonal jobs) will tend to increase the variability of quarterly employment rates over the course of a calendar year.
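A quarterly report bundling these metrics might look like the following sketch; the record fields, and the choice of median versus mean for each metric, are illustrative assumptions.

```python
import statistics
from typing import Dict, List, Optional

def quarterly_summary(active_clients: List[dict]) -> Dict[str, Optional[float]]:
    # Each record is assumed to describe one active client for the quarter:
    # {"employed": bool, "earnings": float, "hours_per_week": float,
    #  "days_to_first_job": int}
    employed = [c for c in active_clients if c["employed"]]
    n = len(active_clients)
    return {
        "quarterly_employment_rate": len(employed) / n if n else 0.0,
        "median_quarterly_earnings":
            statistics.median(c["earnings"] for c in employed) if employed else None,
        "median_hours_per_week":
            statistics.median(c["hours_per_week"] for c in employed) if employed else None,
        "mean_days_to_first_job":
            statistics.mean(c["days_to_first_job"] for c in employed) if employed else None,
    }
```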

Who Should Set SE Benchmarks?

It would be advantageous if completed randomized trials published quarterly competitive employment rates for each SE intervention in their project whenever this is possible, so that the public can see that a program's quarterly employment rates are rarely comparable to participants' “time-in-study” rates. However, it seems most feasible for state or provincial and national governments to take the lead in setting quarterly performance benchmarks for supported employment, since they can provide data on employment services and outcomes that span years, or even decades. Benchmarks set by service model representatives, in collaboration with government agencies, may be even more useful because such benchmarks could be recalibrated to ensure fair performance appraisals whenever systemic changes (e.g., funding allocations, insurance reimbursement policies) affect one type of program more than another. At the present time, only the IPS model (eight programs in New Hampshire; Drake et al. 1998) and clubhouse model (71 programs in 24 states; Macias et al. 2001) have published quarterly employment rates for samples of high-fidelity community programs. These rates reflect a strong period in the US economy, so there is an immediate need to collect and publish current quarterly employment rates for community SE programs in various geographic locations.

Conclusion

Our primary intent has been to clarify the distinction between the cumulative employment rates reported by randomized trials that are based on participants' time-in-study versus community programs' reports of the proportion of active clients employed during a specific calendar quarter. Researchers need one set of employment benchmarks to set outcome expectations for randomized trials, while community programs need another set of employment benchmarks for periodic quality assurance monitoring. It is essential that the general public and policymakers know how employment rates are calculated, and which figures are comparable.

As a secondary goal, we explored the range in quarterly employment rates across a variety of SE programs to gauge the feasibility of setting general performance standards. We were handicapped in this task by the sparseness of published data, but the narrow range in available quarterly rates suggests that realistic benchmarks can be set for specific service models based on client eligibility criteria. We urge state/province and local authorities to routinely monitor and publish their own SE programs' quarterly employment rates (Bickman 2008), along with program characteristics, job descriptions, and regional economic indicators needed for the design of performance contracts.

Acknowledgments

This research was supported by grants from the National Institute of Mental Health to the first and second authors (MH01903 and MH62628, respectively). We thank Suzanne Senn-Burke of PACT, Inc. at Mendota Mental Health Center in Madison, WI for her provision of quarterly employment data for the original vocationally-integrated PACT program.

Footnotes

Disclosures: None for any author.

References

  1. Becker DR, Drake RE. A working life for people with severe mental illness. New York: Oxford University Press; 2003.
  2. Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(10):1114–1119. doi: 10.1097/CHI.0b013e3181825af8.
  3. Bond GR, Drake R, Becker D. An update on randomized controlled trials of evidence-based supported employment. Psychiatric Rehabilitation Journal. 2008;31(4):280–290. doi: 10.2975/31.4.2008.280.290.
  4. Browne D, Stephenson A, Wright J, Waghorn G. Developing high performing employment services for people with mental illness. International Journal of Therapy and Rehabilitation. 2009;16(9):502–511.
  5. Cook JA, Leff HS, Blyler CR, Gold PB, Goldberg RW, Mueser KT, et al. Results of a multisite randomized trial of supported employment interventions for individuals with severe mental illness. Archives of General Psychiatry. 2005;62:505–512. doi: 10.1001/archpsyc.62.5.505.
  6. Department of Labor. Job Training Partnership Act, disability grant program funded under Title III, Section 323, and Title IV, Part D, Section 452. Vol. 63. Federal Register; 1998.
  7. Drake RE, Fox TS, Leather PK, Becker DR, Musumeci JS, Ingram WF, et al. Regional variation in competitive employment for persons with severe mental illness. Administration and Policy in Mental Health. 1998;25(3):493–504.
  8. Frey JL. Long term support: The critical element to sustaining competitive employment: Where do we begin? Psychosocial Rehabilitation Journal. 1994;17(3):127–133.
  9. Gold PB, Meisler N, Santos AB, Carnemolla MA, Williams OH, Keleher J. Randomized trial of supported employment integrated with assertive community treatment for rural adults with severe mental illness. Schizophrenia Bulletin. 2006;32(2):378–395. doi: 10.1093/schbul/sbi056.
  10. Hannah G, Hall J. Employment and mental health service utilization in Washington State. Journal of Behavioral Health Services & Research. 2006;33(3):287–303. doi: 10.1007/s11414-006-9026-2.
  11. Macias C, Barreira P, Alden M, Boyd J. The ICCD benchmarks for clubhouses: A practical approach to quality improvement in psychiatric rehabilitation. Psychiatric Services. 2001;52(2):207–213. doi: 10.1176/appi.ps.52.2.207.
  12. Macias C, Barreira P, Hargreaves W, Bickman L, Fisher WH, Aronson E. Impact of referral source and study applicants' preference for randomly assigned service on research enrollment, service engagement, and evaluative outcomes. American Journal of Psychiatry. 2005;162(4):781–787. doi: 10.1176/appi.ajp.162.4.781.
  13. Macias C, Gold PB, Hargreaves WA, Aronson E, Bickman L, Barreira PJ, et al. Preference in random assignment: Implications for the interpretation of randomized trials. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36(5):331–342. doi: 10.1007/s10488-009-0224-0.
  14. Macias C, Rodican CF, Hargreaves WA, Jones DR, Barreira PJ, Wang Q. Supported employment outcomes of a randomized controlled trial of assertive community treatment and clubhouse models. Psychiatric Services. 2006;57(10):1406–1415. doi: 10.1176/appi.ps.57.10.1406.
  15. Pandiani JA, Carroll B. Vermont mental health performance indicator project: Employment of CRT clients for FY2000 through FY2008. 2008:6. Available at: http://healthvermont.gov/mh/docs/pips/2008/documents/Pip121908.pdf.
  16. Propst R. Standards for clubhouse programs: Why and how they were developed. Psychosocial Rehabilitation Journal. 1992;16(2):25–30.
  17. Twamley E, Jeste DV, Lehman A. Vocational rehabilitation in schizophrenia and other psychotic disorders: A literature review and meta-analysis of randomized controlled trials. Journal of Nervous and Mental Disease. 2003;191(8):515–523. doi: 10.1097/01.nmd.0000082213.42509.69.
