Abstract
Our study is the first-ever initiative to merge administrative databases in Massachusetts to evaluate an important public mental health program. It examines post-incarceration outcomes of adults with serious mental illness (SMI) enrolled in the Massachusetts Department of Mental Health (DMH) Forensic Transition Team (FTT) program. The program began in 1998 with the goal of using a core set of transition activities to transition offenders with SMI released from state and local correctional facilities back to the community. In this study we evaluate the program's effectiveness using merged administrative data from various state agencies for the years 2007–2011, comparing FTT clients to released prisoners who, despite having serious mental health disorders, did not meet the criteria for DMH services. By systematically describing our original study design and the barriers we encountered, this report will inform future efforts to evaluate public programs using merged administrative databases and electronic health records.
Keywords: Serious mental illness, released prisoners, re-entry program effectiveness, administrative data
Introduction
Along with the staggering and growing number of adults incarcerated (nearly 2 million) and the corresponding price tag (nearly 70 billion dollars per year), a Bureau of Justice Statistics study reported that half of all prisoners have some form of mental disorder (James and Glaze 2006). Among prisoners, the rate of mental disorder is four to five times the rate in the general population (Rice and Harris 1997), and approximately 16% of all individuals incarcerated in state prisons (16% of males and 24% of females) report at least one psychiatric symptom (Ditton 1999). Ten percent of male and 18% of female prisoners are estimated to have an Axis I major mental disorder of thought or mood (Pinta 2001). More recently, Steadman and colleagues (2009) examined prevalence rates of serious mental illness (SMI) among jail inmates and found a rate of nearly 15% among males and twice that (30%) among females. Although jails are quite different from prisons, where inmates serve out correctional sentences, it is important to note that these rates include only serious mental illness (SMI; defined generally as a primary major Axis I mood or psychotic disorder or other serious mental condition impairing functioning). These rates suggest correctional and community treatment imperatives for individuals with mental disorders involved in the criminal justice system (Steadman et al. 2009). Additionally, more than three-quarters (83%) of offenders with mental disorders are dually diagnosed with co-occurring substance use disorders (BJS 2001). Among offenders with SMI in Massachusetts, nearly 70% have substance abuse histories (Author 2004a, b). Individuals with co-occurring substance use problems are more likely to be incarcerated because substance use exacerbates multiple pathways into the criminal justice system (Swartz and Lurigio 2007). Evidence also suggests that prisoners with SMI are more likely to have had previous incarcerations (Baillargeon et al. 2009).
The vast majority of prisoners are released and return to the community with complicated profiles and service needs. The risk factors for incarceration (unemployment, substance abuse, mental illness, poverty) are also risk factors for poor health and public health outcomes. Individuals with SMI are typically underinsured and have limited coping repertoires that can have "spill-over" effects in the community when they are released from correctional custody (Massoglia and Schnittker 2009; Fisher, Silver, and Wolff 2006). These effects include broad difficulties in reintegrating and in avoiding subsequent criminality and its corresponding social costs post-release. Ex-inmates with SMI have truncated resources and social networks, leading to more specific problems, including securing housing and appropriate medical, psychiatric, and substance abuse treatment (Baillargeon et al. 2009).
While there is widespread recognition that "something must be done" to ease the re-entry process for individuals with mental disorders, the most efficacious and practical approach remains elusive. The costs of incarceration are increasing, and there has been little progress in reducing recidivism among persons with SMI (Baillargeon et al. 2009). Discharge planning services for released prisoners with SMI are deficient (Baillargeon et al. 2009; Draine and Herman 2007). Most are provided by criminal justice agencies and lack attention to the therapeutic aims of the public mental health and substance abuse treatment systems (Wilson and Draine 2006; Draine and Herman 2007). In fact, there are no comprehensive evidence-based interventions addressing the post-release transition needs of prisoners with SMI (Draine and Herman 2007). Programs such as Forensic Assertive Community Treatment (FACT) and Critical Time Intervention (CTI) show some promise, but are extremely costly, labor intensive, and limited in range (see Morrissey and Meyer 2005; Draine and Herman 2007). There is no evidence that intensive services are necessary or cost-effective for the increasingly diverse population of individuals with mental disorders released from prison.
In Massachusetts, we have embarked on an evaluation of a statewide re-entry program for people with SMI exiting corrections. Our plan involves multiple state and county agencies, each with its own organizational role, needs, expectations, and norms. Because offenders with SMI have significant involvement with multiple agencies, our goal was to establish a combined dataset, "harmonizing" existing administrative databases to evaluate the Department of Mental Health's (DMH) Forensic Transition Team (FTT), a case-coordination-based re-entry program that has been in existence for over 10 years. Although descriptive studies have documented favorable short-term outcomes (Author 2003, 2001, 1999), a rigorous empirical analysis of the FTT program using matched controls has not been conducted. This project is the first-ever initiative under a co-operative agreement to merge data from multiple state agencies in Massachusetts. The objectives of this report are: (1) to describe our original methodology and (2) to delineate the barriers we encountered and the lessons learned in operationalizing our evaluation plan.
Background and Evaluation Setting
While much movement is afoot at the intersection of the criminal justice and mental health systems with respect to jail diversion, re-entry is an equally essential component of the continuum, and one where gaps in continuity of care can strand individuals with limited resources and coping mechanisms following release. Diversion programs and mental health courts are efforts at the "front end" of the criminal justice continuum designed to maintain continuity of care and manage populations that might be better served in treatment programs. Re-entry programs for individuals released from longer-term incarceration are distinct "back end" programs that support individuals who were convicted and served sentences in correctional custody. These programs represent an attempt to address the jarring transition from total institutions to the open community. Failure in this domain can result in returns to the correctional system and significant costs to public services, public safety, and the individual's quality of life. Although other evaluations have used retrospective administrative data to examine the impact of re-entry programs, ours is the first that we know of to do so for a statewide program for individuals with SMI. For example, the Maryland Re-entry Partnership Initiative (REP) used a quasi-experimental design to evaluate re-arrest and reconviction rates and the costs and benefits of the REP program compared to a contemporaneous cohort of prisoners released between 2001 and 2005, but did not examine individuals with SMI separately (Roman et al. 2007).
In Massachusetts, approximately 20,000 prisoners return home each year following a spell of incarceration. The Department of Correction estimates that approximately 10% to 20% (2,000 to 4,000) have a mental disorder. These disorders range in severity in terms of their impact on functioning, up to and including SMI. Since 1998, the Department of Mental Health (DMH) Forensic Transition Team (FTT) program has worked to identify individuals with significant mental health conditions (generally Axis I disorders that constitute SMI) being released from incarceration and to provide re-entry and transition coordination.
The FTT is being empirically evaluated to potentially broaden the range of evidence-based re-entry services that minimize community costs and maximize public investment for improved outcomes for individuals with SMI (Wolff 2005). In addition to being statewide, the FTT is unique because it is voluntary rather than mandated as a condition of correctional release via probation and/or parole, and because it has a community treatment vision rather than the correctional goals of care and custody. FTT coordinators from each region/catchment area of the state identify and prepare DMH-service-authorized prisoners with SMI for the transition from county houses of correction or state prisons to community mental health services. Program goals include reducing recidivism and enhancing community safety via three core functions: (1) client "meet and greet" prior to release; (2) tracking and documentation; and (3) advocacy. FTT coordinators meet and greet all pending releases whom correctional facilities identify as needing a DMH level of services in order to begin the DMH service authorization process. Eligibility for DMH services is based on clinical diagnosis, need (level of dysfunction), and history (duration), including chronic disability leading to impaired functioning for a year or longer. After authorization for DMH services is determined, FTT coordinators begin tracking and documentation, gathering information on clients who are within three months of release. This information enables FTT coordinators to create release plans with community providers and to assess social service benefits and needs. As needed, FTT coordinators advocate for clients by attending meetings, sharing information, and tracking clients' linkages and progress for three months post-release. In addition to FTT coordination, FTT clients being released from correctional facilities receive standard correctional re-entry programming (see below).
By systematically describing our original study design and the barriers we encountered, this report has the potential to create a blueprint for "best practices" in acquiring and merging components of administrative databases. Our goals are two-fold: (1) to delineate the obstacles and pitfalls of relying on administrative data and electronic health records; and (2) to inform future evaluations requiring administrative data and agency collaborations, so that programs serving vulnerable populations (such as those with SMI) can be rigorously evaluated using existing data without imposing a significant burden on the clients they serve.
Design
Our ultimate evaluation aims are: (1) to compare post-incarceration outcomes (re-arrest, re-incarceration, and problematic substance use) of FTT clients with those of other prisoners who were receiving correctional-based mental health services at the time of release but were ineligible for the FTT program, and to compare the costs and benefits of the FTT program; and (2) to use multivariate analytic techniques to determine factors that may affect disparities in post-incarceration outcomes by demographic factors, housing status, substance abuse, and age of participants in the FTT and comparison groups after controlling for most recent governing offense and geographic region. The basic design of the study is longitudinal and retrospective and involves merging existing secondary data for the years 2007–2011. Figure 1 below delineates the study design.
Figure 1.
Basic Research Design Time Line July 2007-September 2011
The Dataset
The secondary data merged for the study dataset come from state agencies with which FTT clients have significant contact. These agencies include the Department of Mental Health (DMH), the Department of Correction (DOC), two large county Houses of Correction (HOCs), the Department of Public Health - Bureau of Substance Abuse Services (DPH-BSAS), which provides substance abuse service information, and publicly available data from the Board of Probation (BOP) Criminal Offender Record Information (CORI) system, the state's official repository of court arraignment data.
With an inter-agency memorandum of understanding (MOU), we collaboratively created a study variable list (see Appendix A) and compiled our dataset. Creating the MOU across the study team and participating state agencies was daunting, but it was eased, at least in part, by counsel from the Department of Correction, which shepherded the process in collaboration with the principal investigator. The MOU functioned as an interagency agreement that provided a statement of cooperation as well as formal recognition of all the partners. Although it does not offer the same legal assurances or protections as a contract, it clarifies the roles and responsibilities of each agency and the university. Components of the MOU are fairly standardized, as it is a legal document with signatories from each agency (Table 1). The MOU resulted in our ability to create the dataset, including both case/control and outcome data from several state agencies, for analysis.
Table 1.
MOU Components
| Component | Description |
|---|---|
| Parties | All partnering agencies and institutions |
| Purpose | Cooperative effort to achieve data sharing and legal compliance |
| Duration/Termination | In effect until project completion |
| Terms and Conditions | Applicable laws |
| Effective Date | Upon signatures |
| Modifications/Amendments | Opportunity for revision in writing |
| Severability | Consequences if parts of MOU invalid |
| Notice to all Parties | Contact persons for all partnering agencies and institutions |
| Execution | Signatures of leads of all partnering agencies and institutions |
| Project Summary | Methods and steps |
| Approvals | All IRB letters and proposals |
| Investigators | Listed investigators and biosketches |
| Letters of Support | From original grant application |
The case/comparison dataset includes all open mental health cases released from all 16 state prisons and 2 county houses of correction between July 2007 and July 2009. FTT transition "cases" were identified within this dataset using DMH databases. Individuals with mental disorders who re-entered the community after incarceration but were not eligible for the FTT program form the "comparison" group. The analytic dataset includes the case/comparison dataset and outcome data. Outcome data on substance abuse encounters captured after each individual's most recent release (DPH-BSAS) were added quarterly. Rearrest, reconviction, and reincarceration information was collected from the BOP and the DOC's Inmate Management System through June 2011.
Outcome data were compiled by the relevant agency for each data field, using client address, date of birth, gender, and name, and returned to the research technician located at the DOC. For example, detoxification admissions are compiled quarterly for all individuals in the dataset by DPH-BSAS. Prior to returning the dataset to the DOC, DPH-BSAS strips the analytic outcome dataset of identifiers, leaving only client study IDs, to ensure the confidentiality of protected health information (PHI). Similarly, rearrest and reincarceration data are collected quarterly by the research technician located at the DOC. Each quarter the dataset and outcomes are merged, stripped of identifiers, and sent to the research team for analysis.1
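To make this workflow concrete, the sketch below (a minimal Python illustration, not the agencies' actual procedure) joins a hypothetical quarterly DPH-BSAS extract to the study roster on identifiers and then strips the identifiers so that only study IDs and outcome fields leave the agency. All column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical quarterly step: join an agency outcome extract to the study
# roster on identifiers, then drop the identifiers so only the study ID and
# outcome fields are returned to the DOC research technician.
roster = pd.DataFrame({
    "study_id": ["S001", "S002"],            # study IDs assigned at the DOC
    "name": ["DOE, JOHN", "ROE, JANE"],
    "dob": ["1975-03-02", "1981-11-17"],
})
detox = pd.DataFrame({                        # e.g., DPH-BSAS detox admissions
    "name": ["DOE, JOHN"],
    "dob": ["1975-03-02"],
    "detox_admit_month": ["2008-04"],         # month/year only, per PHI rules
})

merged = roster.merge(detox, on=["name", "dob"], how="left")
deidentified = merged.drop(columns=["name", "dob"])   # strip direct identifiers
print(deidentified)
```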
Comparative Economic Analysis
While the comparative economic portion of this analysis takes a societal perspective, the costs and benefits are restricted to outcomes related to recidivism and substance abuse. The goal of our analyses is to determine whether expenditures on the FTT program result in enough savings, in terms of reduced need for substance abuse treatment and reduced re-arrest and reconviction, to offset the FTT's costs and perhaps even realize some savings. We use the term "comparative economics" because our analysis is based on averted costs. Re-conviction is treated as a catastrophic cost because of the expense of correctional custody relative to the potential benefits of community treatment. We plan to derive the reduction in the proportion of FTT clients who were re-arrested, the reduction in the number of arrests among those re-arrested, the reduction in the proportion with serious offenses, and the time from re-entry to offense, all relative to the controls. Costs avoided due to a reduction in crimes could thus identify a benefit derived from the FTT program. We plan to conduct a similar analysis for detoxification admissions and substance abuse treatment.
Components of costs include resource costs and program costs. The resource costs are those incurred through resource use associated with the most recent offense. These include the costs of detoxification days as well as costs accrued within the criminal justice system, such as jail, prison, probation, parole, arrests, and convictions. For each offense, we will use marginal operating costs rather than averages to estimate resource costs, since some criminal justice system costs, such as "nights incarcerated," are largely fixed. Program costs include those associated with implementing the FTT program and are derived from DMH. There are, of course, significant social costs associated with rearrest, including costs to multiple systems, to victims of crime, and to the perpetrators themselves. Capturing these costs was, however, beyond the scope of this project.
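The averted-cost logic can be illustrated with a minimal arithmetic sketch. All unit costs and counts below are hypothetical placeholders; the study derives actual marginal operating costs from agency budgets.

```python
# Hypothetical illustration of averted-cost arithmetic: marginal (not average)
# unit costs are applied to the difference in event counts between groups, and
# the result is compared with program expenditures.
marginal_cost_per_rearrest = 1500.0   # hypothetical dollars per arrest event
marginal_cost_per_detox_day = 250.0   # hypothetical dollars per detox day
rearrests_avoided = 12                # hypothetical comparison-minus-FTT difference
detox_days_avoided = 40               # hypothetical difference in detox days
ftt_program_cost = 30000.0            # hypothetical program expenditure

averted = (rearrests_avoided * marginal_cost_per_rearrest
           + detox_days_avoided * marginal_cost_per_detox_day)
net_benefit = averted - ftt_program_cost
print(f"Averted costs: ${averted:,.0f}; net benefit: ${net_benefit:,.0f}")
```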
Design Barriers
As with primary data collection, using secondary data has inherent limitations, including data quality and variable viability, that is, whether the agencies themselves collect the targeted variable in a way that is valid for evaluation. Below we delineate the barriers we encountered in constructing the case/control dataset, merging the outcome data for the analytic dataset, and collecting cost information.
FTT Case Group
Originally we based our estimates of the case group on the comprehensive FTT dataset containing 10 years of FTT transition data from all DOC and HOC facilities across the state. We selected the study's specific time frame to best project outcome data onto the retrospective dataset. Additionally, to avoid the complexity of expanding the MOU to all 17 HOCs, each of which has its own administration led by a county sheriff, we selected the two busiest county HOCs and all DOC facilities, since the DOC has a centralized research unit that acts as a repository for data on all prisoners statewide. Interestingly, even with our retrospective quasi-experimental design, our project shed light on the issue of varying expectations regarding data entry by region, agency, and institution.
For the FTT, for instance, clinicians were expected to enter standard data for each FTT case. However, the FTT database was developed early in the program's history and had grown cumbersome due to a variety of issues that arose over the years. Because data entry was not highlighted as a primary function of the FTT coordinators, the expectation for its completion was not consistently monitored or prioritized as a mandated function (i.e., built into clinicians' expected duties) across all sites. In some scenarios this resulted in patterns of missing data. Additionally, the FTT database, by design, did not collect the last four digits of clients' social security numbers. Therefore, to make automated matches, we had to rely largely on client name, date of birth, and address. In many cases, matches could not be made due to spelling or prefix issues related to names. And although we created lists of potential aliases to match on (a necessary step when working with criminal justice data), the lack of concordance among names and aliases made identification matches extremely difficult.
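As an illustration of the matching difficulty, the sketch below compares a correctional record name against a DMH client name and a list of known aliases using a simple string-similarity ratio. This is a simplified stand-in for the study's actual matching procedure; the names, aliases, and threshold are hypothetical.

```python
from difflib import SequenceMatcher

# Illustrative alias-aware name comparison: accept a match when the best
# similarity ratio against the client name or any known alias clears a
# threshold. The threshold and names below are hypothetical.
def best_name_match(candidate, name, aliases, threshold=0.9):
    scores = [SequenceMatcher(None, candidate.lower(), other.lower()).ratio()
              for other in [name] + list(aliases)]
    best = max(scores)
    return best >= threshold, best

matched, score = best_name_match("SMITH, JON A.", "Smith, John", ["Smith, Jonathan"])
print(matched, round(score, 2))
```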
Prior to the initiation of this study there had been a growing recognition within DMH of the complexities and challenges involved in having staff enter and maintain the FTT database designed at the outset of the program. During the course of the study, a transition was made to a larger, more robust database used in other parts of the DMH system. However, this database did not have a field indicating FTT services until Fall 2010. Thus, moving forward, the obstacles we encountered in obtaining FTT data may no longer be at issue given this system upgrade. For the study design and period at hand, however, these challenges, including data cleaning and monitoring being a lower priority than client clinical care and coordination, hampered our efforts. Importantly, during the course of the study DMH confronted the difficulties with the database infrastructure, resource issues, and staffing shortages.
Department of Correction (DOC) Open Mental Health Comparison Group
The DOC was tasked with providing data on all "open mental health cases" (i.e., persons who had received mental health services while incarcerated or were at least known to administrators as carrying a diagnosis of SMI) released from all DOC facilities from July 2007 through July 2009. Individuals identified for this evaluation were released from DOC facilities, having served sentences averaging four years or longer for felony offenses. Prior to release, DOC prisoners received a standard set of re-entry services, including re-entry workshops and presentations. The data on these individuals included socio-demographic, mental health, and criminal history/governing offense variables. Interestingly, extracting this dataset was fairly straightforward. However, when we ran frequencies across the variables we found an extraordinary number of cases with missing diagnoses. Although we understood at the outset that DOC contracts out all health and mental health services to vendors, and that DOC changed mental health service providers in mid-2007, we did not expect data entry to vary by DOC facility. The data also revealed patterns of missing diagnostic data from particular institutions; in one case, hard copies of diagnostic data were shredded after entry into the previous vendor's database. Ultimately, we hired staff from the new vendor to attempt to identify the missing diagnoses. While the new vendor is currently in the process of creating automated data entry forms with drop-down menus, historically, contracted vendor clinicians within facilities would record inmate diagnoses from across the entire Diagnostic and Statistical Manual, and whatever was recorded was entered into the vendor's database as a diagnosis. Thus, we ended up with over 50 pages of diagnostic codes and a substantial recoding job.
Two Houses of Correction (HOC) Open Mental Health Comparison Group
Similar to the DOC database, two county houses of correction (HOCs) provided data on all open mental health cases released from July 2007 through July 2009. We have since learned that by limiting the number of counties to two, we substantially reduced study complexities, but also, ultimately, our FTT case group size. This limitation was exemplified by the fact that each county, and the FTT staff in each county, had its own data entry procedures and issues. Therefore, although the selected HOCs had the largest numbers of inmates, a large inmate population did not necessarily translate into standardized data entry procedures. In fact, we learned that FTT data entry for one of the HOCs did not regularly occur, in part because in that particular region many of the re-entry efforts for DMH-eligible clients were undertaken by routine DMH case management services rather than FTT staff, and in part because, similar to the DOC experience with diagnoses, that region's staff kept data via a different mechanism. Therefore, for that HOC we were unable to make any FTT case matches using the DMH database and had to rely on other methods (described below). Here again, as with the DOC group, the HOC open mental health group subsumes the FTT HOC clients, who are part of the FTT case group; the remaining HOC cases function as the comparison group, as they were identified as having a mental disorder while incarcerated but were not found eligible for DMH/FTT services. Individuals released from HOC facilities were sentenced for misdemeanors and served sentences averaging 2.5 years or less. Prior to release, HOC inmates are informed of a variety of community programs based on their individual needs (e.g., HIV status, veteran status, and gender) to ease their transition back to the community. Ultimately, due to the narrow criteria for qualifying for FTT services and the case-finding issues mentioned above, the sample size of the comparison group in each county is much larger than that of the FTT case group.
Case/Control Data Merge-The “Identifier” Problem
Originally the research team planned to merge the case/comparison data itself, but because the data included protected health information (PHI) from corrections and DMH (the FTT case group being DMH clients), we were unable to do so. The "identifier" problem pervasive to this project derives from the fact that multiple agencies use different identifiers and have different standards for privacy. Criminal justice agencies, such as the DOC, enforce security with respect to identified data; they do not "give it out freely," but have specific standards in place for sharing it with researchers. These data are not afforded the level of protection given to data from DMH and DPH, each of whose data would be considered PHI under HIPAA. Indeed, even if a minimally identified DMH dataset is shared with another state agency, the mere fact that the data originate from DMH is ipso facto evidence that the persons included in the dataset have serious psychiatric illnesses. This leads to a second problem, which is that PHI status in any given dataset is conferred on any other dataset with which it is merged. For example, the fact that an individual has been in the custody of the DOC is not PHI. However, if that dataset includes open mental health status or is merged with DMH data, the elements within the resulting dataset become PHI. Thus, if a full DOC release date (i.e., day, month, and year) of a DMH client is included in the dataset, it is presumed that the specificity of that date may be sufficient to identify an individual, in much the same way that a full date of health service use would be considered PHI under HIPAA. These restrictions had a direct effect on our analytic work. In examining FTT versus non-FTT differences in rearrest, for example, we were forced to use month instead of day of rearrest, which resulted in a degree of imprecision.
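A minimal sketch of the kind of date coarsening these rules forced on us is shown below: full event dates are reduced to month and year before a dataset containing DMH-derived rows is shared. Column names are hypothetical.

```python
import pandas as pd

# Illustrative PHI coarsening: keep only month/year of an event date before a
# dataset that includes DMH-derived rows leaves the agency.
events = pd.DataFrame({
    "study_id": ["S001", "S002"],
    "rearrest_date": pd.to_datetime(["2009-02-14", "2010-07-03"]),
})
events["rearrest_month"] = events["rearrest_date"].dt.to_period("M").astype(str)
events = events.drop(columns=["rearrest_date"])   # retain only the coarsened field
print(events)
```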
Thus, we created a process, agreed to by the MOU partners and adherent to each agency's Institutional Review Board regulations, that required passing a dataset of all releases from the DOC and HOCs from July 2007 through July 2009 to DMH for FTT case identification. This dataset included 6,276 disaggregated cases, because individuals could have been released more than once over the study period. Instead of trying to make matches on 2,263 open mental health cases, the analyst from DMH filtered all 6,276 cases through the DMH/FTT database to find hits between July 2007 and July 2009. Once this process was complete, the dataset was returned to the DOC research unit, where the research assistant expunged all the non-open mental health cases; only one expunged case had been identified as receiving FTT services without being an open mental health case while incarcerated. A final step was the creation of client study identifiers (IDs) for the case/comparison dataset.
Outcome Data
Once the case/comparison dataset was constructed, we began the process of merging outcome measures into the analytic dataset, including (a) re-arrest and reincarceration data from BOP records, and (b) substance abuse/relapse data from DPH-BSAS.2
Data on Re-arrest
In Massachusetts, all court arraignment data are maintained in the Criminal Offender Record Information (CORI) system, maintained by the Board of Probation (BOP). These data include demographics, charge data, arraignment dates, and sentence dispositions. Researchers can access these data by making a formal request to what was formerly the Criminal History Systems Board (CHSB) and is now the Department of Criminal Justice Information Services (CJIS). Although the research team was trained by the CHSB to access these data, we were ultimately unable to use CHSB records for the study because, again, our study includes protected health information (PHI). To match CORI information to the case/control dataset prospectively, names and other identifiers must be used, and, because of the mental health focus of the study, clients' names become protected health information. To overcome this barrier we placed a trained, CORI-cleared research assistant at the DOC. The DOC uses current Board of Probation records (BOPs) and its Inmate Management System to collect recidivism data. Although we cannot document complete dates of events due to the confidentiality issues described above, we were able to document the month and year of criminal activity, allowing us to gain a perspective on post-release arrests, the charges associated with each arrest, and the case disposition in the courts resulting in reincarceration. Since re-arrest and reincarceration are key outcomes, we are attempting to identify and isolate variables measuring the frequency, severity, and variety of re-arrests and the time interval to subsequent re-arrest/reincarceration. Again, reincarceration is considered the catastrophic outcome variable in this study in terms of costs and program efficacy measures. For each arrest or reincarceration, we will determine frequency, variety, severity, and timing since release. Inspection of the hazard function, which describes the relationship between time at risk and the probability of failure (in this case, reincarceration), is useful in identifying critical junctures post-release, when individuals may be most at risk for re-arrest.
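The sketch below illustrates, with hypothetical month-level records, how recidivism measures of this kind (number of re-arrests, variety of charges, and months from release to first re-arrest) can be derived once arraignment data are linked to release dates. It is a simplified Python illustration, not the study's actual extraction code.

```python
import pandas as pd

# Illustrative derivation of recidivism measures from month-level records.
arrests = pd.DataFrame({
    "study_id":     ["S001", "S001", "S002"],
    "arrest_month": pd.to_datetime(["2008-03", "2009-01", "2008-11"]),
    "charge_type":  ["property", "assault", "drug"],
})
releases = pd.DataFrame({
    "study_id":      ["S001", "S002"],
    "release_month": pd.to_datetime(["2007-09", "2008-05"]),
})

df = arrests.merge(releases, on="study_id")
df["months_to_event"] = ((df.arrest_month.dt.year - df.release_month.dt.year) * 12
                         + (df.arrest_month.dt.month - df.release_month.dt.month))
summary = df.groupby("study_id").agg(
    n_rearrests=("arrest_month", "size"),          # frequency
    charge_variety=("charge_type", "nunique"),     # variety
    months_to_first=("months_to_event", "min"),    # timing
)
print(summary)
```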
Detoxification/Substance Abuse Data
The Department of Public Health (DPH) Bureau of Substance Abuse Services (BSAS) oversees substance abuse prevention and treatment services in the Commonwealth of Massachusetts. DPH/BSAS licenses programs and counselors and also provides funding for treatment services, including detoxification for the indigent and uninsured.3 BSAS collects encounter-level data for licensed and contracted treatment services, including payments to BSAS-contracted providers. For this evaluation, again due to issues with PHI, we were able to leverage the MOU so that BSAS could receive a one-time, hand-delivered list of the case/comparison dataset against which to upload outcomes quarterly. All outcome data from BSAS were pulled in-house and stripped of identifiers before being returned to the DOC research technician. As a measure of substance abuse relapse, BSAS provides data on admissions to detoxification facilities from July 2007 through June 2011. BSAS also provides non-acute treatment-related information. However, the data provided by BSAS do not include admissions to the emergency room (ER), which is a significant point of re-entry to the healthcare system for relapse. Given that the descriptive analysis of the case/comparison dataset indicates a high prevalence of dual diagnosis (70% of the open mental health cases and 80% of FTT clients have co-occurring substance use disorders), we can reasonably interpret detoxification admissions as indicating relapse. The relationships among substance abuse treatment services, the type of service and drug used, and increased or decreased trajectories toward re-arrest and reincarceration are being studied.
Comparative Economic Analysis Issues
Finally, although our design captures costs on the major outcome variables, we are missing data on some potentially costly events, including ER visits, psychiatric hospital stays, and shelter use.4 Here again, it was not feasible to partner with all of the subsidiary agencies, nor was it expected that all would agree to participate. Indeed, with each new agency partner, Institutional Review Board issues emerged related to client confidentiality, and we were forced to limit the scope of our study. We have also found that agency cost estimates are complicated in that agencies generally produce gross estimates that include institutional overhead, reflecting aggregate staffing costs rather than per-client costs. Ultimately we have been left to estimate client costs by case as a function of service type, offense category, and sentence length. There is little doubt, however, that this process and its associated limitations have caused us to miss potentially important pieces of the overall cost picture.
Methods to Overcome Design Barriers
We have begun to employ a series of statistical methods to address our original study design barriers. Quasi-experimental designs have inherent selection biases, and participant selection bias is unavoidable, particularly in mental health and criminal justice programs. The FTT program, for instance, exhibits selection bias because participants must be DMH eligible to partake in FTT services. FTT clients essentially pass through several selection filters: (a) administrative; (b) selection based on severity and duration of mental illness; (c) self-selection; and (d) regional differences in FTT utilization versus utilization of routine DMH services. Thus, individuals receiving FTT services may differ from other "open mental health cases" with respect to motivation, mental health status, and perceived service needs. Furthermore, the DMH service authorization process selects the most severely mentally ill based on diagnosis, duration, and disability. Given our early findings regarding the simple yet significant differences between the open mental health comparison group and the FTT case group before matching, selection issues leading to descriptive baseline differences are apparent (see, for example, Table 2). Therefore, for our case/comparison analysis we attempt to control selection through bivariate and multivariate methods, including propensity score matching.
Table 2.
Basic Characteristics of the FTT Group and Comparison Group before Propensity Score Matching
| | FTT | Comparison group | Chi-Square Sig. |
|---|---|---|---|
| Gender | | | |
| Male | 66.7% (92) | 64.4% (1,380) | .59 |
| Female | 33.3% (46) | 35.6% (762) | |
| Total | N=138 | N=2,142 | |
| Race | | | |
| White | 59.4% (82) | 59.1% (1,266) | .31 |
| African American | 23.9% (33) | 18.3% (391) | |
| Hispanic | 15.9% (22) | 21.7% (465) | |
| Native American | 0.0% (0) | 0.4% (8) | |
| Other | 0.7% (1) | 0.6% (12) | |
| Total | N=138 | N=2,142 | |
| Education | | | |
| Less than high school | 56.7% (68) | 41.8% (786) | .017* |
| High school/GED | 39.2% (47) | 52.0% (978) | |
| Higher education | 4.2% (5) | 6.1% (114) | |
| Missing | | 0.1% (1) | |
| Total | N=120 | N=1,879 | |
| Marital Status | | | |
| Never married | 72.3% (99) | 67.3% (1,413) | .016* |
| Married | 8.8% (12) | 13.3% (279) | |
| Separated | 3.6% (5) | 5.0% (104) | |
| Divorced | 11.7% (16) | 13.7% (288) | |
| Widowed | 0.7% (4) | 0.6% (23) | |
| Missing | 0.7% (1) | 0.2% (4) | |
| Total | N=137 | N=2,101 | |
| Institutional Security Level Released to Street | | | |
| Low-level security | 15.2% (21) | 27.7% (594) | .003** |
| Mid/max level | 84.1% (116) | 72.1% (1,543) | |
| Missing | 0.7% (1) | 0.2% (4) | |
| Total | N=138 | N=2,141 | |
| Primary Diagnosis (Coded) | | | |
| Thought | 44.7% (51) | 11.8% (232) | .000*** |
| Mood | 48.2% (55) | 75.8% (1,486) | |
| Personality | 4.4% (5) | 1.9% (38) | |
| Other | 2.6% (3) | 10.0% (196) | |
| Missing | 0.0% (0) | 0.5% (9) | |
| Total | N=114 | N=1,961 | |
| Governing Offenses | | | |
| Public order | 13% (18) | 8.2% (176) | .000*** |
| Property | 17.4% (24) | 18.3% (392) | |
| Assault/robbery | 42% (58) | 35.1% (752) | |
| Sex assault | 7% (10) | 2.6% (56) | |
| Drug offense | 13% (18) | 25.5% (547) | |
| Murder/manslaughter | 2.2% (3) | 2.1% (46) | |
| Other | 4.3% (6) | 7.9% (170) | |
| Missing | 0.7% (1) | 0.1% (2) | |
| Total | N=138 | N=2,141 | |
Table 2 describes the basic characteristics of the FTT participants and the subjects in the comparison group before we conducted propensity score matching to create case/control groups for analysis. There are significant differences between the two groups. The FTT group is more likely to have (a) lower levels of education (p=.017); (b) never been married (72.3%, p=.016); and (c) a thought disorder (44.7% versus 11.8% of the comparison group), whereas the comparison group is more likely to have a mood disorder (75.8%). The FTT group is also more likely to be (d) released from medium or maximum custody (84% versus 72%, p=.003) and (e) to have a governing offense of assault/robbery (42% versus 35% in the comparison group, p=.000). Other variables, such as homelessness and employment history, also show significant differences; however, we did not include them in Table 2 or in the matching sample because of the large number of missing cases on these variables.
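For readers who wish to reproduce the bivariate comparisons, the sketch below runs a chi-square test of independence of the kind reported in Table 2, using the diagnosis counts from the table (scipy is one of several tools that could be used for this).

```python
from scipy.stats import chi2_contingency

# Chi-square test for primary diagnosis category, using the FTT and comparison
# counts shown in Table 2 (thought, mood, personality, other).
table = [
    [51, 232],     # thought disorder: FTT, comparison
    [55, 1486],    # mood disorder
    [5, 38],       # personality disorder
    [3, 196],      # other
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4g}")
```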
Nonetheless, since the FTT and comparison groups have significant overlap in their overall experiences with correctional, mental health, and public health agencies, propensity score matching is an appropriate method to create a more precise case/"control" comparison. Propensity score matching usually requires a large sample size, but as mentioned above our case group is much smaller than the comparison group. To address this issue, we elected to fold clients identified as receiving FTT services before 2007 into the FTT case group. We did some preliminary work to create propensity scores, but found that available methods are largely software dependent. For instance, Stata has robust matching methods, while SAS offers only one or two (e.g., the nearest neighbor method, which matches cases one by one based on scores and then tests whether there are differences between the groups and their effects on the outcome variables). Various other matching methods are available. Ultimately, we adopted R (statistical software) to run the propensity score matches. R uses the Mahalanobis distance matching method, which is the most common matching method (Sekhon 2008). The following is the distance formula adopted in R's matching algorithm for two vectors:
$$ md(X_i, X_j) = \left\{ (X_i - X_j)' \, S^{-1} \, (X_i - X_j) \right\}^{1/2} $$

where $S$ is the sample covariance matrix of $X$. The algorithm estimates the average treatment effect by matching with replacement: each subject in the case group is matched to the M closest control units in the comparison group, as defined by this distance measure (Sekhon 2008). Simply put, the shorter the distance between two subjects, the more similar they are. The goal is to match the two groups on similar characteristics in terms of observed covariates (or distance) in an optimal manner. Ultimately, we will use the matched groups for the case/control comparison.
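The general logic is sketched below in Python for illustration only (the study itself used the R Matching package); the simulated data, covariate names, and sample sizes are hypothetical. A propensity score is estimated with logistic regression, and each case is then matched, with replacement, to the nearest comparison subject under the Mahalanobis distance defined above.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 50 FTT cases and 450 comparison subjects, a few covariates.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ftt": np.r_[np.ones(50), np.zeros(450)].astype(int),
    "age": rng.normal(35, 10, n),
    "prior_arrests": rng.poisson(3, n),
    "thought_disorder": rng.integers(0, 2, n),
})
X = df[["age", "prior_arrests", "thought_disorder"]].to_numpy(float)

# 1. Propensity score from a logistic regression of FTT status on covariates.
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["ftt"]).predict_proba(X)[:, 1]

# 2. Mahalanobis distance over the covariates plus the propensity score,
#    d(x_i, x_j) = sqrt((x_i - x_j)' S^-1 (x_i - x_j)), S = sample covariance.
XM = np.column_stack([X, df["pscore"].to_numpy()])
S_inv = np.linalg.inv(np.cov(XM, rowvar=False))

def mahalanobis(a, b):
    diff = a - b
    return float(np.sqrt(diff @ S_inv @ diff))

# 3. Match each FTT case, with replacement, to its closest comparison subject.
case_idx = df.index[df["ftt"] == 1]
ctrl_idx = df.index[df["ftt"] == 0]
matches = {
    int(i): int(ctrl_idx[np.argmin([mahalanobis(XM[i], XM[j]) for j in ctrl_idx])])
    for i in case_idx
}
print(list(matches.items())[:5])
```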
Missing values represent another issue affecting propensity score matching. Theoretically important variables that reflect the underlying causal processes should be included in the logistic regression used to produce a composite propensity score for subsequent matching. However, some of our independent variables have missing values that would reduce the statistical power of our model and introduce bias into the resulting sample under a listwise deletion protocol. Here, we balance the inclusion of critical variables against maintaining sufficient statistical power. We will also use imputation methods to derive appropriate values for missing data. Before deciding on the exact imputation methods, it is necessary to examine whether the missing values are patterned, random, or systematic (Allison 2001). For instance, we have found diagnosis to be a particularly problematic variable. Our analyses indicate that individuals can have multiple diagnoses across the life course (one case had 35) and that diagnoses can change over time with respect to their clinical features. One solution we adopted was to recode diagnoses into the major symptom categories of thought, mood, and personality disorders, which may be relevant to specific mental health related community treatment and re-entry issues. We also worked with the DOC mental health vendor to review cases with missing or ambiguous diagnoses. Propensity score methods will also be used to measure similarity by the probability of observations being observed or missing (Rosenbaum and Rubin 1983). The underlying logic is that, since we can calculate each subject's propensity score regardless of whether any values are missing, subjects with missing values can be estimated to have propensity scores close to those of subjects with no missing values. This is especially true when multiple methods are used (see Yan 2005).
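The diagnosis recoding step can be illustrated with the simplified sketch below. The keyword map is a hypothetical stand-in for the clinical review actually conducted with the DOC mental health vendor; real recoding requires clinician judgment.

```python
import pandas as pd

# Illustrative recode of free-text diagnoses into broad symptom categories
# (thought, mood, personality, other). Keyword rules are hypothetical.
def recode_diagnosis(text):
    t = str(text).lower()
    if any(k in t for k in ("schizo", "psychot", "delusional")):
        return "thought"
    if any(k in t for k in ("bipolar", "depress", "mood")):
        return "mood"
    if "personality" in t:
        return "personality"
    return "other" if t.strip() else None   # blank entries stay missing

dx = pd.Series(["Schizoaffective disorder", "Major depressive disorder",
                "Borderline personality disorder", "PTSD", ""])
print(dx.map(recode_diagnosis).tolist())
```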
Because of these data issues, we have had to think creatively about how best to address the disproportionate sizes of our case and comparison groups, folding clients who were identified as receiving FTT services outside the study period into the current 2007–2009 FTT case group. Additionally, we will carve out a portion of the larger comparison group to serve as the control group based on the propensity score derived from a logistic regression. Subjects falling more than three standard deviations from the mean score, in either tail of the distribution, can be dropped as a guide for restricting the pool prior to matching. Additionally, we have a subgroup of individuals with a history of receiving DMH services and another group with a history of being determined eligible for DMH services without having received them; neither group received FTT services, perhaps because they refused the voluntary services or had their service needs met elsewhere. These groups potentially offer other interesting comparisons.
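A minimal sketch of the three-standard-deviation trimming rule is shown below; the simulated score distribution is hypothetical.

```python
import numpy as np

# Illustrative trimming: drop subjects whose propensity scores fall more than
# three standard deviations from the mean score before matching.
rng = np.random.default_rng(1)
pscores = rng.beta(2, 20, size=2000)      # hypothetical score distribution

mean, sd = pscores.mean(), pscores.std()
keep = (pscores > mean - 3 * sd) & (pscores < mean + 3 * sd)
print(f"Kept {keep.sum()} of {len(pscores)} subjects")
```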
We treat outcome data comparing time to re-arrest and the other outcomes (reincarceration and detoxification admission) as "censored" data, because not all subjects will experience an event by the end of the observation period, and subjects released later in the period have a shorter time at risk within our observation window. Analysis of censored data typically draws on a family of techniques including survival analysis and its multivariate extension, Cox proportional hazards regression. Survival analysis ("survival" here referring to time from release to a terminal event of interest, e.g., rearrest, reincarceration, or relapse) is widely used because of its simplicity and its few assumptions about the data distribution. We will run survival analyses comparing the FTT group and the subset of the control group with high propensity scores to estimate the timing of new arrests and other events by type of offense and diagnosis where feasible. This information is helpful in predicting the likelihood of an outcome event for any period of time.
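A minimal sketch of this censored-data approach is shown below, fitting a Kaplan-Meier curve for the FTT group and a Cox proportional hazards model adjusting for FTT status and diagnosis. The toy data and covariates are hypothetical, and the Python lifelines package is used purely for illustration; it is only one of several suitable tools.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical censored data: months from release to re-arrest, with subjects
# not re-arrested by the end of observation treated as censored (event = 0).
df = pd.DataFrame({
    "months_at_risk": [6, 14, 24, 9, 30, 24, 3, 18, 12, 7, 20, 24],
    "rearrested":     [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0],
    "ftt":            [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "thought_dx":     [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

# Kaplan-Meier curve for the FTT group.
kmf = KaplanMeierFitter()
kmf.fit(df.loc[df.ftt == 1, "months_at_risk"],
        event_observed=df.loc[df.ftt == 1, "rearrested"], label="FTT")

# Cox proportional hazards model adjusting for FTT status and diagnosis.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_at_risk", event_col="rearrested")
cph.print_summary()
```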
Finally, we are currently grafting a "cost field" onto our dataset that includes costs since release related to accessing substance abuse services and coming into contact with the criminal justice system. A cautionary note is in order here: accessing substance abuse services can be seen as a marker of relapse and of accruing costs, but it may also signify using treatment services to support personal and community stability. In one scenario we could hypothesize that cases accruing treatment costs are less likely to recidivate, but at this point we remain unsure.
Lessons Learned
From the outset, it was apparent that the ideal study design, in which administrative data would be merged by the research team, was not the preferred method of our partnering agencies. While agencies endorsed the goals of the study and initially agreed to participate, when deliverables such as letters of support were requested and MOUs drafted, levels of participation varied. This was reflected in the commitment of agency resources, including the time and availability of staff, and in debates about target service populations. Primarily, however, it was a function of concerns about client confidentiality and PHI. Some of the obstacles we describe above could have been avoided with better planning and preparation; others, however, are inherent in this type of research. We believe there is value in disclosing both, as well as the strategies, successful and unsuccessful, that we implemented to overcome them.
Inherent Obstacles
Research that includes data on vulnerable populations, particularly health and criminal justice data, is appropriately subject to rigorous Institutional Review Board standards. In working with multiple agencies with various organizational characteristics and multiple Institutional Review Boards (IRBs), we found that each agency has its own forms, procedures, expectations, and standards designed to protect its particular population. We attempted to overcome this obstacle by requesting that agencies agree to a "lead IRB." In fact, we learned that it was not possible to designate a lead IRB or otherwise work around the multiple-IRB issue, and we therefore had to push back the study timeline to accommodate the time required to wade through the separate requirements of multiple agencies. After submitting upwards of seven IRB applications, we then found that each agency required different changes, which in some cases meant that amendments had to be made to protocols already approved by other agencies. To address this issue, an interagency group was convened and held conference calls with all the relevant parties to hash out the details of the IRB processes and thereby centralize them. Thus, another lesson from our study is that multiple IRBs can unintentionally alter a study design, and investigators need to attend to this.
In Massachusetts, the merger of data from agencies where PHI is an issue with data from agencies where it is not was further compounded by the fact that substance abuse and mental health services are provided by separate agencies (i.e., DPH and DMH), requiring negotiations with two human subjects committees, each of which maintains its own formal standards and informal cultures. The informal cultures surrounding agency data and their release create an interesting problem. Mental health service use data are obviously PHI. That said, nothing in the federal statutes regarding the release of such datasets for research purposes accords them a higher level of security than data describing other forms of health care. In our experience, however, the activist mental health services consumer lobby and their allies within mental health agencies have arguably created a culture surrounding data use that would appear to go beyond what HIPAA requires. And while new regulations being proposed for HIPAA appear to hold the promise of greater efficiency, particularly with respect to multisite studies, there is little in them that is likely to alter the obstacles we encountered with respect to human subjects issues.
One possible resolution to human subjects issues involving vulnerable populations, which we adopted, was for each agency to take the steps required by its IRB to protect the confidentiality of protected health information within its facilities before the data are shared across agencies or with researchers. A standard clause across our six IRB approvals is that the dataset cannot be passed to the research team each quarter until it has been stripped of all identifiers. However, the mechanism through which the data were passed from one agency to the next varied. Some agencies received password-protected data online, while others could only receive hand-delivered encrypted discs. We explored having the data stored on a locked, secure server at the lead university, but agencies balked at the notion of storing their client data off site. Ultimately, our solution was to pass data across agencies according to each agency's preference.
Another inherent obstacle to our approach is the variation in data entry cultures across agencies. Unavoidable data alignment issues arose largely because some Massachusetts state agencies never have occasion to link their data systems in any way. This became particularly challenging for our retrospective approach. Whereas collecting data prospectively can propel and improve data quality, unanticipated retrospective data acquisitions reveal fault lines in data entry and in agency culture surrounding data entry and data use, not only by agency but by facility within agencies as well. If agencies and facilities value and use data, the data are more likely to be accessible, clean, and useful. If data entry is not viewed as an essential part of a staff member's job, and there are no trained administrative staff to facilitate it, the data will likely be problematic. Finally, even with retrospective secondary data, human resources and staffing can be an issue. In the present fiscal climate, agencies have few resources to provide back-up for staff who leave state or contracted jobs, and changes in contracted vendors can also shift how data are stored and accessed. These vacancies and natural transitions in state services leave data gaps, and data entry and cleaning priorities shift (see Table 3 for the sources of data issues). As much as possible, research teams should pilot test and examine the data they want to obtain prior to study implementation to be sure appropriate data checks and balances are in place.5 Additionally, the research design should ultimately provide information and support to each participating agency. For example, the DOC agreed to have a study research assistant/technician onsite, and this arrangement has been a boon for study logistics. It is understandable, however, that the infrastructure of some agencies is not conducive to such an arrangement due to supervisory, regulatory, and space issues.
Table 3.
List of Sources (or Causes) of Data Issues
| 1. Organizational Differences between Agencies | 2. Organizational Factors (Organizational Process) | 3. Differences between the Two Houses of Correction | 4. Individual-Level Factors |
|---|---|---|---|
Avoidable Pitfalls
When using administrative data it is important to remember that the data are typically not "research ready." With the benefit of hindsight, we would have taken the time to review each agency's available data, data entry realities, and resources, in essence "piloting" our study plan. This was not feasible in this instance given the short time available to submit our grant application and the looming question of whether IRB approvals would be necessary even to access pilot agency data. Additionally, making client matches was difficult due to aliases and misspellings of names. Our best matching algorithm included dates of birth and the last four digits of social security numbers; yet not all datasets had that information.
Additionally, methods of data extraction and the meaning of data elements can vary within and across agencies. In terms of extraction, recidivism data were pulled manually at the DOC using name and date of birth, while parole information required inmates' commitment numbers. This example demonstrates the inconsistency, even within agencies, of gathering information on a single individual, a process that is often fractured. There can also be inconsistencies across variables. For instance, the DOC only recently began consistently collecting homelessness data (between 2008 and 2009). The DOC refers to an individual as "homeless" based on their likely housing status upon release. The HOCs, on the other hand, which have collected homelessness data for a longer period of time, consider a person to be "homeless" if they were homeless upon entering the HOC. Therefore a "yes" on the homelessness variable means something different across the HOC and DOC groups and at best can be described as "a risk factor for unstable housing."
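The harmonization of such a variable can be sketched as below: both agency fields are mapped to a single unstable-housing-risk indicator that retains its source definition. All column names and codes are hypothetical.

```python
import pandas as pd

# Illustrative harmonization: DOC flags likely housing status at release, HOC
# flags homelessness at intake; both map to one indicator plus a provenance note.
doc = pd.DataFrame({"study_id": ["S001"], "homeless_at_release": ["Y"]})
hoc = pd.DataFrame({"study_id": ["S002"], "homeless_at_entry": ["N"]})

doc_h = pd.DataFrame({
    "study_id": doc.study_id,
    "unstable_housing_risk": doc.homeless_at_release.eq("Y").astype(int),
    "housing_definition": "DOC: likely status at release",
})
hoc_h = pd.DataFrame({
    "study_id": hoc.study_id,
    "unstable_housing_risk": hoc.homeless_at_entry.eq("Y").astype(int),
    "housing_definition": "HOC: homeless at intake",
})
harmonized = pd.concat([doc_h, hoc_h], ignore_index=True)
print(harmonized)
```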
These examples demonstrate just some of the checks and multiple runs that had to be conducted across the different databases, agencies, and information systems we attempted to "harmonize" in this research project. When each discrepancy was discovered, discussions were initiated between the study team and the agencies involved in order to troubleshoot the issues that arose in attempting to collaborate across systems. Because no streamlined system links the state agencies, such inconsistencies and issues were expected. The investment and attention each group gave to the evaluation enabled us to investigate and address each dilemma as it arose, which proved to be true collaboration between state agencies and highlighted the need for future system integration. Still, our goal was to integrate the data for this project smoothly, and we are now fully aware of the challenges of integrating segmented datasets designed under markedly different paradigms, purposes, and agency goals. Different government agencies have vastly different organizational characters and purposes for their data applications. Thus, different data structures, naming conventions, and data-level representations of objects are used by the various agencies in designing their datasets, even when they cover the same domain.
Finally, different stakeholders in the various agencies have their own interests, privacy concerns, and consent principles that make data integration difficult to achieve. For instance, as mentioned above, due to HIPAA, PHI, and IRB standards, our outcome data can only be represented in broad ways, including dichotomous or dummy variables or time to event by month and year, thus losing some specificity. However, our dichotomous outcome variables do indicate whether a particular outcome, such as rearrest, took place (yes or no) within a fixed time frame after release: we know whether subjects were rearrested within 6, 12, or 24 months. Our strategy is to carefully examine the nature of each variable and its level of measurement and apply appropriate data analysis techniques.
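Constructing such fixed-window indicators from month-level data can be sketched as follows; the column names and values are hypothetical, and a window is coded only when the subject either experienced the event within it or was observed for its full length.

```python
import numpy as np
import pandas as pd

# Illustrative 6/12/24-month re-arrest indicators built from month-level data.
df = pd.DataFrame({
    "study_id":           ["S001", "S002", "S003"],
    "months_observed":    [36, 20, 10],
    "months_to_rearrest": [8, np.nan, 4],   # NaN = never re-arrested
})
for window in (6, 12, 24):
    rearrested = df.months_to_rearrest <= window      # event within the window
    observed = df.months_observed >= window           # followed for full window
    df[f"rearrest_{window}m"] = np.where(rearrested, 1.0,
                                         np.where(observed, 0.0, np.nan))
print(df)
```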
There are also well-known pitfalls associated with retrospective data. Because none of the datasets was compiled for the primary purposes outlined in the proposal, data cleaning and issues such as missing data need to be addressed. The agencies we are collaborating with manage data warehouses that, largely thanks to our MOU, can be accessed, with greater or lesser degrees of difficulty, to complete missing data fields. Additionally, our outcomes are essentially restricted to recidivism and substance abuse/relapse. Here we focus on a re-entry service program for individuals with mental disorders and its impact on recidivism, with which substance abuse is highly correlated. For the next iteration of this study, we would want to examine mediators and moderators that may or may not affect recidivism, including emergency room and shelter use. Emergency room data can be requested from the Massachusetts Division of Health Care Finance and Policy; however, we learned that that dataset is cumbersome and difficult to navigate and manipulate, and we had concerns, related to our expertise and project scope, about adopting another dataset and subcontracting personnel. Additionally, statewide shelter use data were not available at the time of study design due to a change in agency management. Thus, although this information may be important for studying our target population, it is not realistic to obtain comprehensive information using this methodology. We recommend circumscribing study plans when working across multiple agencies to limit inherent complications.
Finally, there are limitations to our comparative economic analysis that are worth noting. Some important and highly relevant outcomes, such as housing supports and ER costs, cannot be evaluated with our data. In addition, costs and benefits estimated from Massachusetts state resources and service models may not be generalizable to other states. Indeed, the impact of our project, like many that use administrative data, is limited in part by a lack of generalizability. On the one hand, the outcomes of this evaluation may not be generalizable to involuntary re-entry programs or to programs with different eligibility determination processes or criteria. On the other hand, many of the challenges we encountered in harmonizing administrative datasets to evaluate outcomes of a publicly funded program, and the lessons learned in attempting to surmount these challenges, may inform similar efforts in other systems.
We suggest minimizing the avoidable pitfalls of administrative data use by: (1) piloting data collection by agency, where possible, to anticipate issues that will arise within and across agencies; (2) creating a shared variable list and data codes with partnering agencies (see Appendix A and the sketch below); (3) limiting the variables included in evaluation datasets to those that are essential and will actually be analyzed; and (4) creating a consortium of vested, cooperative, and collaborating state agencies for support. Clearly, adapting to inherent barriers and avoidable pitfalls requires agency leadership and willingness to participate. We also found our MOU helpful in resolving data issues, and interagency/research team meetings that included interim preliminary findings kept the project on track.
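To illustrate recommendation (2), the sketch below shows one way a shared variable list and its agreed codes could be made machine-checkable (Python; the variable names, agencies, and codes are hypothetical placeholders for the actual list in Appendix A).

```python
# Hypothetical shared codebook agreed with partnering agencies; fields and
# codes are illustrative only (the study's actual list is in Appendix A).
SHARED_CODEBOOK = {
    "recidivism_6m": {
        "source_agency": "DOC",
        "type": "binary",
        "codes": {0: "no re-incarceration within 6 months",
                  1: "re-incarcerated within 6 months"},
    },
    "ftt_client": {
        "source_agency": "DMH",
        "type": "binary",
        "codes": {0: "non-FTT comparison", 1: "FTT client"},
    },
}

def validate(record: dict) -> list:
    """Return the fields whose values fall outside the agreed codes."""
    return [k for k, v in record.items()
            if k in SHARED_CODEBOOK and v not in SHARED_CODEBOOK[k]["codes"]]

print(validate({"recidivism_6m": 1, "ftt_client": 2}))  # -> ['ftt_client']
```

A shared, machine-readable codebook of this kind allows each agency to check its extract against the agreed codes before transfer, catching coding discrepancies earlier than post hoc reconciliation.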
Conclusion
While our study has been challenging, it remains innovative in many respects: (1) we are conducting a rigorous scientific evaluation of a long-standing program for community re-entry of released prisoners with SMI at no burden to the participants; (2) we successfully undertook an initiative that included a cooperative agreement and MOU among diverse state agencies (DMH, DOC, HOCs, and DPH) to share data, which helped facilitate the agencies' ongoing communication beyond the content of this study; (3) we better understand our ability to use public-system administrative datasets to assess the impact of services (i.e., the FTT) for vulnerable, multi-problem populations and their use of resources across multiple systems; and (4) the evaluation has allowed us to compare and contrast the nature, timing, and patterns of re-offending and substance abuse relapse (a substantial risk factor for recidivism) for subgroups of open mental health cases released from correctional custody.
The analysis of the merged datasets provides information on the "interconnectedness" of individuals with SMI with diverse public health and public safety agencies, cross-validates findings from different stages of the re-entry process, and has prompted discussions of the importance of data accuracy for improving measurement, synthesizing knowledge, and explaining the complex processes involved in enhancing re-entry efforts for a population in great need of services. Without undertaking this study, we would not have known whether databases could feasibly be harmonized, or whether rigorous analyses could be conducted, for comprehensive programs serving multi-problem populations who utilize an array of services from distinct community agencies. By systematically documenting the barriers to and facilitators of our collaboration with various state agencies, our study is timely given the increasing emphasis on electronic health records and the importance of identifying data elements that may be required for routine data collection. Our goal is to address the emerging public mental health crisis of truncated continuity of care for individuals with severe mental illness involved in the criminal justice system in general, and exiting corrections in particular. We hope our evaluation findings will inform policies and practices that address the needs of individuals with mental disorders leaving correctional custody and will also provide an analysis of cost-effectiveness related to the FTT program, re-entry services, and the utility of data and agency collaborations for program evaluation. It is important to note that, by design, the FTT was expected to serve only individuals with SMI and DMH service authorization, a small portion of the open mental health cases in correctional facilities. Overlapping services are clearly still needed for individuals who do not meet the stringent FTT eligibility criterion, and our study will further elucidate their needs.
While several states have developed an algorithm for integrating data from multiple agencies for use in service planning (see, for example, Connecticut's CHIN program), most states have not seen the benefit of undertaking such a dauntingly complex organizational task. The ability to characterize target populations and track their interactions with various state agencies over extended periods advances evidence-based practice by linking programs to related outcomes over time, enhancing program and agency self-evaluation and coordination, articulating program needs and social service agendas to policy makers, and developing data standards for greater consistency in the structure and content of databases across state agencies (http://publichealth.uconn.edu/CHIN.php). Although such initiatives are fraught with organizational, agency, and individual challenges, awareness of and approaches to these challenges will improve data use and subsequent evaluation and study design. For our study, the challenge was to collect quality research data while meeting regulatory demands to protect individual privacy across various state agencies, and to do so without a central repository or algorithm, because the agencies preferred to manage their own data due to internal regulations and HIPAA. While this saved the research team work on the front end, it created new work on the back end to address data alignment and definition issues. When data move from one agency to another, it is critical to find a solution that ensures data quality sufficient for academic research and satisfies regulatory and organizational demands while maintaining the highest degree of privacy protection. Because our study involves both health and criminal justice agencies, data collection and management may be even more difficult than clinical data management (a field that has quickly become a major focus of bioinformatics). The challenges we faced were far more complex than we originally anticipated, but they were not insurmountable. In future studies, researchers should pay special attention to whether and how agencies record their information over time (i.e., consistent procedures around identifiers), how agencies maintain and store their records, whether data management is handled internally or outsourced, and whether such organizational changes affect data quality. Ultimately, as the study evolved, we were grateful to have had an MOU at the outset and for the tremendous investment and cooperation in this project, despite the complex fiscal and other challenges each agency faced during the study period.
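As one concrete way to monitor the identifier consistency mentioned above, the following minimal sketch (Python/pandas; the identifier scheme and column names are hypothetical) tabulates identifier formats by extract year so that a change in an agency's recording procedures becomes visible before records are linked.

```python
import pandas as pd

def identifier_drift(df: pd.DataFrame, id_col: str, year_col: str) -> pd.DataFrame:
    """Summarize the format of an identifier (letters -> 'A', digits -> '9')
    by extract year, so changes in recording procedures stand out."""
    fmt = (df[id_col].astype(str)
           .str.replace(r"[A-Za-z]", "A", regex=True)
           .str.replace(r"\d", "9", regex=True))
    return pd.crosstab(df[year_col], fmt)

# Illustrative extract; the identifier scheme shown is hypothetical.
extract = pd.DataFrame({
    "extract_year": [2007, 2007, 2010, 2010],
    "state_id": ["W12345", "W23456", "123456", "234567"],
})
print(identifier_drift(extract, "state_id", "extract_year"))
```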
Supplementary Material
Figure 2.
Data Transition Process (a)
Note: (a) Not all dataset names are spelled out due to limited space. In Figure 2, DOC is the Department of Correction, HOC is the House of Correction, DPH is the Department of Public Health, DMH is the Department of Mental Health, BSAS is the Bureau of Substance Abuse Services, and BOP refers to Board of Probation records. We already have the financial cost data from DOC and DPH and are in the process of obtaining economic cost data from DMH.
Highlights.
Report describes initiative to merge administrative databases to evaluate a public mental health program for prisoner reentry
Report systematically describes our original study design and the barriers we encountered
Report describes the cooperative agreement/MOU between state agencies to share data for the evaluation
Use of public system datasets to evaluate the impact of services for multi-problem populations and their use of resources across multiple systems
Contrasts the nature and timing of re-offending and substance abuse relapse among open mental health cases released from correctional custody
Acknowledgments
This work was funded by NIMH 1RC1MH088716-01. The investigators would like to thank Dr. Martha Lyman, Michael Lupo, and Julie White, LICSW, for their contributions and commitment to this project. We also extend heartfelt gratitude to our long-standing and excellent research assistants from UMass Boston (Julianne Siegfriedt, Paul Anskat, Phoebe Lehman, Brianna Roach, Jenn Walker, Taylor Hall, and James Wall) and UMass Medical School (Kristen Roy-Bujunowski). Thanks again to Kristen, and to Paul Benedict from DMH, for their technical assistance; to Natalya Pushkina, William Saltzman, and Ken Nelson from DOC for their support; to Andrew Hanchett, Hermik Babakhanlou-Chase, Adam Pojani, and Michael Botticcelli from DPH BSAS for their support; and to Dr. Jie Chen of UMass Boston for her continued statistical expertise.
Footnotes
This study was funded by grant # NIMH 1RC1MH088716-01
We recently received permission to obtain post-release DMH hospitalization data, which will be uploaded at DMH as another outcome of interest.
See footnote 2 regarding recently granted hospitalization data utilization.
The Massachusetts Alcohol and Substance Abuse Center (MASAC), operated by the DOC, provides detoxification and substance abuse treatment for up to 30 days for males civilly committed under MGL 123, section 35, often with outstanding criminal charges. Females in need of detoxification and treatment, with or without criminal charges, may be civilly committed to the Massachusetts Correctional Institution Framingham under MGL 123, section 35.
Although DMH released FTT status for the evaluation, it took longer for DMH to determine and accept the need for inpatient data. We were recently (December 2012) permitted to add outcome-period hospital utilization data to our analytic dataset; this will be done on location at DMH by a DMH employee.
Here again, a question arises as to the willingness of agencies to share data to pilot approaches prior to IRB approvals.
References
- Allison P. Missing Data. 1st ed. Thousand Oaks, CA: Sage Publications, Inc; 2001.
- Baillargeon J, Binswanger IA, Penn JV, Williams BA, Murray O. Psychiatric Disorders and Repeat Incarcerations: The Revolving Prison Door. American Journal of Psychiatry. 2009;166:103–109. doi: 10.1176/appi.ajp.2008.08030416.
- Bureau of Justice Statistics. Mental Health and Treatment of Prisoners and Probationers. Screening and eviction for drug abuse and other criminal activity. 2001;66:28776–28806.
- Ditton PM. Mental Health and Treatment of Prisoners and Probationers. Washington, DC: US Department of Justice, Bureau of Justice Statistics; 1999. Retrieved February 3, 2009 (http://www.ojp.usdoj.gov/bjs/pub/pdf/mhtip.pdf).
- Draine J, Herman DB. Critical Time Intervention for Re-entry From Prison for Persons With Mental Illness. Psychiatric Services. 2007;58:1577–1581. doi: 10.1176/ps.2007.58.12.1577.
- Fisher WH, Silver E, Wolff N. Beyond criminalization: Toward a criminologically-informed mental health policy and services research. Administration & Policy in Mental Health & Mental Health Services Research. 2006;33:544–557. doi: 10.1007/s10488-006-0072-0.
- Author 2004a
- Author 2004b
- Author 2003
- Author 2001
- Author 1999
- James DJ, Glaze LE. Mental Health Problems of Prison and Jail Inmates. Washington, DC: US Department of Justice; 2006. Retrieved January 24, 2009 (http://www.ojp.usdoj.gov/bjs/pub/pdf/mhppji.pdf).
- Massoglia M, Schnittker J. No Real Release. Contexts: Understanding People in Their Social Worlds. 2009;8:38–42.
- Morrissey J, Meyer P. Extending assertive community treatment to criminal justice settings. The National GAINS Center for Systemic Change for Justice-Involved People with Mental Illness; 2005.
- Pinta E. The prevalence of serious mental disorders among U.S. prisoners. In: Landsberg G, Smiley A, editors. Forensic mental health: Working with offenders with mental illness. Kingston, NJ: Civic Research Institute; 2001.
- Rice ME, Harris GT. The Treatment of Mentally Disordered Offenders. Psychology, Public Policy, and Law. 1997;3:126–183.
- Roman J, Brooks, Lagerson E, Chaflin A, Tereschchenko B. Impact and Cost-Benefit Analysis of the Maryland Re-entry Partnership Initiative. Urban Institute Justice Policy Center; 2007.
- Rosenbaum P, Rubin D. The Central Role of the Propensity Score in Observational Studies for Causal Effects. Biometrika. 1983;70:41–55.
- Sekhon JS. Multivariate and Propensity Score Matching Software with Automated Balance Optimization: The Matching Package for R. Journal of Statistical Software. 2008;VV(#2):1–47.
- Steadman HJ, Osher FC, Robbins PC, Case B, Samuels S. Prevalence of Serious Mental Illness Among Jail Inmates. Psychiatric Services. 2009;60:761–765. doi: 10.1176/ps.2009.60.6.761.
- Swartz JA, Lurigio AJ. Serious mental illness and arrest: The generalized mediating effects of substance use. Crime and Delinquency. 2007;53:581–604.
- Wilson AB, Draine J. Collaborations Between Criminal Justice and Mental Health Systems for Prisoner Re-entry. Psychiatric Services. 2006;57:875–878. doi: 10.1176/ps.2006.57.6.875.
- Wolff N. Community Reintegration of Prisoners with Mental Illness: A Social Investment Perspective. International Journal of Law and Psychiatry. 2005;28:43–58. doi: 10.1016/j.ijlp.2004.12.003.
- Yan Y. Multiple Imputation for Missing Data: Concepts and New Development (SAS Version 9.0). SAS Institute, Statistics and Data Analysis. 2005:267–277. (http://support.sas.com/rnd/app/papers/multipleimputation.pdf)