Author manuscript; available in PMC: 2018 Jul 11.
Published in final edited form as: Resuscitation. 2017 Mar 8;114:133–140. doi: 10.1016/j.resuscitation.2017.03.004

The association between physician turnover (the “July Effect”) and survival after in-hospital cardiac arrest

Laura Myers a,b,*, Bassem Mikhael c, Paul Currier a, Katherine Berg b, Anupam Jena c,d, Michael Donnino b,e, Lars W Andersen e,f; for the American Heart Association’s Get with the Guidelines-Resuscitation Investigators
PMCID: PMC6039816  NIHMSID: NIHMS884646  PMID: 28285032

Abstract

Importance

The July Effect refers to adverse outcomes that occur as a result of turnover of the physician workforce in teaching hospitals during the month of June.

Objective

Using calendar month as a surrogate for physician turnover, we applied a multivariable difference-in-difference approach to determine whether outcomes differed between May and July in teaching versus non-teaching hospitals.

Design

We used prospectively collected observational data from United States hospitals participating in the Get With The Guidelines®-Resuscitation registry. Participants were adults with an index in-hospital cardiac arrest between 2005 and 2014. They were a priori divided by location of arrest (general medical/surgical ward, intensive care unit, emergency department). The primary outcome was survival to hospital discharge. Secondary outcomes included neurological outcome at discharge, return of spontaneous circulation, and several process measures.

Results

We analyzed 16,328 patients in intensive care units, 11,275 on general medical/surgical wards and 3790 in emergency departments. Patient characteristics were similar between May and July in both teaching and non-teaching hospitals. The models for intensive care unit patients indicated the presence of a July Effect, with the difference-in-difference ranging between 1.9% and 3.1%; this reached statistical significance (p < 0.05) in all but one model (p = 0.07). Visual inspection of monthly survival curves did not show a discernible trend, and no July Effect was observed for return of spontaneous circulation, neurological outcome or process measures, except for airway confirmation in the intensive care unit. We found no July Effect for survival in emergency departments or general medical/surgical wards (p > 0.20 for all models).

Conclusions

There may be a July Effect in the intensive care unit but the results were mixed. Most survival models showed a statistically significant difference but this was not supported by the secondary analyses of return of spontaneous circulation and neurological outcome. We found no July Effect in the emergency department or the medical/surgical ward for patients with in-hospital cardiac arrest.

Keywords: Cardiac arrest, Resident education, July Effect, Healthcare quality

Introduction

Each July, trainees at academic hospitals assume new responsibilities; medical students transition to interns, who are the primary providers of care, and residents assume greater clinical responsibility. The July Effect refers to this period of transition, when academic hospitals may have increased adverse events, lower efficiency and possibly worse outcomes.1

Previous studies of the July Effect have found mixed results. While several studies have not found evidence supporting its existence, a 2011 systematic review found that mortality increases and efficiency decreases in teaching hospitals due to changeover of the physician work force.1 This review was, however, limited by heterogeneity of included studies and concerns about data quality. Other investigators have looked for the July Effect in high-risk populations under the assumption that any adverse outcome associated with provider turnover would have the most impact in a high-risk population. A large retrospective cohort study found no evidence of a July Effect in the intensive care unit (ICU),2 whereas other investigators found a higher mortality in high-risk patients admitted for acute myocardial infarctions in July compared to May.3

To our knowledge, no previous studies have assessed whether the July Effect exists for patients experiencing in-hospital cardiac arrest. This is an important population to consider for several reasons. First, building on prior studies evaluating the July Effect among high-risk patient populations, cardiac arrest carries a very high mortality and patient outcomes may be expected to be adversely affected by relative inexperience of resident physicians in July. Second, clear process measures of quality exist for cardiac arrest care, and it is plausible that inexperience with these processes may lead to higher mortality. Therefore, we looked for the presence of a July Effect among patients experiencing in-hospital cardiac arrest in teaching hospitals participating in the Get With The Guidelines®-Resuscitation (GWTG-R) database, a large database of in-hospital cardiac arrest patients.4

Methods

Data source

Data were provided by the GWTG®-R database, which is maintained by the American Heart Association (AHA). GWTG-R is a hospital-based quality improvement program that collects data from hospitals across the United States to assist in research and process improvement initiatives.4 Personnel entering data into the database are required to undergo certification by passing a quality assurance exam that demonstrates proficiency with the database and data entry fields. Earlier studies have described the database’s validity.5,6,7 Hospital level data were obtained from the American Hospital Association Annual Survey from 2013.8

Patient population and hospital designation

We included adult patients ≥18 years old with cardiac arrest between 1/1/2005 and 12/31/2014. We excluded non-index events, events not occurring on the general medical/surgical ward (termed the ‘floor’), ICU, or in the emergency department (ED), events not occurring in May or July, and events with missing data on month, teaching status, or survival. Separate analyses were performed for events occurring on the floor, in the ICU, and in the ED as it was a priori decided that these settings could potentially have different July Effects due to both patient and physician characteristics.

We defined teaching hospitals according to established Association of American Medical Colleges data definitions. Major teaching hospitals are those with Council of Teaching Hospitals (COTH) designation, which requires a documented affiliation with a Liaison Committee on Medical Education (LCME)-accredited medical school and sponsorship of at least four approved residency programs, two of which must be in the fields of internal medicine, surgery, obstetrics/gynecology, pediatrics, family practice, or psychiatry.9 Minor teaching hospitals are those approved to participate in residency training by the Accreditation Council for Graduate Medical Education (ACGME) or the American Osteopathic Association (AOA), or those with a medical school affiliation reported to the American Medical Association (AMA).9 Non-teaching hospitals are those without COTH, ACGME, AOA, or AMA affiliation. For the primary analysis, we combined major and minor teaching hospitals and compared them to non-teaching hospitals. In a sensitivity analysis, we performed the same analyses comparing only major teaching to non-teaching hospitals.
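As an illustration only, this hierarchy can be written as a simple classification rule. The minimal Python sketch below uses hypothetical flag names as stand-ins for the underlying survey fields; they are not variables from the registry.

    def classify_hospital(coth: bool, residency_approved: bool, ama_affiliation: bool) -> str:
        """Minimal sketch of the teaching-status definitions described above."""
        if coth:
            # COTH designation (LCME-accredited medical school affiliation plus
            # at least four approved residency programs) -> major teaching
            return "major teaching"
        if residency_approved or ama_affiliation:
            # ACGME/AOA residency approval or medical school affiliation
            # reported to the AMA -> minor teaching
            return "minor teaching"
        # No COTH, ACGME, AOA, or AMA affiliation
        return "non-teaching"

For the primary analysis described above, both "major teaching" and "minor teaching" would be coded as teaching; the sensitivity analysis keeps only "major teaching" versus "non-teaching".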

Exposure

Our primary exposure of interest was the change in physician personnel occurring during the month of June. As the database does not contain data on physician experience or turnover, we used month of the year as a surrogate for physician turnover as previously done in the literature.2,3,10–12 We compared July to May, as the physician transition occurs on different days in different academic institutions during the month of June.

Outcomes

The primary outcome was in-hospital survival. A secondary outcome was neurologic performance at the time of hospital discharge. Neurologic status was assessed using the cerebral performance category (CPC) score,13 where a score of 1 indicates mild or no neurologic deficit, 2 moderate cerebral disability, 3 severe cerebral disability, 4 coma or vegetative state, and 5 brain death. The CPC score was determined by GWTG-R abstractors reviewing the medical record. We defined neurologic status as binary, with a CPC score of 1 or 2 considered a favorable neurologic outcome and a CPC score of 3–5 or death considered a poor neurological outcome, as is done commonly in cardiac arrest research.14 Another secondary outcome was return of spontaneous circulation (ROSC), which is defined by the GWTG-R database as at least 20 min with a palpable pulse.

In addition to these outcome measures, we analyzed several process measures including time to defibrillation for shockable rhythms,15 time to adrenaline (epinephrine) for non-shockable rhythms,16 and confirmation of an advanced airway in patients intubated during the cardiac arrest.17 These measures have been studied in relation to cardiac arrest outcomes and are among the AHA priority measures for quality resuscitation. For the assessment of time to defibrillation, we included patients with a known first documented rhythm of ventricular fibrillation or pulseless ventricular tachycardia. For the assessment of time to adrenaline, we included patients with a known first documented rhythm of pulseless electrical activity or asystole. For the assessment of confirmation of advanced airway placement, we included patients not intubated at the beginning of the cardiac arrest and subsequently intubated during the event. Airway confirmation was defined according to the AHA with one or more of the following being documented: waveform capnography, capnometry, exhaled colorimetric monitor, oesophageal detection device, visualization with direct laryngoscopy, or confirmation other than auscultation but with no specific method documented.18 Patients with missing data for quality measures were not included in these analyses.
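As a rough sketch only, the three process-measure cohorts described above could be constructed as follows; the column names and rhythm labels are assumed for illustration and do not come from the registry's data dictionary.

    import pandas as pd

    # Toy stand-in for the analysis file (one row per index event)
    df = pd.DataFrame({
        "first_rhythm": ["VF", "asystole", "PEA"],
        "time_to_defib_min": [1.0, None, None],
        "time_to_adrenaline_min": [None, 3.0, 2.0],
        "intubated_at_start": [False, False, True],
        "intubated_during_event": [True, True, False],
    })

    shockable = df["first_rhythm"].isin(["VF", "pulseless VT"])
    non_shockable = df["first_rhythm"].isin(["PEA", "asystole"])

    # Time to defibrillation: shockable first documented rhythm, time recorded
    defib_cohort = df[shockable & df["time_to_defib_min"].notna()]

    # Time to adrenaline: non-shockable first documented rhythm, time recorded
    adrenaline_cohort = df[non_shockable & df["time_to_adrenaline_min"].notna()]

    # Airway confirmation: intubated during (not before) the event
    airway_cohort = df[~df["intubated_at_start"] & df["intubated_during_event"]]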

Statistical analysis

We used descriptive statistics to summarize the study population including medians with 1st and 3rd quartiles for continuous data and counts with relative frequencies for categorical data.

Analyses were performed separately for patients in different locations in the hospital: on the floor, in the ICU, and in the ED. We assessed the July Effect on patient-level outcomes, utilizing a difference-in-difference approach with in-hospital survival as the primary outcome. This approach compares the change in outcome from May to July in teaching hospitals to the change in outcome from May to July in non-teaching hospitals.19 This approach has been used in a previous study to account for any potential seasonal effects (including measured and unmeasured confounders) between May and July that are similar between teaching and non-teaching hospitals.3
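In notation not used in the original paper, with S denoting the proportion surviving to discharge, this estimate is

    DiD = (S_teaching,July − S_teaching,May) − (S_non-teaching,July − S_non-teaching,May)

so a negative value means that survival changed less favorably from May to July in teaching hospitals than in non-teaching hospitals, i.e. a July Effect.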

A logistic regression model was fit with the following variables: month (May versus July), hospital type (teaching versus non-teaching), and an interaction between the two. The interaction represents the difference between teaching and non-teaching hospitals in the change in outcomes from May to July, i.e. the July Effect. We first performed an unadjusted analysis. We then performed three multivariable analyses. Because the July Effect is a complex exposure potentially starting as soon as the patient is admitted to the hospital (i.e. before the cardiac arrest), we utilized a stepwise approach to separate out the potential confounding effects of pre-admission characteristics such as age, sex, race and year of arrest (Model 1), pre-cardiac arrest characteristics such as co-morbidities (Model 2), and event characteristics such as time of day/week, whether a hospital wide response was called, whether the event was witnessed and the first documented rhythm (Model 3) (see Table 1 for additional details regarding the variables). We created multiple models because a number of these characteristics (i.e. those happening during the hospital admission) could theoretically be both mediators and confounders of a July Effect. For all multivariable analyses, we used generalized estimating equations with an exchangeable variance–covariance structure to account for within-hospital correlation between patients. Multivariable analyses were repeated with the secondary outcomes of favorable neurologic outcome and ROSC. In a sensitivity analysis of the July Effect on survival, we defined teaching hospitals as only those that are major teaching hospitals.
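The authors report fitting their models with SAS 9.4 (see Statistical analysis). Purely as an illustrative sketch, not the authors' code, an analogous Model 1 specification could be written with Python's statsmodels; the column names (survived, july, teaching, hospital_id, and so on) are assumed.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical analysis file with one row per index cardiac arrest
    df = pd.read_csv("icu_events.csv")

    # Difference-in-difference logistic model: month, teaching status, and
    # their interaction, adjusted for the Model 1 covariates. GEE with an
    # exchangeable working correlation accounts for clustering of patients
    # within hospitals, mirroring the approach described above.
    model = smf.gee(
        "survived ~ july * teaching + age + female + C(race) + C(year)",
        groups="hospital_id",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()

    # The july:teaching interaction term is the July Effect on the log-odds scale.
    print(result.summary())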

Table 1.

Baseline characteristics according to teaching status and month.

Columns: Teaching hospital May (n = 10,671) and July (n = 10,523); Non-teaching hospital May (n = 5226) and July (n = 4973).
Patient demographics
Median age (quartiles) 70 (58, 79) 70 (58, 80) 67 (54, 77) 66 (54, 77)
Female sex 4462 (42%) 4314 (41%) 2223 (43%) 2162 (43%)
Race
  White 7029 (71%) 6824 (71%) 3842 (77%) 3649 (77%)
  Black 2485 (25%) 2492 (26%) 897 (18%) 892 (19%)
  Other 324 (3%) 364 (4%) 232 (5%) 208 (4%)
Admission diagnosis
Medical cardiac 3566 (33%) 3421 (33%) 2075 (40%) 2001 (40%)
Medical non-cardiac 4648 (44%) 4644 (44%) 2296 (44%) 2161 (43%)
Surgical cardiac 793 (7%) 707 (7%) 314 (6%) 303 (6%)
Surgical non-cardiac 1164 (11%) 1224 (12%) 447 (9%) 429 (9%)
Other 496 (5%) 517 (5%) 92 (2%) 75 (2%)
Pre-existing conditions
Heart failure this admission 1723 (16%) 1630 (16%) 961 (18%) 866 (17%)
Heart failure prior to this admission 2277 (21%) 2198 (21%) 1059 (20%) 1011 (20%)
MI this admission 1629 (15%) 1499 (14%) 916 (18%) 893 (18%)
MI prior to this admission 1715 (16%) 1681 (16%) 794 (15%) 762 (15%)
Hypotension 2771 (26%) 2874 (28%) 1326 (25%) 1348 (27%)
Respiratory insufficiency 4632 (44%) 4527 (43%) 2196 (42%) 2043 (41%)
Renal insufficiency 3495 (33%) 3460 (33%) 1824 (35%) 1621 (33%)
Hepatic insufficiency 795 (7%) 826 (8%) 383 (7%) 349 (7%)
Metabolic/electrolyte abnormality 1856 (17%) 1924 (18%) 899 (17%) 814 (16%)
Diabetes 3264 (31%) 3162 (30%) 1641 (31%) 1512 (31%)
Baseline depression in CNS function 1178 (11%) 1185 (11%) 652 (12%) 612 (12%)
Acute stroke 443 (4%) 403 (4%) 204 (4%) 153 (3%)
Acute CNS-event non-stroke 819 (8%) 797 (8%) 368 (7%) 331 (7%)
Pneumonia 1390 (13%) 1316 (13%) 763 (15%) 704 (14%)
Septicemia 1820 (17%) 1784 (17%) 790 (15%) 769 (16%)
Major trauma 572 (5%) 609 (6%) 118 (2%) 116 (2%)
Metastatic/hematologic malignancy 1313 (12%) 1409 (13%) 562 (11%) 530 (11%)
Event characteristics
Time of week: weekend 3444 (32%) 3372 (32%) 1659 (32%) 1536 (31%)
Time of day: nighttime 3528 (33%) 3575 (34%) 1750 (34%) 1658 (34%)
Hospital wide response 7854 (74%) 7645 (73%) 4470 (86%) 4286 (86%)
Witnessed 8746 (82%) 8707 (83%) 4331 (83%) 4083 (82%)
Initial rhythm
  Shockable 1839 (19%) 1843 (19%) 1023 (21%) 951 (21%)
  Non-shockable 8035 (81%) 7834 (81%) 3780 (79%) 3590 (79%)

Age is expressed as median with quartiles in parentheses. All other variables are number with percent in parentheses.

In addition to our analysis of patient outcomes, we used negative binomial regression models to assess quality measures such as time to defibrillation and time to adrenaline administration. For airway confirmation, a logistic regression model as described above was utilized. For these additional outcomes, we report unadjusted results and fully adjusted results (i.e. Model 3). For the adjusted models, we used generalized estimating equations as above.
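Again as a hedged sketch rather than the authors' SAS code, a corresponding negative binomial difference-in-difference model for a quality measure such as time to defibrillation (in minutes) could look like the following, with assumed column names and the same GEE clustering as above.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical subset: events with a shockable first documented rhythm
    df_shockable = pd.read_csv("shockable_events.csv")

    nb_model = smf.gee(
        "time_to_defib_min ~ july * teaching",
        groups="hospital_id",
        data=df_shockable,
        family=sm.families.NegativeBinomial(alpha=1.0),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    nb_result = nb_model.fit()

    # The july:teaching interaction estimates the July Effect, here on the
    # log scale of the expected time to defibrillation.
    print(nb_result.summary())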

As a post-hoc analysis, we examined the July Effect in relation to survival and favorable neurologic outcome in those who survived the initial event (i.e. those with ROSC). We assumed that the majority of patients would be transferred to the intensive care unit and we therefore combined the three locations. We used a similar model as above and provide unadjusted results and fully adjusted results (i.e. variables included in Model 3 as well as the location of the arrest [i.e. ICU, ED, or floor]).

All hypothesis tests were two-sided with a significance level of p < 0.05. No adjustments were performed for multiple testing and as such, secondary outcomes should be considered exploratory. We conducted all statistical analyses with the use of SAS software, version 9.4 (SAS Institute, Cary, NC, USA).

Results

Patient and hospital characteristics

There were 262,832 adult in-hospital cardiac arrest records identified in the database, of which 31,393 were included in the final analysis (Fig. 1). Of the 31,393 patients included in the analysis, 16,328 cardiac arrests occurred in the ICU, 11,275 occurred on the floor, and 3790 occurred in the ED.

Fig. 1. Patient inclusion/exclusion.

The figure shows reasons for exclusion and the distribution of arrests on the floor, in the emergency department (ED) and in the intensive care unit (ICU). IHCA = in-hospital cardiac arrest.

Baseline data on patient demographics, admission diagnoses, pre-existing conditions, and event characteristics according to hospital teaching status and month are provided in Table 1. Patient characteristics between May and July were indistinguishable in both teaching and non-teaching hospitals. Data were missing on at least one variable included in Table 1 for 2222 patients (13.6%) in the ICU, 1822 patients (16.2%) for the floor, and 662 patients (17.5%) for the ED.

In terms of hospital-level designation, there were 486 hospitals included in this analysis, of which 255 were teaching hospitals (106 major teaching hospitals and 149 minor teaching hospitals) and 231 were non-teaching hospitals.

Primary outcome

For patients on the floor and in the ED, there was no difference-in-difference for survival between May and July in any model (Table 2). However, in the ICU, survival in teaching hospitals decreased by 1.0% (absolute percentage points) from 18.6% in May to 17.6% in July, whereas in non-teaching hospitals ICU survival increased by 2.1% from 17.3% in May to 19.4% in July (Table 2). For ICU patients, the unadjusted difference-in-difference value for survival between teaching and non-teaching hospitals was −3.1%, which was statistically significant (p = 0.02, Table 2), indicating the potential presence of a July Effect in this location. These results were largely consistent among the models, although Model 2 did not reach statistical significance (p = 0.07, Table 2).
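As a check, the unadjusted ICU estimate can be reproduced directly from the survival percentages reported above:

    (17.6% − 18.6%) − (19.4% − 17.3%) = (−1.0%) − (+2.1%) = −3.1%

that is, the May-to-July change in teaching hospitals minus the corresponding change in non-teaching hospitals.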

Table 2.

Survival to hospital discharge.

Columns: number (%) surviving by hospital type (Teaching, Non-teaching), followed by the difference-in-difference Value and P value for the Unadjusted analysis, Model 1 (a), Model 2 (b), and Model 3 (c).
ICU
May 1060 (18.6%) 437 (17.3%) −3.1% 0.02 −2.2% 0.02 −1.9% 0.07 −1.9% 0.04
July 1007 (17.6%) 461 (19.4%)
Floor
May 892 (23.2%) 417 (21.6%) 1.3% 0.43 1.3% 0.43 1.7% 0.28 1.5% 0.34
July 861 (23.6%) 382 (20.7%)
ED
May 273 (24.2%) 213 (27.7%) −0.3% 0.92 0.1% 0.96 1.4% 0.59 −2.7% 0.36
July 277 (24.2%) 209 (27.9%)

Survival is expressed as a number and percentage for teaching and non-teaching hospitals by location and month. The difference-in-difference value is shown with the corresponding p value. A negative difference-in-difference value indicates the presence of a July Effect. The first column is an unadjusted analysis. Models 1–3 represent multivariable analyses adjusting for the variables listed in the footnotes below.

a. Adjusted for age, sex, race, and year of the arrest. Data were missing on 1171 patients (7.2%) for the ICU, 669 patients (5.9%) for the floor, and 315 patients (8.3%) for the ED.

b. Adjusted for the same variables as Model 1 as well as admission diagnosis and pre-existing conditions. Data were missing on 1238 patients (7.6%) for the ICU, 718 patients (6.4%) for the floor, and 329 patients (8.7%) for the ED.

c. Adjusted for the same variables as Models 1–2 as well as time of week, time of day, whether a hospital wide response was called, whether the event was witnessed, and the first documented rhythm. Data were missing on 2220 patients (13.6%) for the ICU, 1822 patients (16.2%) for the floor, and 661 patients (17.4%) for the ED.

We found similar results in the sensitivity analysis comparing major teaching hospitals to non-teaching hospitals. In this analysis, no statistically significant July Effect was observed on the floor or in the ED (Supplemental eTable 1) but there was a statistically significant difference-in-difference value of −3.6% observed in the ICU (p = 0.02) in the unadjusted analysis with largely consistent results in the multivariable analysis (Supplemental eTable 1).

Monthly survival for the entire year is presented in Fig. 2 according to location and teaching status. There did not appear to be an incremental improvement in survival over the course of the year as personnel became more experienced treating cardiac arrest (Fig. 2).

Fig. 2. Monthly survival by location.

Percent survival is shown on the y-axis versus month on the x-axis. Data are separated by location. Blue curves with squares represent teaching hospitals and red curves with circles represent non-teaching hospitals. The 95% CI is shown.

Secondary outcomes

The results for the secondary outcomes of favorable neurologic outcome and ROSC are presented in Table 3 and Supplemental eTable 2, respectively. There was no statistically significant July Effect on neurologic outcome or ROSC in unadjusted or adjusted models for any location.

Table 3.

Favorable neurologic outcome at discharge.

Columns: number (%) with favorable neurologic outcome by hospital type (Teaching, Non-teaching), followed by the difference-in-difference Value and P value for the Unadjusted analysis (a), Model 1 (b), Model 2 (c), and Model 3 (d).
ICU
May 689 (12.7%) 298 (12.2%) −1.7% 0.15 −1.8% 0.11 −1.1% 0.27 −1.6% 0.1
July 632 (11.5%) 290 (12.7%)
Floor
May 581 (15.8%) 280 (15.1%) 2.4% 0.11 2.1% 0.17 2.4% 0.08 2.6% 0.04
July 578 (16.6%) 239 (13.5%)
ED
May 192 (17.9%) 152 (20.5%) 1.9% 0.48 2.5% 0.31 3.6% 0.1 −0.1% 0.93
July 210 (19.1%) 142 (19.9%)

Favorable neurologic outcome is defined as a CPC score of 1–2 and is expressed as a number and percentage for teaching and non-teaching hospitals by location and month. The difference-in-difference value is shown with the corresponding p value. A negative difference-in-difference value indicates the presence of a July Effect. The first column is an unadjusted analysis. Models 1–3 represent multivariable analyses adjusting for the variables listed in the footnotes below.

a. Data were missing on 669 patients (4.1%) for the ICU, 510 patients (4.5%) for the floor, and 159 patients (4.2%) for the ED.

b. Adjusted for age, sex, race, and year of the arrest. Data were missing on 1755 patients (10.7%) for the ICU, 1129 patients (10.0%) for the floor, and 458 patients (12.1%) for the ED.

c. Adjusted for the same variables as Model 1 as well as admission diagnosis and pre-existing conditions. Data were missing on 1819 patients (11.1%) for the ICU, 1175 patients (10.4%) for the floor, and 472 patients (12.5%) for the ED.

d. Adjusted for the same variables as Models 1–2 as well as time of week, time of day, whether a hospital wide response was called, whether the event was witnessed, and the first documented rhythm. Data were missing on 2749 patients (16.8%) for the ICU, 2187 patients (19.4%) for the floor, and 778 patients (20.5%) for the ED.

We studied various quality measures including time to defibrillation for shockable rhythms, time to adrenaline administration for non-shockable rhythms, and airway confirmation. For time to defibrillation, 4705 patients were included in the analysis. By location, 2508 patients (53%) were in the ICU, 1420 patients (30%) were on the floor, and 777 patients (17%) were in the ED. The median time to defibrillation was 0 min in the ICU and ED and 1–2 min on the floor (Supplemental eTable 3a). The difference-in-difference from the negative binomial regression model was not significant in the unadjusted analysis for the ICU (p = 0.87), floor (p = 0.95), or ED (p = 0.75), nor in the multivariable analysis for the ICU (p = 0.89), floor (p = 0.79), or ED (p = 0.82).

For time to adrenaline administration, 19,568 patients were included in the analysis. By location, 10,396 patients (53%) were in the ICU, 7034 patients (36%) were on the floor, and 2138 patients (11%) were in the ED. The median time to adrenaline administration in the ICU and ED was 1–2 min, compared to 3 min on the floor (Supplemental eTable 3a). The difference-in-difference value from the negative binomial regression model was not significant in the unadjusted analysis for the ICU (p = 0.10), floor (p = 0.21), or ED (p = 0.13) and was also not significant in the multivariable analysis for the ICU (p = 0.17), floor (p = 0.08), or ED (p = 0.46).

For airway confirmation, 18,588 patients were included in the analysis, of which 13,590 patients (73%) had documented correct airway confirmation. By location, 6392 patients (34%) were in the ICU, 9964 patients (54%) were on the floor, and 2232 patients (12%) were in the ED. The proportion of patients with documented correct airway confirmation according to location, month and teaching status is presented in Supplemental eTable 3b along with the results from the difference-in-difference analysis. In the ICU, there was a statistically significant decrease in airway confirmation in the fully adjusted model but not in the unadjusted model, with a difference-in-difference value of −5.6% (p = 0.02) in the adjusted model (Supplemental eTable 3b). There was a weaker association on the floor, with difference-in-difference values of −3.4% and −3.6% in the unadjusted (p = 0.06) and fully adjusted (p = 0.07) models, respectively (Supplemental eTable 3b).

Initial survivors

In our post-hoc analysis of 19,380 patients (62%) who achieved ROSC, 6324 patients (33%) survived to discharge. Survival in this cohort was 32.0% and 31.5% in May and July in teaching hospitals and 33.8% and 35.5% in May and July in non-teaching hospitals, respectively. The difference-in-difference was −2.1% in the unadjusted analysis (p = 0.16) and −2.2% in the fully adjusted analysis (p = 0.17). A favorable neurologic outcome occurred in 4174 patients (23%). The proportion with a favorable neurologic outcome was 22.6% and 22.5% in May and July in teaching hospitals and 24.5% and 24.3% in May and July in non-teaching hospitals. The difference-in-difference was 0.1% in the unadjusted analysis (p = 0.95) and −0.2% in the fully adjusted analysis (p = 0.91).

Discussion

We analyzed whether in-hospital cardiac arrest outcomes and process quality measures worsened in July compared to May in teaching hospitals versus non-teaching hospitals using a validated database of in-hospital cardiac arrests occurring in U.S. hospitals between 2005 and 2014. Using a difference-in-difference approach, we found no July Effect for survival of cardiac arrest patients on the floor or in the ED. Our secondary analyses were similarly unremarkable, including analyses of ROSC, neurologic outcome and several process measures.

In the ICU, our findings indicated a possible July Effect on in-hospital survival, supported by a statistically significant July Effect in two of the three multivariable models. However, performing multiple analyses over multiple groups increases the chance of false positive results (i.e. Type I error), so the results should be interpreted with caution. Additionally, we detected no difference in most of the secondary outcomes including ROSC and neurologically intact survival. Furthermore, we found no discernible trend in survival when visually inspecting survival in teaching and non-teaching hospitals over the course of the academic year (Fig. 2). Finally, in a post-hoc analysis of post-cardiac arrest patients presumably in the ICU, we did not detect any differences in outcome measures. Taken together, there could be a July Effect in the ICU, but prospective studies would be needed to investigate this further.

Regarding airway confirmation, we found a potential July Effect in the fully adjusted model for the ICU but this was not present on the floor or in the ED. A previous study from GWTG-R found an association between documented airway confirmation and improved ROSC and survival until discharge.17 Similar to that study, there was a non-negligible amount of missing data which could represent endotracheal tubes that were placed successfully but not recorded as such, or endotracheal tubes that were improperly placed or not confirmed. Therefore, these results should be interpreted with caution.

Resident trainees must fulfill the dual responsibilities of learning new clinical skills while actively providing clinical care. Medical educators have used crisis simulation to help prepare residents to be code leaders with the goal of maintaining patient safety.20 Trials have shown that pre-graduate students improve average time to cardiopulmonary resuscitation and defibrillation after a training session, suggesting that resuscitation is a learned skill that can and should be practiced.21,22 Although results from these studies suggest that new, inexperienced providers might perform inferior resuscitation, it is possible that residents can perform protocol-driven procedures in real-world settings after learning the algorithm in simulation, or that senior physicians and experienced ancillary personnel compensate for inexperienced physicians. The Agency for Healthcare Research and Quality is actively funding trials in this area.23

Our study has several limitations. First, while the difference-in-difference analysis is a powerful tool, we remain unable to adjust for unmeasured characteristics that may change differentially between May and July in teaching versus non-teaching hospitals. For example, there may be differences in patient case-mix, senior physician vacation patterns, or the addition of new graduates from other medical professions such as nursing, pharmacy and respiratory therapy that could have affected our results. As such, it is important to note that we merely present an association that does not necessarily reflect a causal relationship between physician turnover and outcomes. Second, we could not control for the degree of resident/intern involvement in patient care on code teams in teaching hospitals, which may vary significantly from institution to institution and between settings within the same institution. We tried to partly address this by performing our analysis with two different definitions of teaching hospitals, assuming that centers with more training programs would almost certainly have trainees on their code teams. Third, we could not track whether patients were transferred from a non-teaching hospital to a teaching hospital for their post-cardiac arrest care, thus potentially obscuring a July Effect between these hospital settings. Fourth, although we did adjust for the year of the cardiac arrest, outcomes from in-hospital cardiac arrest have improved over the last decade which could partly obscure a July Effect if improvement was differentially seen in teaching hospitals.24

In conclusion, there may be a July Effect in the intensive care unit but the results were mixed. Most survival models showed a statistically significant difference but this was not supported by the secondary analyses of return of spontaneous circulation and neurological outcome. We found no July Effect in the emergency department or the medical/surgical ward for patients with in-hospital cardiac arrest.


Acknowledgments

The authors thank Get With The Guidelines®-Resuscitation for data access and Harvard Catalyst for statistical support. Lars W. Andersen had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Funding

There was no funding source.

Conflict of interest statement

Dr. Donnino was a paid consultant for the American Heart Association at the time of submission. He is supported by National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Number K24HL127101.

Footnotes

A Spanish translated version of the abstract of this article appears as Appendix in the final online version at http://dx.doi.org/10.1016/j.resuscitation.2017.03.004.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.resuscitation.2017.03.004.

References

1. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. July effect: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155:309–15. doi: 10.7326/0003-4819-155-5-201109060-00354.
2. Barry WA, Rosenthal GE. Is there a July phenomenon? The effect of July admission on intensive care mortality and length of stay in teaching hospitals. J Gen Intern Med. 2003;18:639–45. doi: 10.1046/j.1525-1497.2003.20605.x.
3. Jena AB, Sun EC, Romley JA. Mortality among high-risk patients with acute myocardial infarction admitted to U.S. teaching-intensive hospitals in July: a retrospective observational study. Circulation. 2014;128:2754–63. doi: 10.1161/CIRCULATIONAHA.113.004074.
4. American Heart Association. Get With The Guidelines-Resuscitation Fact Sheet. 2015. http://www.heart.org/idc/groups/heart-public/@private/@wcm/@hcm/@gwtg/documents/downloadable/ucm_434082.pdf [Accessed 21 July 2015].
5. Peberdy MA, Kaye W, Ornato JP, et al. Cardiopulmonary resuscitation of adults in the hospital: a report of 14720 cardiac arrests from the National Registry of Cardiopulmonary Resuscitation. Resuscitation. 2003;58:297–308. doi: 10.1016/s0300-9572(03)00215-6.
6. Merchant R, Yang L, Becker L, et al. Incidence of treated cardiac arrest in hospitalized patients in the United States. Crit Care Med. 2011;39:2401–6. doi: 10.1097/CCM.0b013e3182257459.
7. Nadkarni VM, Larkin GL, Peberdy MA, et al. First documented rhythm and clinical outcome from in-hospital cardiac arrest among children and adults. JAMA. 2006;295:50–7. doi: 10.1001/jama.295.1.50.
8. American Hospital Association. American Hospital Association Annual Survey Database Fiscal Year 2013. American Hospital Association Data Viewer. http://www.ahadataviewer.com/book-cd-products/aha-survey/ [Accessed 21 July 2016].
9. Association of American Medical Colleges Council of Teaching Hospitals and Health Systems. Application for COTH Membership. Association of American Medical Colleges website. https://www.aamc.org/download/406486/data/cothapplicationseptember2014.pdf [Accessed 21 July 2016].
10. Anderson KL, Koval KJ, Spratt KF. Hip fracture outcome: is there a July effect? Am J Orthop (Belle Mead, NJ). 2010;38:606–11.
11. Englesbe MJ, Fan Z, Baser O, Birkmeyer JD. Mortality in Medicare patients undergoing surgery in July in teaching hospitals. Ann Surg. 2009;249:871–6. doi: 10.1097/SLA.0b013e3181a501bd.
12. Smith ER, Butler WE, Barker FG. Is there a July phenomenon in pediatric neurosurgery at teaching hospitals? J Neurosurg. 2006;105:169–76. doi: 10.3171/ped.2006.105.3.169.
13. Jennett B, Bond M. Assessment of outcome after severe brain damage. Lancet. 1975;1:480–4. doi: 10.1016/s0140-6736(75)92830-5.
14. Becker LB, Aufderheide TP, Geocadin RG, et al. Primary outcomes for resuscitation science studies: a consensus statement from the American Heart Association. Circulation. 2011;124:2158–77. doi: 10.1161/CIR.0b013e3182340239.
15. Chan PS, Krumholz HM, Nichol G, Nallamothu BK. Delayed time to defibrillation after in-hospital cardiac arrest. N Engl J Med. 2008;358:9–17. doi: 10.1056/NEJMoa0706467.
16. Donnino MW, Salciccioli JD, Howell MD, et al. Time to administration of adrenaline and outcome after in-hospital cardiac arrest with non-shockable rhythms: retrospective analysis of large in-hospital data registry. BMJ. 2014;348:g3028. doi: 10.1136/bmj.g3028.
17. Phelan MP, Ornato JP, Peberdy MA, Hustey FM. Appropriate documentation of confirmation of endotracheal tube position and relationship to patient outcome from in-hospital cardiac arrest. Resuscitation. 2013;84:31–6. doi: 10.1016/j.resuscitation.2012.08.329.
18. Neumar RW, Otto CW, Link MS, et al. American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science. Circulation. 2010;122:S729–67. doi: 10.1161/CIRCULATIONAHA.110.970988.
19. Dimick JB, Ryan AM. Methods for evaluating changes in health policy: the difference-in-difference approach. JAMA. 2014;312:2401–2. doi: 10.1001/jama.2014.16153.
20. Sahu S, Lata I. Simulation in resuscitation teaching and training, an evidence based practice review. J Emerg Trauma Shock. 2010;3:378–84. doi: 10.4103/0974-2700.70758.
21. Langdorf MI, Strom SL, Yang L, et al. High-fidelity simulation enhances ACLS training. Teach Learn Med. 2014;26:266–73. doi: 10.1080/10401334.2014.910466.
22. Nacca N, Holliday J, Ko PY. Randomized trial of a novel ACLS teaching tool: does it improve student performance? West J Emerg Med. 2014;15:913–8. doi: 10.5811/westjem.2014.9.20149.
23. Brown LL. Improving pediatric resuscitation: a simulation program for the community ED. Agency for Healthcare Research and Quality fact sheet. http://www.ahrq.gov/research/findings/factsheets/errors-safety/simulproj11/index.html [Updated April 2016; accessed 25 May 2016].
24. Girotra S, Nallamothu BK, Spertus JA, et al. Trends in survival after in-hospital cardiac arrest. N Engl J Med. 2012;367:1912–20. doi: 10.1056/NEJMoa1109148.
