Abstract
Objective:
To critically evaluate whether admission at the beginning-versus-end of the academic year is associated with increased risk of major adverse outcomes.
Summary Background Data:
The hypothesis that the arrival of new residents and fellows is associated with increases in adverse patient outcomes has been the subject of numerous research studies since 1989.
Methods:
We conducted a systematic review and random-effects meta-analysis of July Effect studies published prior to December 20, 2019, for differences in mortality, major morbidity, and readmission. Given a paucity of studies reporting readmission, we further analyzed 7 years of data from the Nationwide Readmissions Database to assess for differences in 30-day readmission for US patients admitted to urban teaching-versus-non-teaching hospitals with 3 common medical (acute myocardial infarction, acute ischemic stroke, and pneumonia) and 4 surgical (elective coronary artery bypass graft surgery, elective colectomy, craniotomy, and hip fracture) conditions using risk-adjusted logistic difference-in-difference regression.
Results:
A total of 113 studies met inclusion criteria; 92 (81.4%) reported no evidence of a July Effect. Among the remaining studies, results were mixed and commonly pointed toward system-level discrepancies in efficiency. Meta-analyses of mortality (OR[95%CI]: 1.01[0.98–1.05]) and major morbidity (1.01[0.99–1.04]) demonstrated no evidence of a July Effect, no differences between specialties or countries, and no change in the effect over time. A total of 5.98 million patient encounters were assessed for readmission. No evidence of a July Effect on readmission was found for any of the 7 conditions.
Conclusions:
The preponderance of negative results over the past 30 years suggests that it may be time to reconsider the need for similarly-themed studies and instead focus on system-level factors to improve hospital efficiency and optimize patient outcomes.
Keywords: July Effect, July Phenomenon, teaching, resident, mortality, morbidity, readmission
Mini-Abstract
Through a combined random-effects meta-analysis and national multicenter readmission cohort, the study sought to critically evaluate whether admission at the beginning-versus-end of the academic year is associated with increased risk of major adverse outcomes. The preponderance of negative results across 113 included studies and 5.98 million patient encounters demonstrated no evidence of a July Effect, no differences among specialties or countries, and no change in the effect over time.
Introduction
The anticipated existence of a July Effect was first described in 1989 in a paper that found “no substantial increase” in total costs of care at a major teaching hospital early-versus-later in the academic year.1 Despite this initial evidence refuting the effect, the concept subsequently became a common term in the medical vocabulary. The idea behind the effect, alternatively known as the ‘July Phenomenon,’ ‘March Effect,’ ‘August Killing Season,’ and ‘Black Wednesday’ (timing varies with the country of origin), is that when new residents and fellows start their training at the beginning of the academic year, rates of adverse events and medical errors are expected to rise. In the 30 years since 1989, numerous studies have sought to prove and disprove the July Effect’s existence. Some have reported differences in outcomes based on when patients are admitted,2,3 while others have suggested no difference across a growing list of medical and surgical fields.4,5 Still others have reported evidence contradicting their own initially significant results.6,7
Explanatory mechanisms for the July Effect have yet to be definitively elucidated.8 A common theory is that adverse patient outcomes result from trainees’ limited clinical knowledge and practical experience.9 Challenges associated with building technical expertise and mastering the flow of patient care on a new service are well recognized within medicine,10,11 and the introduction of new trainees is believed to disrupt usual care. However, it remains unclear whether the influx of new trainees results in increased adverse outcomes for patients. To date, one prior systematic review published in 2011 synthesized results from 39 July Effect studies published between January 1, 1989, and July 31, 2010.8 Although no formal meta-analysis was conducted, the authors’ summary reported “some evidence” of increases in mortality and decreases in efficiency. Results for morbidity were inconclusive, prompting calls for further research.8 In the 10 years that followed, the number of July Effect studies more than doubled (Figure 1).
Figure 1.
Cumulative incidence of published (full peer-reviewed manuscript) July Effect studies over the last 30 years from January 1, 1989, to December 20, 2019, (N=113) and corresponding annual frequency of July Effect studies subdivided by manuscript quality. The incidence of yet unpublished abstracts referenced in EMBASE was included in annual frequency totals for illustration purposes. All unpublished abstracts were excluded from the final review. Vertical lines show the publication date of the only prior systematic review.
Given the rapid increase in the number of publications and ongoing debate surrounding the issue, a more contemporary meta-analysis is needed. The objective of this study was to conduct an updated systematic review and meta-analysis of July Effect literature published between 1989 and 2019, critically evaluating whether admission at the beginning-versus-end of the academic year is associated with increased risk of major adverse outcomes including mortality, major morbidity, and readmission among healthcare-seeking medical and surgical patients presenting to teaching hospitals. Given the small number of studies reporting readmission results, we further conducted secondary analyses using 7 years of data from the Nationwide Readmissions Database in order to assess differences in 30-day readmission for United States (US) patients admitted to urban teaching-versus-non-teaching hospitals with 3 common medical (acute myocardial infarction, acute ischemic stroke, and pneumonia) and 4 surgical (elective coronary artery bypass graft surgery [CABG], elective colectomy, craniotomy, and hip fracture) conditions.
Methods
A published protocol for the meta-analysis is available on PROSPERO (CRD42018108903). The Yale Human Investigation Committee provided ethical approval of the combined meta-analysis and readmission analysis.
I. Meta-analysis
Selection of studies
Due to the seasonal nature of the July Effect, eligible studies consisted of observational designs, the majority of which were retrospective cohort studies with or without a non-teaching hospital temporal control. Study participants presented for treatment to one or more teaching hospital(s). The studies compared differences in admission early-versus-later in the academic year, with study-specific definitions allowed as to how the time-periods were defined. Three primary outcomes were assessed: mortality, major morbidity (study-specific definitions), and readmission. Secondary outcomes included documentation of any other outcome measure, most commonly efficiency metrics for factors such as length of stay (LOS), hospital charges/costs, operating-room time, and reported error rates.
Search methods for identification of studies
On December 20, 2019, the electronic databases MEDLINE (PubMed) and EMBASE were queried using controlled vocabulary search-terms selected in conjunction with a research librarian from Yale School of Medicine with expertise in public health. Papers in all languages were sought and translations carried out as necessary. Hand searching of each included paper’s references was used to capture any potential studies missed.
The MEDLINE search-terms included: “new resident*”[tw] OR “July Effect”[tw] OR “July Phenomenon”[tw] OR “Black Wednesday”[tw] OR “resident turnover”[tw] OR “physician turnover”[tw] OR “new physician*”[tw] OR “new trainee*”[tw] OR “new surgeon*”[tw]
The EMBASE search-terms included: (new resident* or july effect or july phenomenon or black Wednesday or resident turnover or physician turnover or new physician* or new trainee* or new surgeon*).mp
Data collection procedures
Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guidelines12 were followed throughout conduct and reporting of the work (Table S1). All titles and abstracts identified by electronic searching were downloaded to EndNote: Version X8. They were exported and uploaded as a combined EndNote library to the online systematic review management software Covidence. In Covidence, two authors with doctoral-level training in both medicine and public health (CZ, DM) independently assessed all titles and abstracts to determine eligibility. Conflicts were settled by consensus in consultation with a third medically-trained author (CS). Full-text copies of potentially relevant articles were obtained and independently reviewed by the same authors in order to ensure that the studies met eligibility criteria. Reasons for exclusion were documented (Figure 2). A data collection form was devised by authors in Qualtrics in order to facilitate data collection. Two authors (CZ, DM) undertook the process of data extraction independently with discrepancies discussed between them and a third author (CS). Extracted data were analyzed using Stata Statistical Software: Version 16.0.
Figure 2.
Schematic of inclusion/exclusion criteria for the meta-analysis and readmission analysis
Assessment of risk of bias
Given the inherently observational nature of included studies, risk of bias tools designed for randomized, controlled trials could not be used. Instead, we developed an alternative topic-specific risk of bias assessment that accounted for the unique temporal nature of the July Effect in observational studies and a corresponding overall quality score based on previous work conducted by Sterne et al. for interventions (ROBINS-I),13 Wells et al. (Newcastle-Ottawa),14 and the prior July Effect systematic review.8 Risk of bias criteria are outlined in Table S2. In accordance with traditional risk of bias assessments, each component was scored as likely to be of high risk, unclear risk, or low risk. Component scores were used to determine overall study quality with each included study classified as ‘exemplary,’ ‘good,’ or ‘fair/poor.’
Measures of treatment effect and missing data
Differences in primary outcomes were calculated and reported as odds ratios (OR) with corresponding 95% confidence intervals (95%CI). The presence of risk-adjustment was determined based on the inclusion and availability of data reported in the studies. We did not impute missing outcome data or re-run logistic models. Studies that did not report one or more of the primary outcomes were omitted from the meta-analysis for that outcome.
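As an illustration of the effect measure, an unadjusted OR with a Wald 95%CI can be computed from a 2×2 table of early-versus-later counts. The numbers below are hypothetical, not drawn from any included study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table:
    a, b = events, non-events early in the year; c, d = events, non-events later."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 120 deaths/880 survivors in July-August vs 115/885 later
or_, lo, hi = odds_ratio_ci(120, 880, 115, 885)
print(f"OR {or_:.2f} (95%CI {lo:.2f}-{hi:.2f})")  # OR 1.05 (95%CI 0.80-1.38)
```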
Assessment of heterogeneity and reporting bias
Considerable statistical heterogeneity attributable to underlying population differences and large sample sizes was expected given the nature of the research question and intentionally broad inclusion criteria. Presence of heterogeneity was assessed via calculation of I-squared statistics and corresponding p-values and, where possible, through the assessment of a priori determined sub-analyses. Publication bias was assessed through visual inspection of forest plots ordered by publication year (looking for evidence of a time-trend) and visual/statistical inspection of funnel plots (Egger tests; Begg tests).
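The I-squared statistic referenced above can be sketched from study-level summary estimates; the inputs below are hypothetical log-ORs and standard errors, not values from the included studies:

```python
def cochran_q_i2(log_ors, ses):
    """Cochran's Q and the I-squared statistic for study-level log odds
    ratios and their standard errors, under a fixed-effect working model."""
    w = [1 / s**2 for s in ses]  # inverse-variance weights
    pooled = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - pooled)**2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0  # percent of variation beyond chance
    return q, i2

# three hypothetical studies with dispersed estimates
print(cochran_q_i2([0.10, 0.00, -0.10], [0.05, 0.05, 0.05]))  # (8.0, 75.0)
```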
Data synthesis and subgroup analyses
Meta-analyses of primary outcomes and a priori determined sub-analyses were conducted using random-effects models in Stata Statistical Software: Version 16.0 with the ‘metan’ command. Sub-analyses stratified studies by medical specialty (surgical-versus-non-operative), country of origin (US-versus-international), definition of the time-period comparator (last quarter-versus-remainder of the year), and exclusion of lower-quality ‘fair/poor’ studies.
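A minimal sketch of the random-effects pooling (the DerSimonian-Laird estimator, which Stata’s ‘metan’ command uses by default for random-effects models), assuming hypothetical study-level log-ORs and standard errors:

```python
import math

def dl_pooled_or(log_ors, ses, z=1.96):
    """DerSimonian-Laird random-effects pooled OR with 95% CI.
    log_ors, ses: study-level log odds ratios and standard errors."""
    w = [1 / s**2 for s in ses]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sw
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_ors))
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)  # between-study variance
    w_re = [1 / (s**2 + tau2) for s in ses]        # random-effects weights
    est = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(est), math.exp(est - z * se), math.exp(est + z * se)

# three hypothetical studies, symmetric around the null
print(dl_pooled_or([0.10, 0.00, -0.10], [0.05, 0.05, 0.05]))
```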
II. 30-day readmission analysis
Study population and data source
Due to limited reporting of readmission in the literature (Figure S1), we conducted secondary analyses of 30-day readmission using 2010–2016 data from the Nationwide Readmissions Database15 (NRD; Figure 2). The NRD is the largest publicly-available all-payer inpatient database in the US, inclusive of longitudinal information that, when weighted, yields national estimates of hospital stays. It includes information on patient encounters for patients of all ages with all forms of insurance and contains data on up to 15 ICD-9/10-CM procedure and 35 ICD-9/10-CM diagnosis codes. Secondary diagnosis codes are used to determine comorbidities. Nationally-weighted data for surviving patients discharged in January-November of each year16 were obtained for all patients with primary diagnosis/procedure codes corresponding to 3 common medical (acute myocardial infarction, acute ischemic stroke, and pneumonia) and 4 surgical (elective CABG, elective colectomy, craniotomy, and hip fracture) conditions.
Explanatory variables, outcome measures, and statistical analyses
Condition-specific differences in admissions early (July-August) versus later in the academic year at urban teaching hospitals were compared with those at urban non-teaching hospitals using multivariable logistic difference-in-difference regression (risk-adjusted logistic regression that included terms for teaching status: teaching-versus-non-teaching, time of year: early-versus-late, and an interaction between the two). Two later-in-the-year comparators were used: September-June (remainder of the academic year) and April-May (end of the academic year, omitting June as a run-in/washout month since many residency programs begin transitions mid-June). Presence of a July Effect would be indicated by higher readmissions early in the academic year at teaching hospitals (i.e. OR for admission early-versus-later among teaching hospitals >1.00) at levels greater than those observed in non-teaching hospitals during the same time-period (significant effect modification on the multiplicative scale seen in the interaction term, i.e. OR for the interaction between time of year [early-versus-late] and teaching status [teaching-versus-non-teaching] >1.00).
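The logic of the interaction test can be illustrated with unadjusted counts: in a saturated logistic model with terms for teaching status, time of year, and their interaction, the interaction OR reduces to the ratio of the two stratum-specific early-versus-late ORs. The counts below are hypothetical, not NRD values:

```python
def did_interaction_or(teach_early, teach_late, non_early, non_late):
    """Each argument is a (readmitted, not_readmitted) count pair.
    Returns the early-vs-late OR within each hospital stratum and the
    interaction OR, which in a saturated logistic model equals their ratio."""
    def odds(cell):
        return cell[0] / cell[1]
    or_teach = odds(teach_early) / odds(teach_late)  # early-vs-late, teaching
    or_non = odds(non_early) / odds(non_late)        # early-vs-late, non-teaching
    return or_teach, or_non, or_teach / or_non       # interaction OR

# hypothetical counts; a July Effect would require an interaction OR > 1.00
or_t, or_n, inter = did_interaction_or((150, 850), (140, 860), (100, 900), (95, 905))
print(f"teaching {or_t:.3f}, non-teaching {or_n:.3f}, interaction {inter:.3f}")
```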
Dates of readmission were measured from discharge. Patients who were discharged and re-admitted within the same day (presumed inter-hospital transfers) were not counted as readmissions. Those who had previously been admitted with the same condition in the 30 days prior to their index admission were not counted as index admissions. Condition-specific variations in 30-day readmission for teaching and non-teaching hospitals were plotted by month between January 2010 and November 2016 in order to assess for temporal trends.
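The counting rules above can be sketched for a single patient’s condition-specific stays. The function and its simplified date handling are illustrative assumptions, not the actual NRD processing code:

```python
from datetime import date

def classify_stays(stays):
    """stays: chronologically sorted (admit_date, discharge_date) pairs for one
    patient and condition. Same-day re-admissions (presumed inter-hospital
    transfers) are not counted as readmissions, and a stay within 30 days of
    the current index discharge is counted as a readmission rather than a new
    index admission. Illustrative sketch only."""
    n_index = n_readmit = 0
    prev_discharge = index_discharge = None
    for admit, discharge in stays:
        if prev_discharge is not None and admit == prev_discharge:
            prev_discharge = discharge                 # same-day transfer: skip
            continue
        if index_discharge is not None and (admit - index_discharge).days <= 30:
            n_readmit += 1                             # 30-day readmission
        else:
            n_index += 1                               # new index admission
            index_discharge = discharge
        prev_discharge = discharge
    return n_index, n_readmit
```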
All models were risk-adjusted for patient age on index admission, gender, and presence of 31 individual Elixhauser comorbidities. The models accounted for NRD design-weights, sampling strata, and clustering of patients within hospitals and were analyzed using robust standard errors. Missing data were handled using complete-case analysis (<0.1% of patient encounters were removed due to missingness).
Results
I. Meta-analysis
Initial queries of the electronic databases returned 2,618 articles (1,291 from MEDLINE and 1,347 from EMBASE; Figure 2). Of these, 672 were removed as duplicates. An additional 52 were not original articles (viewpoints, invited commentaries), 40 were unpublished abstracts (Table S3), and 1,741 were deemed not relevant to the research topic, resulting in a total of N=113 included studies. Overall quality considerations and the list of included studies are presented in Table S4. A searchable collection of information on each included study’s year of publication, first author’s last name, title, medical specialty, country of origin, definition of early-versus-later in the academic year, sample-size, outcomes assessed, and corresponding presence/absence of a July Effect for each outcome is available in Supplementary Data File (uploaded separately as a csv).
In brief, of the 113 included studies (Table S4), 27 were of ‘exemplary’ quality (23.9%), 28 were ‘good’ (24.8%), and 58 were ‘fair/poor’ (51.3%). The majority (n=95; 84.1%) involved US patients. For international studies (n=18), the most frequent publications reported results from Canada (n=3), South Korea (n=3), and the United Kingdom (n=3). Slightly more than one-half (n=63; 55.8%) involved surgical specialties, of which orthopaedics (n=18), neurosurgery (n=17), trauma (n=11), and cardiac surgery (n=5) were the most common. An additional 10 (8.8%) reported results from mixed medical and surgical populations, and 40 (35.4%) involved results from non-operative fields. Among non-operative disciplines, publications related to critical care (n=7), obstetrics (n=6), emergency medicine (n=6), and neurology (n=6) were the most common. Eighty-seven studies (77.0%) looked at changes early-versus-later in the academic year, while 26 (23.0%) considered year-long ‘n-month’ variation. In total, 35 different outcomes and several condition-specific metrics were measured; 9 outcomes from 21 studies (21/113; 18.6%) suggested positive or mixed evidence of a July Effect (Table S5). Ninety-two studies (92/113; 81.4%) reported no evidence of a July Effect.
Mortality
Forty-nine studies provided adequate data to assess mortality. Of these, 7 included multiple populations of which up to 3 from each study were randomly selected using a random-number generator in Stata for inclusion in the meta-analysis (4 contained results from 2 populations, in which case, both populations were included), yielding a total of 56 mortality endpoints (Figure 3A). Assessment of bias found no evidence of publication bias or asymmetry in the results and no evidence of a change in outcomes over time (p>0.05). The meta-analysis showed no evidence of a July Effect on mortality (OR[95%CI] 1.01[0.98–1.05]).
Figure 3.
Random-effects meta-analysis showing the overall relative odds of A) mortality (OR[95%CI]: 1.01[0.98–1.05]) and B) major morbidity (OR[95%CI]: 1.01[0.99–1.04]) early-versus-later in the academic year
Similar findings of a lack of evidence of a July Effect on mortality remained consistent across sub-analyses after removing ‘fair/poor’ quality studies (1.02[0.96–1.08]; Figure S2); limiting studies to those comparing early admissions with admissions from the last academic quarter (1.03[0.96–1.11]; Figure S3A) and remainder of the academic year (0.96[0.94–0.97], a statistically significant ‘protective’ effect of uncertain clinical significance; Figure S3B); surgical (1.03[0.97–1.11]; Figure S4A) versus non-operative specialties (0.99[0.96–1.03]; Figure S4B); and US (0.99[0.96–1.01]; Figure S5A) versus international origins (1.06[0.96–1.17]; Figure S5B).
Major morbidity
A total of 40 studies reporting outcomes for morbidity were analyzed; 2 included 2 populations (4 represent 2 studies published by the same author in the same year), resulting in 42 morbidity endpoints (Figure 3B). The meta-analysis showed no evidence of a July Effect on morbidity (OR[95%CI] 1.01[0.99–1.04]). Consistent with the analyses for mortality, no evidence of bias or temporal publication patterns was found for major morbidity (p>0.05) nor was evidence of worse outcomes in any sub-analysis.
Readmission
Five studies reporting outcomes for readmission were included in the meta-analysis (Figure S1). None reported significant results, and the pooled estimate showed no evidence of a July Effect (OR[95%CI] 1.00[0.95–1.13]); however, 3 small studies had positive point estimates with wide confidence intervals,17–19 and 1 larger ‘fair/poor’ quality study dominated the results (64.0% weight; OR[95%CI] 1.00[0.89–1.10]).20
II. 30-day readmission analysis
A combined total of 5.98 million observed patient encounters in NRD met inclusion criteria, representing a weighted total of 13.09 million patient encounters nationwide. Totals for each condition and reasons for exclusion are presented in Figure 2. Differences in baseline demographics during index hospitalization for the example conditions acute myocardial infarction (medical) and elective CABG (surgical) are presented in Tables S6–S7. In both populations, differences between urban teaching and non-teaching hospitals were minimal, and changes over calendar time were virtually non-existent.
Variations in 30-day readmission among urban teaching and non-teaching hospitals are plotted by month from 2010–2016 for 3 conditions in Figure 4: acute myocardial infarction, acute ischemic stroke, and hip fracture. Vertical lines denote admissions in July and August each year. While rates of readmissions varied among the conditions ranging from approximately 10–18%, no apparent differences in rates of readmission between urban teaching and non-teaching hospitals were found nor was there evidence of higher rates of readmission in July or August for any condition in any calendar year.
Figure 4.
Changes in 30-day readmission throughout the calendar year among urban teaching (black circles) and non-teaching (grey squares) hospitals, nationally-weighted US data 2010–2016. Vertical lines indicate early hospitalizations in July and August each year.
Risk-adjusted results of the logistic difference-in-difference regression models are presented in Table 1. Whether using a temporal comparator of July-August versus April-May or September-June, no evidence of a July Effect on 30-day readmission was found. In 6 of the 7 conditions, temporal comparisons in both urban teaching and non-teaching hospitals were not significant (e.g. acute ischemic stroke July-August versus September-June: urban teaching OR[95%CI]: 0.99[0.97–1.01]; urban non-teaching 0.99[0.96–1.01]) and did not differ between groups (interaction term: 1.00[0.97–1.04]; p=0.885). For pneumonia, although July-August rates of readmission in urban teaching hospitals were higher than those during the remainder of the year (OR[95%CI]: 1.09[1.06–1.11]), the risk-adjusted seasonal change was smaller than that observed in urban non-teaching hospitals (1.12[1.10–1.15]), suggesting, if anything, a marginally ‘protective’ effect of presenting to a teaching hospital early in the academic year with pneumonia (interaction term: 0.97[0.94–1.00]; p=0.062).
Table 1.
Difference-in-difference assessment of a July Effect on 30-day readmission for seven common medical and surgical conditions, nationally-weighted US data 2010–2016
30-day readmission among survivors (measured from date of discharge)

July-August vs September-June:

| Condition | Urban teaching hospitals, OR (95%CI) | Urban non-teaching hospitals, OR (95%CI) | Interaction: teaching status × time of year, OR (95%CI) | p-value |
|---|---|---|---|---|
| Medical | | | | |
| Acute myocardial infarction | 0.98 (0.96–1.00) | 0.98 (0.96–1.00) | 1.00 (0.97–1.03) | 0.854 |
| Acute ischemic stroke | 0.99 (0.97–1.01) | 0.99 (0.96–1.01) | 1.00 (0.97–1.04) | 0.885 |
| Pneumonia | 1.09 (1.06–1.11) | 1.12 (1.10–1.15) | 0.97 (0.94–1.00) | 0.062 |
| Surgical | | | | |
| Elective CABG | 1.01 (0.92–1.11) | 1.07 (0.94–1.22) | 0.95 (0.81–1.11) | 0.511 |
| Elective colectomy | 1.09 (0.99–1.19) | 0.99 (0.87–1.14) | 1.09 (0.93–1.29) | 0.275 |
| Craniotomy | 0.98 (0.93–1.04) | 0.96 (0.87–1.05) | 1.03 (0.93–1.14) | 0.585 |
| Hip fracture | 1.00 (0.97–1.02) | 0.99 (0.96–1.01) | 1.01 (0.97–1.05) | 0.636 |

July-August vs April-May:

| Condition | Urban teaching hospitals, OR (95%CI) | Urban non-teaching hospitals, OR (95%CI) | Interaction: teaching status × time of year, OR (95%CI) | p-value |
|---|---|---|---|---|
| Medical | | | | |
| Acute myocardial infarction | 0.97 (0.95–1.00) | 0.97 (0.94–0.99) | 1.01 (0.97–1.05) | 0.708 |
| Acute ischemic stroke | 1.00 (0.97–1.03) | 0.97 (0.94–1.01) | 1.02 (0.98–1.07) | 0.317 |
| Pneumonia | 1.08 (1.04–1.15) | 1.12 (1.10–1.15) | 0.96 (0.92–1.00) | 0.063 |
| Surgical | | | | |
| Elective CABG | 0.95 (0.84–1.08) | 1.03 (0.87–1.23) | 0.92 (0.74–1.14) | 0.456 |
| Elective colectomy | 0.99 (0.88–1.11) | 0.89 (0.75–1.04) | 1.11 (0.92–1.37) | 0.275 |
| Craniotomy | 0.97 (0.89–1.06) | 0.85 (0.76–0.95) | 1.15 (0.99–1.32) | 0.050 |
| Hip fracture | 0.99 (0.96–1.03) | 0.99 (0.95–1.02) | 1.01 (0.96–1.05) | 0.772 |
Nationally-weighted logistic regression accounted for NRD design weights, sampling strata, and clustering of patients within hospitals.
Models were analyzed using robust standard errors and risk-adjusted for: patient age on index hospital admission, gender, and presence of Elixhauser comorbidities.
December discharges in each year were omitted in order to allow for a complete 30-day follow-up for included patients.
Two-sided p-values<0.05 were considered significant.
Discussion
Over the last 30 years, the concept of the July Effect has garnered substantial attention in both the lay media and medical literature. Despite continued growth of publications on this topic, there has not been a critical evaluation across a range of medical and surgical conditions. Our comprehensive meta-analysis identified 113 studies published between 1989 and 2019. Collectively, they demonstrate no evidence of a July Effect on mortality, major morbidity, or readmission. Stratified analyses showed no evidence of worse patient outcomes early in the academic year in either medical or surgical specialties within the US or in the published outcomes of countries around the globe. Moreover, evidence of the July Effect has not changed over time (e.g. there has been no progression in the published literature from early significant to more recent non-significant results). Secondary analyses of national US billing claims revealed no evidence of a July Effect on 30-day readmission for 3 common medical (acute myocardial infarction, acute ischemic stroke, and pneumonia) and 4 surgical (elective CABG, elective colectomy, craniotomy, and hip fracture) conditions.
The majority of studies included in the meta-analysis reported no evidence of a July Effect. Results from the remaining studies were commonly mixed and primarily pointed toward system-level discrepancies in efficiency as opposed to increases in adverse outcomes. For example, between 1989–2019, 4 studies reported increased frequencies/rates of documented near-miss, medication error, or other ‘undesirable’ events;21–24 1 discussed prolonged door-to-disposition times in non-emergent obstetric admissions;25 and another pointed toward reduced efficiencies in Emergency Department work flow.26 Such findings are in keeping with a prior systematic review published a decade earlier which noted evidence of “efficiency decreases” in the literature.8
The lack of evidence of a July Effect for major adverse outcomes could be attributable to a variety of factors including: increased vigilance among senior staff supervising new trainees, assistive coordination provided by nurses and other members of the clinical team, increased caution among trainees, and/or a lack of direct access to patients when trainees begin. The overall consistency of the negative results argues against the suggestion that temporal changes in resident management have contributed to the lack of an association. Alternatively, the results could indicate that the July Effect as currently conceptualized does not exist. The overall lack of significant findings in the published literature suggests that much of the ongoing debate is driven by a widely-held perception that a July Effect exists. While not supported by the published literature, such a perception could reasonably stem from anecdotal experience and years of pragmatic learned caution in training and clinical practice.
Taken together, the findings suggest that it may be time to pause the ongoing investigation of similarly-themed research and to instead pivot toward other aspects of systemic efficiency that contribute to patient care. Future investigations of system-level differences in seasonal outcomes are encouraged to incorporate non-teaching hospital or other non-trainee-involved temporal controls in order to account for spurious findings that arise due to seasonal trends (e.g. the distinction for pneumonia readmission seen in the results). Building on the findings of the past 30 years, future researchers are encouraged to address remaining seasonal issues such as delays in non-emergent service provision25 and inaccuracies in note-writing or prescribing orders21–24 that could be affecting the experience, if perhaps not the outcomes, of patient care. Further work could also explore whether a meaningful protective effect on mortality exists early in versus during the remainder of the academic year.
There are limitations to our study. Due to the nature of the research question, no randomized, controlled trials have been published. Included observational studies intentionally spanned a wide array of patient populations, many with considerable sample sizes. As a result, they exhibited considerable heterogeneity and were of mixed quality. The use of random-effects models, a priori determined sub-analyses (including exclusion of ‘fair/poor’ quality studies), and assessment of publication bias and trends over time helped to limit this issue. Assessment of the NRD is subject to the limitations of administrative data. In the NRD, hospital readmissions are not tracked across states, and records do not extend beyond the end of a given calendar year.16 For this reason, hospital discharges (and admissions) in December were excluded in order to ensure that full 30-day readmission rates for each included month could be obtained.
The results of our study provide a comprehensive review of the published literature, demonstrating a lack of evidence of a July Effect on major adverse outcomes within teaching hospitals, including mortality, major morbidity, and readmission. They illustrate that the lack of evidence has not changed over time and does not differ among specialties or countries. Instead of repeating similar studies and developing additional July Effect interventions,27–29 the results suggest that researchers, hospital administrators, and residency/fellowship program directors might be better served targeting their attention and quality improvement efforts toward aspects of efficiency experienced by patients throughout the academic year that are likely to influence perceptions of the July Effect, including the timeliness and quality of clinical care.
Supplementary Material
Acknowledgments
Conflicts of interest and sources of funding: The authors declare that they have no conflicts of interest relevant to the analysis to report. Cheryl K Zogg, MSPH, MHS, is supported by NIH Medical Scientist Training Program Training Grant T32GM007205. She is the PI of an F30 award through the National Institute on Aging F30AG066371 entitled “The ED.TRAUMA Study: Evaluating the Discordance of Trauma Readmission And Unanticipated Mortality in the Assessment of hospital quality.”
References
- 1. Buchwald D, Komaroff AL, Cook EF, Epstein AM. Indirect costs for medical education. Is there a July phenomenon? Arch Intern Med. 1989;149(4):765–768.
- 2. Mims LD, Porter M, Simpson KN, Carek PJ. The “July Effect”: A look at July medical admissions in teaching hospitals. J Am Board Fam Med. 2017;30(2):189–195.
- 3. Phillips DP, Barker GEC. A July spike in fatal medication errors: A possible effect of new medical residents. J Gen Intern Med. 2010;25(8):774–779.
- 4. Myers L, Mikhael B, Currier P, et al. The association between physician turnover (the “July Effect”) and survival after in-hospital cardiac arrest. Resuscitation. 2017;114:133–140.
- 5. Hennessey PT, Francis HW, Gourin CG. Is there a “July Effect” for head and neck cancer surgery? Laryngoscope. 2013;123(8):1889–1895.
- 6. Englesbe MJ, Fan Z, Baser O, Birkmeyer JD. Mortality in Medicare patients undergoing surgery in July in teaching hospitals. Ann Surg. 2009;249(6):871–876.
- 7. Englesbe MJ, Pelletier SJ, Magee JC, et al. Seasonal variation in surgical outcomes as measured by the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP). Ann Surg. 2007;246(3):456–462.
- 8. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July Effect”: Impact of the academic year-end changeover on patient outcomes: A systematic review. Ann Intern Med. 2011;155(5):309–315.
- 9. Blough JT, Jordan SW, De Oliveria GS Jr, Vu MM, Kim JYS. Demystifying the “July Effect” in plastic surgery: A multi-institutional study. Aesthetic Surg J. 2018;38(2):212–224.
- 10. Riml S, Larcher L, Kompatscher P. Complete excision of nonmelanotic skin cancer: A matter of surgical experience. Ann Plast Surg. 2013;70(1):66–69.
- 11. Budäus L, Sun M, Abdollah F, et al. Impact of surgical experience on in-hospital complication rates in patients undergoing minimally invasive prostatectomy: A population-based study. Ann Surg Oncol. 2011;18(3):839–847.
- 12. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: A proposal for reporting. J Am Med Assoc. 2000;283(15):2008–2012.
- 13. Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: A tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355.
- 14. Wells G, Shea B, O’Connell D, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. 2019. Available from: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed July 14, 2020.
- 15. Agency for Healthcare Research and Quality. Overview of the Nationwide Readmissions Database (NRD). 2020. Available from: https://www.hcup-us.ahrq.gov/nrdoverview.jsp. Accessed July 14, 2020.
- 16. Zogg CK, Pawlik TM, Haider AH. Three common methodological issues in studies of surgical readmission rates: The trouble with readmissions. JAMA Surg. 2018;153(12):1074–1076.
- 17. Bresler AY, Bavier R, Kalyoussef E, Baredes S, Park RCW. The “July Effect”: Outcomes in microvascular reconstruction during resident transitions. Laryngoscope. 2020;130(4):893–898.
- 18. Lin Y, Mayer RR, Verla T, Raskin JS, Lam S. Is there a “July Effect” in pediatric neurosurgery? Childs Nerv Syst. 2017;33(8):1367–1371.
- 19. Murtha TD, Kunstman JW, Healy JM, Yoo PS, Salem RR. A critical appraisal of the July Effect: Evaluating complications following pancreaticoduodenectomy. J Gastrointest Surg. 2019. [Epub ahead of print].
- 20. Kirshenbaum EJ, Blackwell RH, Li B, et al. The July Effect in urological surgery - Myth or reality? Urol Pract. 2019;6(1):45–51.
- 21. Phillips DP, Barker GEC. A July spike in fatal medication errors: A possible effect of new medical residents. J Gen Intern Med. 2010;25(8):774–779.
- 22. Shah AY, Abreo A, Akar-Ghibril N, Cady RF, Shah RK. Is the “July Effect” real? Pediatric trainee reported medical errors and adverse events. Pediatr Qual Saf. 2017;2(2):e018.
- 23. Haller G, Myles PS, Taffé P, Perneger TV, Wu CL. Rate of undesirable events at beginning of academic year: Retrospective cohort study. BMJ. 2009;339(7727):b3974.
- 24. Chow KM, Szeto CC, Chan MHM, Lui SF. Near-miss errors in laboratory blood test requests by interns. QJM. 2005;98(10):753–756.
- 25. Mehra S, Gavard JA, Gross G, Myles T, Nguyen T, Amon E. Door to disposition times for obstetric triage visits: Is there a July Phenomenon? J Obstet Gynaecol. 2016;36(2):187–191.
- 26. Bahl A, Hixson CC. July Phenomenon impacts efficiency of emergency care. West J Emerg Med. 2019;20(1):157–162.
- 27. Levy K, Voit J, Gupta A, Petrilli CM, Chopra V. Examining the July Effect: A national survey of academic leaders in medicine. Am J Med. 2016;129(7):754.
- 28. Phillips E, Harris C, Lee WW, et al. Year-end clinic handoffs: A national survey of academic internal medicine programs. J Gen Intern Med. 2017;32(6):667–672.
- 29. Cohen ER, Barsuk J, Moazed F, et al. Making July safer: Simulation-based mastery learning during intern boot camp. Acad Med. 2013;88(2):233–239.