Abstract
Purpose
This study investigated associations between program-level variables (organizational structure, workload, and learning environment) and the development of depressive symptoms among residents during internship.
Method
Between 2012 and 2015, 1,276 internal medicine interns from 54 U.S. residency programs completed the Patient Health Questionnaire-9 (PHQ-9) before internship and then quarterly throughout the internship year. The training environment was assessed via a resident questionnaire (RQ) and average weekly work hours. The authors gathered program structural variables from the American Medical Association Fellowship and Residency Electronic Interactive Database (FREIDA online) and program research rankings from Doximity. Associations between program-level variables and change in depressive symptoms were determined using stepwise linear regression modeling.
Results
Mean program PHQ-9 scores increased from 2.3 at baseline to 5.9 during internship (mean difference 3.6, SD 1.4, P < .001), with the mean increase ranging from −0.3 to 8.8 (interquartile range 1.1) among the programs included. In multivariable models, faculty feedback (β −0.37, 95% CI −0.62, −0.12, P = .005), learning experience in in-patient rotations (β −0.28, 95% CI −0.54, −0.02, P = .030), work hours (β 0.34, 95% CI 0.13, 0.56, P = .002), and research ranking position (β −0.25, 95% CI −0.47, −0.03, P = .036) were associated with change in depressive symptoms.
Conclusions
Poor faculty feedback, poor in-patient learning experiences, long work hours, and high institutional research rankings were associated with increased depressive symptoms among internal medicine interns. These factors are potential targets for interventions to improve resident wellness and mental health.
Many medical residents experience a marked increase in depression and other mental health problems during residency training.1,2 Depression in residents is associated with suicidal ideation,3–5 motor vehicle accidents,6 medical errors,1,3,7,8 and lower adherence to safety and practice standards,3 indicating that depression among residents has negative consequences for both residents and their patients.
A large body of work has identified individual-level factors that are associated with depression in resident physicians.1,8,9 Specifically, female gender,1,10 high neuroticism,1,10,11 perceived medical errors,1,3,7,8 stressful life events,1,12 and low subjective well-being13 are consistently associated with depression during residency training.
In addition to individual factors, residency programs are likely to play a critical role in the mental health of resident physicians. The clinical learning environment within residency programs has been linked to the quality of resident education,14 performance,15 and well-being.16 A small number of factors related to residency programs have been associated with resident depression, including an imbalance between high effort and low reward17–19 and a low degree of job autonomy.20 However, to our knowledge no studies to date have systematically assessed a large sample of residency programs to identify program-level factors associated with depression.
Here, we report a prospective, longitudinal study of first-year residents in 54 U.S. internal medicine programs, investigating the associations between program-level measures of organizational structure, workload, and learning environment and resident depressive symptoms.
Method
Study setting and participants
As part of the Intern Health Study, a prospective longitudinal cohort study of depression and stress during medical internship, individuals who were either graduating from medical school or entering residency at participating institutions were invited to participate in the study.1 In total, we invited 3,317 individuals entering internal medicine programs during the 2012, 2013, 2014, and 2015 academic years to participate in this prospective cohort study via e-mail, two months prior to commencing their internships. E-mail invitations for 82 individuals were returned as undeliverable. A total of 1,941 of the 3,235 remaining individuals who belonged to 239 different internal medicine programs agreed to participate in the study, and returned the baseline assessment (overall response rate: 60.0%). To ensure a sufficient number of subjects providing data for each program in a given year, only programs with a minimum of five interns completing at least one of the four follow-up surveys were included. Further, programs were included only if they were listed in the American Medical Association Fellowship and Residency Electronic Interactive Database (FREIDA online).21 A total of 54 programs and 1,276 interns met the criteria, and were therefore included in this study. The University of Michigan institutional review board approved the study. All participants provided informed consent and were compensated $50 each.
Data collection
We conducted all surveys through a secure online website designed to maintain confidentiality, with participants identified only by non-decodable identification numbers. The procedures, drawn from the Intern Health Study, have been detailed in a previous publication.1
Baseline assessment.
Participants completed an online survey two months prior to commencing internship, which included questions about their age; sex; self-reported history of depression; neuroticism;22 and early life stress;23 as well as an assessment of their depressive symptoms using the Patient Health Questionnaire-9 (PHQ-9) (see Supplemental Digital Appendix 1, available at [LWW INSERT LINK], which includes a copy of the survey instruments). The PHQ-9 is a component of the Primary Care Evaluation of Mental Disorders inventory (PRIME-MD) and comprises nine self-report items designed to screen for depressive symptoms.24 For each item, the respondent indicates whether, during the previous two weeks, the depressive symptom had bothered him/her “not at all,” “several days,” “more than half the days,” or “nearly every day”; items are scored 0 to 3, yielding a total score of 0 to 27. A score of 10 or greater on the PHQ-9 has a sensitivity of 93% and a specificity of 88% for the diagnosis of major depressive disorder.25
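For illustration only (the study's analyses were conducted in SPSS), the following minimal Python sketch shows how PHQ-9 item responses might be totaled and the ≥10 screening cutoff applied; the column names and example data are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical item-level responses for five respondents; each PHQ-9 item is
# scored 0 ("not at all") to 3 ("nearly every day"), so totals range 0-27.
items = [f"phq{i}" for i in range(1, 10)]
responses = pd.DataFrame(
    np.random.default_rng(0).integers(0, 4, size=(5, 9)), columns=items
)

responses["phq_total"] = responses[items].sum(axis=1)
# Conventional screening cutoff for major depressive disorder.
responses["screen_positive"] = responses["phq_total"] >= 10
print(responses[["phq_total", "screen_positive"]])
```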
Within-internship assessments.
We contacted interns via e-mail at months 3, 6, 9, and 12 of the internship year and asked them to complete the PHQ-9 and a survey1 that inquired about duty hours (“How many hours have you worked in the past week?”). In addition to assessing depressive symptoms and duty hours, the 12-month survey assessed workload satisfaction and learning environment ratings of residency programs through a resident questionnaire (RQ)26 (see Supplemental Digital Appendix 1, available at [LWW INSERT LINK], which includes a copy of the survey instruments). The workload satisfaction (8 items, alpha = 0.85) and learning environment (9 items, alpha = 0.84) components of the RQ have been shown to be valid measures of different aspects of residents’ perspectives on their programs.26 The workload satisfaction scale contains items related to call schedule; caseload; excess load; time to read; clerical and administrative support; hospital support services; time demands; and workups. The learning environment scale includes items related to faculty feedback, counseling, and support; learning experience during in-patient rotations and scheduled conferences; instruction received; and cooperation among residents. For each item, we asked interns to indicate their agreement with the statement on a five-point Likert scale ranging from “strongly disagree” to “strongly agree.” RQ completion was not required for participants’ inclusion in the analyses. Each of the 54 programs assessed in the present study included at least three interns who completed the RQ in the 12th month of their internship, with a range of 3 to 65 interns (interquartile range [IQR] = 12) among the different programs.
Organizational structure of residency programs.
From FREIDA online,21 we collected residency program information about size (number of residency positions), number of faculty, proportion of full-time faculty, average hours of scheduled lectures/conferences per week during the first year, and whether the program offers awareness and management of fatigue in residents/fellows. Data available on FREIDA online come primarily from the National Graduate Medical Education Census (GME Track), an annual online survey conducted by the American Medical Association and the Association of American Medical Colleges.21 When program size information could not be accessed from FREIDA online, we obtained it from the institutions’ websites.
We extracted information regarding each program’s research ranking position on March 17, 2017, from Doximity, an online professional network for physicians in the United States.27 Doximity is currently the largest networking community of physicians in the United States, enrolling more than 70% of U.S. physicians.28 Doximity calculates a research output score for each residency program using the collective h-index of publications authored by alumni who graduated within the past 15 years, as well as research grants awarded.27 The research ranking for each program is determined by comparing research output scores within the same specialty.27
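The h-index underlying Doximity’s research output score is a standard bibliometric measure: the largest h such that h publications each have at least h citations. A minimal illustrative sketch of its computation follows; how Doximity weights alumni publications and grant funding is proprietary and not reproduced here.

```python
def h_index(citations):
    """Largest h such that at least h publications have h or more citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five publications with these citation counts yield an h-index of 3.
print(h_index([10, 6, 3, 2, 1]))  # -> 3
```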
Statistical analyses
Program-level prevalence and change in depressive symptoms.
To estimate the mean prevalence of depression within residency programs, we determined the number of individuals who scored 10 or higher on the PHQ-9 at one or more quarterly assessments. Changes in the depressive symptoms of individual participants (PHQ-change) were calculated by subtracting the baseline PHQ-9 score from the mean PHQ-9 score across the quarterly assessments (PHQ-change = mean PHQ-9 at the 3-, 6-, 9-, and 12-month assessments − baseline PHQ-9). We calculated the mean PHQ-change for each program to estimate program-level changes in depressive symptoms. The significance of the change in program-level mean PHQ-9 scores from baseline to the internship year was assessed with a paired t-test.
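A minimal sketch of the PHQ-change calculation and paired t-test, written in Python for illustration (the actual analyses used SPSS); the file name, wave labels, and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format file: one row per intern per assessment, with
# columns "program", "intern_id", "wave" (baseline, m3, m6, m9, m12), "phq9".
df = pd.read_csv("intern_phq9.csv")

wide = df.pivot_table(index=["program", "intern_id"], columns="wave", values="phq9")

# PHQ-change = mean PHQ-9 across the quarterly assessments minus the baseline score.
wide["internship_mean"] = wide[["m3", "m6", "m9", "m12"]].mean(axis=1)
wide["phq_change"] = wide["internship_mean"] - wide["baseline"]

# Program-level means, and a paired t-test of baseline vs. internship scores.
program = wide.groupby("program")[["baseline", "internship_mean", "phq_change"]].mean()
t, p = stats.ttest_rel(program["internship_mean"], program["baseline"])
print(f"mean program-level change = {program['phq_change'].mean():.1f}, t = {t:.1f}, P = {p:.3g}")
```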
Extraction and transformation of program-level variables.
In addition to the mean change in depressive symptoms, for each program we calculated average duty hours and mean RQ scores for learning environment, workload satisfaction, and individual RQ items. To control for individual-level factors, we also included the factors previously shown to be associated with depression during internship (female gender, baseline PHQ-9 depressive symptoms, childhood stress, and neuroticism).1
Kolmogorov–Smirnov normality tests were conducted for all numerical variables, with square-root transformation applied to variables that did not present a normal distribution. We excluded the variable “offering awareness and management of fatigue in residents/fellows” from the analysis, since all programs included in this study reported “yes” for this variable on FREIDA online.
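As an illustration of this step (again in Python rather than SPSS, and simplified; the exact test options used in the original analyses may differ), a variable could be square-root transformed when a Kolmogorov–Smirnov test against a normal distribution with the sample mean and SD rejects normality:

```python
import numpy as np
from scipy import stats

def sqrt_if_nonnormal(x, alpha=0.05):
    """Square-root transform a variable when a Kolmogorov-Smirnov test against
    a normal distribution (with the sample mean and SD) rejects normality;
    otherwise return it unchanged."""
    x = np.asarray(x, dtype=float)
    stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    return np.sqrt(x) if p < alpha else x

# Example with a hypothetical right-skewed variable (e.g., program size).
rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=4, sigma=0.5, size=54)
print(sqrt_if_nonnormal(sizes)[:3])
```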
Stability of program-level variables across cohorts.
To determine whether program effects were stable across different cohorts of interns attending the same residency programs, we used Pearson correlations to assess the associations of program-level measures of change in depressive symptoms, learning environment, and workload between the initial (2012–2013) and later (2014–2015) cohorts.
Program-level predictors of change in depressive symptoms.
To identify program-level variables associated with change in depressive symptoms within residency programs, we first used Pearson correlations to identify which variables were correlated with mean change in depressive symptoms. Subsequently, significant variables were entered into a stepwise linear regression model to identify significant predictors while accounting for collinearity among variables.
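A minimal sketch of this two-stage procedure, correlation screening followed by forward stepwise selection, written in Python for illustration (full stepwise selection also removes previously entered variables whose significance falls below a removal threshold; only forward entry is shown here). The data frame and variable names are hypothetical.

```python
import statsmodels.api as sm
from scipy import stats

def screen_predictors(df, outcome, candidates, alpha=0.05):
    """Keep candidate variables whose Pearson correlation with the outcome is significant."""
    keep = []
    for var in candidates:
        r, p = stats.pearsonr(df[var], df[outcome])
        if p < alpha:
            keep.append(var)
    return keep

def forward_stepwise(df, outcome, candidates, enter=0.05):
    """Forward selection: at each step, add the candidate with the smallest
    p-value below the entry threshold, then refit."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

# Usage with a hypothetical program-level data frame `programs` (one row per program):
# model = forward_stepwise(programs, "phq_change",
#                          screen_predictors(programs, "phq_change", candidate_vars))
```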
Additionally, to assess whether significant program-level associations were driven by differences between subjects that were present before the start of the internship, we conducted a multivariable stepwise linear regression analysis, including individual-level variables previously associated with change in depressive symptoms.1
Cross-cohort predictors of depressive symptoms.
To accurately estimate the effect size of program-level predictors of change in depressive symptoms, we performed a two-step secondary analysis using data from 2012 and 2013 cohorts as a training set to identify significant predictors (2012–2013 training), and data from 2014 and 2015 cohorts as a test set for our predictions (2014–2015 test).
First, we conducted Pearson correlations to identify which program-level variables were associated with mean change in depressive symptoms in the 2012–2013 training set, and entered all significant variables into a stepwise linear regression model. Second, we used the regression model constructed from the 2012–2013 training set to predict mean change in depressive symptoms in the 2014–2015 test set. By testing whether significant program-level predictors identified in one cohort of a program’s interns predicted change in depressive symptoms in a different cohort of that program’s interns, this cross-cohort analysis has the potential to provide a more accurate estimate of the effect of program-level predictors of change in depressive symptoms, while minimizing the possible influence of interns’ individual characteristics on their ratings of the program variables assessed.
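A minimal sketch of this train/test procedure in Python (for illustration; the study’s analyses were run in SPSS), using hypothetical file, column, and cohort labels.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical program-level file: one row per program per cohort group, with the
# predictors retained by the 2012-2013 stepwise model and the observed PHQ-change.
programs = pd.read_csv("program_level.csv")
predictors = ["faculty_feedback", "rotation_value", "mean_work_hours"]

train = programs[programs["cohort"] == "2012-2013"]
test = programs[programs["cohort"] == "2014-2015"]

# Fit the model on the training cohorts only.
model = sm.OLS(train["phq_change"], sm.add_constant(train[predictors])).fit()

# Apply the training-set coefficients to the later cohorts and compute
# the proportion of test-set variance explained.
pred = model.predict(sm.add_constant(test[predictors]))
ss_res = ((test["phq_change"] - pred) ** 2).sum()
ss_tot = ((test["phq_change"] - test["phq_change"].mean()) ** 2).sum()
print(f"Variance explained in the 2014-2015 test set: {1 - ss_res / ss_tot:.1%}")
```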
All analyses were performed using SPSS statistical software, version 21.0 (SPSS Inc, Chicago, Illinois).
Results
Representativeness of the sample
A total of 1,276 individuals from 54 programs participated. Compared with all internal medicine programs registered on FREIDA, the programs included in this study had a similar proportion of women (581/1,276, 45.5%, versus 42.2%) but a larger average program size (123 versus 55 residents).29 The number of individuals included per program ranged from 5 to 101. Characteristics of residents and programs included in the study are presented in Table 1.
Table 1.
Participant and Program Characteristics, From a Multi-Residency Study of Training Environment and Depression, 2012–2015
| Characteristic | Measure |
|---|---|
| Participants (N = 1,276) | |
| Cohort, no. (%) | |
| 2012 | 307 (24.1) |
| 2013 | 256 (20.1) |
| 2014 | 233 (18.3) |
| 2015 | 480 (37.6) |
| Women, no. (%) | 581 (45.5) |
| Age, mean (SD) | 27.2 (2.5) |
| Programs (N = 54) | |
| Type of hospital, no. (%) | |
| University-based | 48 (88.9) |
| Community-based | 6 (11.1) |
| Research output ranking positions, range (median) | 1 – 278 (32.5) |
| U.S. region, no. (%) | |
| West | 8 (14.8) |
| Midwest | 12 (22.2) |
| Northeast | 19 (35.2) |
| South | 15 (27.8) |
Program-level prevalence and change in depressive symptoms
Considering the criteria for major depression developed by Kroenke et al25 (PHQ-9 ≥ 10), the mean program-level prevalence of individuals who met these criteria at one or more quarterly assessments was 36.6% (SD = 17.8%), with prevalence rates ranging from 0.0% to 80.0% (IQR = 15.4%) among the 54 programs in the full study set (2012 to 2015, N = 1,276).
The mean program score on PHQ-9 changed from 2.3 (SD = 0.8) pre-internship to 5.9 (SD = 1.8), 6.0 (SD = 1.5), 5.9 (SD = 2.1), and 5.5 (SD = 1.8) at 3, 6, 9, and 12 months of internship, respectively. For 53 of 54 programs, mean scores of depressive symptoms increased from baseline to quarterly assessments. The mean program-level change in depressive symptoms from pre-internship to internship was 3.6 points (SD 1.4, paired t-test 19.5, P < .001), with a range of −0.3 to 8.8 (IQR = 1.1) among the different programs included.
Stability of program-level variables across cohorts
To evaluate whether workload, learning environment, and change in depressive symptoms were stable across different cohorts of interns attending the same residency programs, we performed correlational analyses for the 49 programs (N = 1,248) with data in both the initial (2012–2013) and later (2014–2015) cohorts. Program-level changes in depressive symptoms (r = .30, P = .037) and ratings of workload (r = .61, P < .001) and learning environment (r = .34, P = .032) were significantly correlated across cohorts. This suggests that program effects on resident mental health, workload, and learning environment were relatively stable over time, supporting the subsequent investigation of associations between program-level variables and resident depressive symptoms.
Predictors of program-level change in depressive symptoms
Table 2 presents the associations of program-level variables with change in depressive symptoms for the full study set (2012 to 2015).
Table 2.
Univariate Regression Coefficients of Program-Level Variables Associated With Changes in Depressive Symptoms, From a Multi-Residency Study of Training Environment and Depression, 2012–2015
| Variable | Regression coefficient (P value) |
|---|---|
| Proportion of females | 0.20 (.15) |
| Mean age | 0.24 (.81) |
| Mean duty hours | 0.38 (.005)a |
| Program size | 0.23 (.10) |
| Proportion faculty per resident | −0.06 (.70) |
| Proportion of full-time faculty | −0.10 (.51) |
| Lecture hours | −0.03 (.86) |
| Research ranking position | −0.27 (.047)a |
| Satisfaction with caseload | −0.36 (.007)a |
| Heavy call schedule | 0.18 (.19) |
| Reasonable time demands | −0.37 (.006)a |
| Satisfaction with hospital support services | −0.39 (.003)a |
| Reasonable number of workups on call days | −0.38 (.005)a |
| Satisfaction with administrative support | −0.30 (.025)a |
| Excessive workload | 0.35 (.01)a |
| Lack of time to read | 0.32 (.017)a |
| Timely and appropriate faculty feedback | −0.54 (<.001)a |
| Satisfactory learning experience on scheduled conferences | −0.25 (.07) |
| Satisfactory learning experience in in-patient rotations | −0.36 (.007)a |
| Sufficient counseling from faculty on career planning | −0.32 (.02)a |
| Adequate degree of responsibility | −0.27 (.046)a |
| Satisfaction with full-time faculty contribution | −0.35 (.01)a |
| Satisfaction with cooperation among residents | 0.08 (.57) |
| Enough personal support from faculty | −0.25 (.07) |
| Satisfaction with instruction received | −0.27 (.05) |
a Indicates statistical significance.
When the 14 variables that were significantly associated with change in depressive symptoms were entered into a stepwise linear regression, timely and appropriate faculty feedback (P = .005), mean duty hours per week (P = .002), learning experience in in-patient rotations (P = .04), and research ranking position (P = .03) remained significant and explained 45.7% of the variance in program-level change in depressive symptoms (Table 3).
Table 3.
Program-Level Predictors of Change in Depressive Symptoms, Unadjusted and Adjusted for Individual-Related Factors, From a Multi-Residency Study of Training Environment and Depression, 2012–2015
| Variable | Unadjusted β | t | P | 95% CI for β | Adjusted β | t | P | 95% CI for β |
|---|---|---|---|---|---|---|---|---|
| Timely and appropriate faculty feedbacka | −0.37 | −2.97 | .005 | −0.62, −0.12 | −0.28 | −2.20 | .03 | −0.53, −0.02 |
| Mean duty hours per week | 0.34 | 3.20 | .002 | 0.13, 0.56 | 0.34 | 3.31 | .002 | 0.13, 0.55 |
| Learning experience in inpatient rotationsb | −0.28 | −2.15 | .04 | −0.54, −0.02 | −0.35 | −2.72 | .009 | −0.61, −0.09 |
| Research ranking positionc | −0.25 | −2.24 | .03 | −0.47, −0.03 | −0.22 | −0.21 | .04 | −0.44, −0.01 |
a Resident questionnaire item “I get timely and appropriate feedback from faculty.”
b Resident questionnaire item “The in-patient ward rotations are generally a good learning experience.”
c Position in the Doximity research output ranking; lower numbers indicate a higher position.
Similarly, multivariable analysis adjusted for individual-level variables previously associated with depression during internship (i.e., female gender, self-reported history of depression, childhood stress, and neuroticism)1 confirmed faculty feedback (P = .03), mean duty hours per week (P = .002), learning experiences during in-patient rotations (P = .009), and research ranking position (P = .04) as significant program-level predictors of change in depressive symptoms (Table 3).
Predictors of change in depressive symptoms across cohorts
To accurately estimate the effect size of program-level predictors of change in depressive symptoms, we used the 2012–2013 cohorts as a training set to identify significant predictors of mean change in depressive symptoms within residency programs, and the 2014–2015 cohorts as a test set to estimate the effect size in an independent dataset.
The regression model for the 2012–2013 training set identified three significant variables (faculty feedback, β = −0.34, P = .011; rotation value, β = −0.42, P = .002; and mean work hours, β = 0.38, P = .001). When this model was applied to predict change in depressive symptoms in the 2014–2015 test set, it explained 20.2% of the variance in program-level change in depressive symptoms (Table 4).
Table 4.
Predictors of Program-Level Change in Depressive Symptoms Across Cohorts, From a Multi-Residency Study of Training Environment and Depression, 2012–2015
| Variable | β | t | P | 95% CI for β |
|---|---|---|---|---|
| Timely and appropriate faculty feedbacka | −0.34 | −2.63 | .011 | −0.60, −0.08 |
| Mean duty hours per week | 0.29 | 2.23 | .030 | 0.03, 0.55 |
a Resident questionnaire item “I get timely and appropriate feedback from faculty.”
Discussion
This study systematically assessed a large set of internal medicine residency programs to identify program-level factors associated with resident depression. We found that rates of depression vary widely across internal medicine residency programs. Importantly, we also found that the rate of depression among interns within a program is relatively consistent across independent cohorts of interns, providing additional evidence that programs play an important role in the development of resident depression.
A lack of timely and appropriate faculty feedback, a negative learning experience during in-patient rotations, increased work hours, and higher institutional research rankings were associated with a greater increase in depressive symptoms during the internship year. Importantly, these findings suggest that the residency program environment plays a central role in the mental health of medical interns. These program-level factors can inform changes to residency programs that may reduce the risk of depression in resident physicians.
The finding that program-level duty hours were associated with resident depressive symptoms complements the findings of prior studies linking individual-level duty hours and resident depression.1,3,12 Residents’ ratings of whether they received timely and appropriate feedback from faculty were the strongest predictor of program-level changes in depressive symptoms. Previous studies have shown associations between appropriate faculty feedback and better performance,30 education,31,32 and lower levels of burnout33 in medical residents. Our findings further suggest that timely and appropriate faculty feedback may help to reduce resident depression. Interventions to promote better faculty feedback are challenging: despite the recognized importance of feedback in medical education,30,31,34 previous studies have identified multiple barriers to effective feedback,35–37 including the fact that trainees and faculty may have different perceptions about the timing, content, and appropriateness of the feedback given and received.38,39 Further studies should investigate the specific characteristics of faculty feedback that are associated with better mental health in resident physicians, so that residency programs can invest in systematic interventions to promote effective faculty feedback.
Poor learning experience during in-patient rotations was also predictive of a greater increase in depressive symptoms from baseline to quarterly assessments. Previous studies have discussed the impact of different models of rotations on resident competency and patient safety.40,41 Given our findings, further examinations could also focus on identifying specific aspects of in-patient rotations associated with better resident mental health and satisfaction with learning experience. Additional studies exploring how to improve teaching quality during in-patient rotations and its impacts on resident depression are also needed.
A higher institutional research ranking position was associated with a greater increase in resident depressive symptoms during internship. There were no associations between participants’ baseline characteristics and Doximity research ranking, suggesting that this association was not driven by individuals predisposed to depression being selected into high-ranking research programs. It is possible that research-intensive institutions impose pressure to meet higher levels of productivity in research as well as in clinical domains, which could increase the risk for depression. Alternatively, Doximity research rankings may be a proxy for other characteristics of residency programs. For instance, research-intensive institutions may have a culture42,43 that values research productivity at the expense of clinical excellence, or the nature and complexity of patients at research-intensive institutions may differ from those at less research-intensive institutions. Since this was the first study to explore associations between research output ranking and program-level change in depressive symptoms, more studies are needed to elucidate the mechanisms underlying these relationships.
The present study has several limitations. First, the wide range in the number of interns included per program (5 to 101) may have introduced selection bias into the study sample. Although there was no significant association between the prevalence of depression at baseline and during the internship year, it is possible that programs with lower response rates had residents with different levels of depressive symptoms than those included in our analysis.
Second, all our assessments were conducted during the internship year; therefore, these findings may not be generalizable to later years of residency training. In addition, because we included only internal medicine programs, our findings may not generalize to other specialties.
Third, considering that most of the programs included in this study were large university-based institutions, generalization of our findings to smaller community-based programs should be made with caution.
Fourth, the self-reported nature of the depression, duty hours, workload, and learning environment assessments constitutes an additional limitation of our study. While the validity and reliability of the PHQ-9 are strong,24,25 its results do not constitute a definitive diagnosis. With regard to program-level duty hours, although previous studies have shown that self-reported work hours match electronic recordings well,44 bias in the number of reported hours could be present in our data. In addition, even though Doximity currently enrolls more than 70% of physicians in the United States,28 the use of its research output data without further evidence of validity and reliability warrants caution in interpreting our findings related to programs’ research rankings.
Fifth, definitive conclusions about causal relationships cannot be drawn from observational studies. For instance, the association between program-level depression and ratings of the program learning environment could be driven by depressed residents reporting lower satisfaction with the characteristics of their programs. However, several analyses from our study strongly suggest that at least part of the identified associations between specific program factors and resident depression is due to residency factors. First, there is large variation between programs in the magnitude of the increase in depressive symptoms. Further, the level of depressive symptoms is relatively stable for a given program across independent cohorts of residents, indicating that the large variation in depressive symptoms between programs can be attributed to program features rather than to the individuals within the program in a given cohort. Finally, individual variables such as history of depression, depressive symptoms, and neuroticism, measured at baseline before subjects were exposed to program environments, did not predict ratings of the residency factors associated with depression (faculty feedback, rotation value, and duty hours). Together, these analyses suggest an important role of residency factors in contributing to residents’ depression.
In summary, this large prospective longitudinal study found that the level of depressive symptoms varies widely among internal medicine residency programs, and that a considerable portion of this variance can be explained by program-level variables: timely and appropriate feedback from faculty, learning experience during in-patient rotations, duty hours, and program research ranking position. These factors are potentially valuable targets for interventions to improve the wellness and mental health of residents. Future studies could take a qualitative approach to identify additional variables that distinguish programs with high and low rates of resident depression. Further studies could also assess whether specific interventions and changes targeting the factors identified here reduce the high rates of depression among resident physicians.
Supplementary Material
Acknowledgments:
The authors acknowledge and thank the interns who took part in this study.
Funding/Support: This research was supported by the National Institute of Mental Health with an R01 grant (MH101459, S. Sen) and a K23 grant (MH095109, S. Sen). K. Pereira-Lima is the recipient of a research fellowship abroad from the São Paulo Research Foundation (FAPESP; grant 2016/13410–0). The funding/support sources had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
Footnotes
Supplemental digital content for this article is available at [LWW INSERT LINK].
Other disclosures: None reported.
Ethical approval: The Intern Health Study protocol was approved by the University of Michigan Institutional Review Board.
Disclaimers: The opinions, results, and conclusions reported in this article are those of the authors and are independent from the funding source.
Contributor Information
Karina Pereira-Lima, Department of Psychiatry, University of Michigan Medical School, Ann Arbor, Michigan; and a Ph.D. candidate, Department of Neuroscience and Behavior, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, São Paulo, Brazil.
Rahael R. Gupta, University of Michigan Medical School, Ann Arbor, Michigan.
Constance Guille, Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina.
Srijan Sen, Molecular and Behavioral Neuroscience Institute, and associate professor, Department of Psychiatry, University of Michigan Medical School, Ann Arbor, Michigan.
References
1. Sen S, Kranzler HR, Krystal JH, et al. A prospective cohort study investigating factors associated with depression during medical internship. Arch Gen Psychiatry. 2010;67:557–565.
2. Mata DA, Ramos MA, Bansal N, et al. Prevalence of depression and depressive symptoms among resident physicians: A systematic review and meta-analysis. JAMA. 2015;314:2373–2383.
3. de Oliveira GS Jr, Chang R, Fitzgerald PC, et al. The prevalence of burnout and depression and their association with adherence to safety and practice standards: A survey of United States anesthesiology trainees. Anesth Analg. 2013;117:182–193.
4. Tyssen R, Vaglum P, Gronvold NT, Ekeberg O. Suicidal ideation among medical students and young physicians: A nationwide and prospective study of prevalence and predictors. J Affect Disord. 2001;64:69–79.
5. Center C, Davis M, Detre T, et al. Confronting depression and suicide in physicians: A consensus statement. JAMA. 2003;289:3161–3166.
6. West CP, Tan AD, Shanafelt TD. Association of resident fatigue and distress with occupational blood and body fluid exposures and motor vehicle incidents. Mayo Clin Proc. 2012;87:1138–1144.
7. West CP, Tan AD, Habermann TM, Sloan JA, Shanafelt TD. Association of resident fatigue and distress with perceived medical errors. JAMA. 2009;302:1294–1300.
8. Fahrenkopf AM, Sectish TC, Barger LK, et al. Rates of medication errors among depressed and burnt out residents: Prospective cohort study. BMJ. 2008;336:488–491.
9. Bellini LM, Baime M, Shea JA. Variation of mood and empathy during internship. JAMA. 2002;287:3143–3146.
10. Guille C, Clark S, Amstadter AB, Sen S. Trajectories of depressive symptoms in response to prolonged stress in medical interns. Acta Psychiatr Scand. 2014;129:109–115.
11. Clark DC, Salazar-Grueso E, Grabler P, Fawcett J. Predictors of depression during the first 6 months of internship. Am J Psychiatry. 1984;141:1095–1098.
12. Fried EI, Nesse RM, Guille C, Sen S. The differential influence of life stress on individual symptoms of depression. Acta Psychiatr Scand. 2015;131:465–471.
13. Grant F, Guille C, Sen S. Well-being and the risk of depression under stress. PLoS One. 2013;8:e67395.
14. Nasca TJ, Weiss KB, Bagian JP. Improving clinical learning environments for tomorrow’s physicians. N Engl J Med. 2014;370:991–993.
15. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: The foundation of graduate medical education. JAMA. 2013;309:1687–1688.
16. Jennings ML, Slavin SJ. Resident wellness matters: Optimizing resident education and wellness through the learning environment. Acad Med. 2015;90:1246–1250.
17. Buddeberg-Fischer B, Klaghofer R, Stamm M, Siegrist J, Buddeberg C. Work stress and reduced health in young physicians: Prospective evidence from Swiss residents. Int Arch Occup Environ Health. 2008;82:31–38.
18. Sakata Y, Wada K, Tsutsumi A, et al. Effort-reward imbalance and depression in Japanese medical residents. J Occup Health. 2008;50:498–504.
19. Li J, Weigl M, Glaser J, Petru R, Siegrist J, Angerer P. Changes in psychosocial work environment and depressive symptoms: A prospective study in junior physicians. Am J Ind Med. 2013;56:1414–1422.
20. Weigl M, Hornung S, Petru R, Glaser J, Angerer P. Depressive symptoms in junior doctors: A follow-up study on work-related determinants. Int Arch Occup Environ Health. 2012;85:559–570.
21. FREIDA online. American Medical Association Fellowship and Residency Electronic Interactive Database. https://freida.ama-assn.org/Freida/#/. Accessed November 10, 2018.
22. Costa PT Jr, McCrae RR. Stability and change in personality assessment: The revised NEO Personality Inventory in the year 2000. J Pers Assess. 1997;68:86–94.
23. Taylor SE, Way BM, Welch WT, Hilmert CJ, Lehman BJ, Eisenberger NI. Early family environment, current adversity, the serotonin transporter promoter polymorphism, and depressive symptomatology. Biol Psychiatry. 2006;60:671–676.
24. Spitzer RL, Kroenke K, Williams JB. Validation and utility of a self-report version of PRIME-MD: The PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire. JAMA. 1999;282:1737–1744.
25. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: Validity of a brief depression severity measure. J Gen Intern Med. 2001;16:606–613.
26. Seelig CB, DuPre CT, Adelman HM. Development and validation of a scaled questionnaire for evaluation of residency programs. South Med J. 1995;88:745–750.
27. Doximity. Residency Navigator Methodology. https://residency.doximity.com/methodology?_remember_me_attempted=yes. Accessed November 10, 2018.
28. Doximity blog. Doximity Reaches Over 70% of U.S. Physicians. February 22, 2017. https://blog.doximity.com/articles/we-re-proud-to-serve-70-of-the-nation-s-physicians. Accessed November 10, 2018.
29. FREIDA. American Medical Association Fellowship and Residency Electronic Interactive Database. Online specialty training search: internal medicine. 2016. https://freida.ama-assn.org/Freida/user/specStatisticsSearch.do?method=viewDetail&pageNumber=2&spcCd=140. Accessed November 10, 2018.
30. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME Guide No. 7. Med Teach. 2006;28:117–128.
31. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–781.
32. Minehart RD, Rudolph J, Pian-Smith MC, Raemer DB. Improving faculty feedback to resident trainees during a simulated case: A randomized, controlled trial of an educational intervention. Anesthesiology. 2014;120:160–171.
33. Ripp J, Babyatsky M, Fallar R, et al. The incidence and predictors of job burnout in first-year internal medicine residents: A five-institution study. Acad Med. 2011;86:1304–1310.
34. Simon SR, Sousa PJ, MacBride SE. The importance of feedback training. Acad Med. 1997;72:1–2.
35. Mitchell JD, Holak EJ, Tran HN, Muret-Wagstaff S, Jones SB, Brzezinski M. Are we closing the gap in faculty development needs for feedback training? J Clin Anesth. 2013;25:560–564.
36. Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA. 2009;302:1330–1331.
37. Mitchell JD, Jones SB. Faculty development in feedback provision. Int Anesthesiol Clin. 2016;54:54–65.
38. Sender Liberman A, Liberman M, Steinert Y, McLeod P, Meterissian S. Surgery residents and attending surgeons have different perceptions of feedback. Med Teach. 2005;27:470–472.
39. Gil DH, Heins M, Jones PB. Perceptions of medical school faculty members and students on clinical clerkship feedback. J Med Educ. 1984;59:856–864.
40. Holmboe E, Ginsburg S, Bernabeo E. The rotational approach to medical education: Time to confront our assumptions? Med Educ. 2011;45:69–80.
41. Napolitano LM, Biester TW, Jurkovich GJ, et al. General surgery resident rotations in surgical critical care, trauma, and burns: What is optimal for residency training? Am J Surg. 2016;212:629–637.
42. Shanafelt TD. Enhancing meaning in work: A prescription for preventing physician burnout and promoting patient-centered care. JAMA. 2009;302:1338–1340.
43. Balch CM, Freischlag JA, Shanafelt TD. Stress and burnout among surgeons: Understanding and managing the syndrome and avoiding the adverse consequences. Arch Surg. 2009;144:371–376.
44. Todd SR, Fahy BN, Paukert JL, Mersinger D, Johnson ML, Bass BL. How accurate are self-reported resident duty hours? J Surg Educ. 2010;67:103–107.
