Key Points
Question
What is the effect of increased attending physician supervision of a resident inpatient team on both patient safety and educational outcomes?
Findings
In this randomized clinical trial of 22 attending physicians each providing 2 different levels of supervision, increased supervision did not significantly reduce the rate of medical errors but did result in interns speaking less and residents reporting a decreased level of autonomy.
Meaning
Residency training programs should have more flexibility in balancing patient safety, resident autonomy, and learner needs.
This randomized clinical trial investigates the effect of increased attending physician supervision on an inpatient resident general medical service on patient safety and educational outcomes.
Abstract
Importance
While the relationship between resident work hours and patient safety has been extensively studied, little research has evaluated the role of attending physician supervision on patient safety.
Objective
To determine the effect of increased attending physician supervision on an inpatient resident general medical service on patient safety and educational outcomes.
Design, Setting, and Participants
This 9-month randomized clinical trial used a crossover design and was performed on an inpatient general medical service of a large academic medical center. Participants were clinical teaching attending physicians and residents in an internal medicine residency program.
Interventions
Twenty-two faculty provided either (1) increased direct supervision in which attending physicians joined work rounds on previously admitted patients or (2) standard supervision in which attending physicians were available but did not join work rounds. Each faculty member participated in both arms in random order.
Main Outcomes and Measures
The primary safety outcome was rate of medical errors. Resident education was evaluated via a time-motion study to assess resident participation on rounds and via surveys to measure resident and attending physician educational ratings.
Results
Of the 22 attending physicians, 8 (36%) were women, with 15 (68%) having more than 5 years of experience. A total of 1259 patients (5772 patient-days) were included in the analysis. The medical error rate was not significantly different between standard vs increased supervision (107.6; 95% CI, 85.8-133.7 vs 91.1; 95% CI, 76.9-104.0 per 1000 patient-days; P = .21). Time-motion analysis of 161 work rounds found no difference in mean length of time spent discussing established patients in the 2 models (202; 95% CI, 192-212 vs 202; 95% CI, 189-215 minutes; P = .99). Interns spoke less when an attending physician joined rounds (64; 95% CI, 60-68 vs 55; 95% CI, 49-60 minutes; P = .008). In surveys, interns reported feeling less efficient (41 [55%] vs 69 [73%]; P = .02) and less autonomous (53 [72%] vs 86 [91%]; P = .001) with an attending physician present, and residents felt less autonomous (11 [58%] vs 30 [97%]; P < .001). Conversely, attending physicians rated the quality of care higher when they participated on work rounds (20 [100%] vs 16 [80%]; P = .04).
Conclusions and Relevance
Increased direct attending physician supervision did not significantly reduce the medical error rate. In designing morning work rounds, residency programs should reconsider their balance of patient safety, learning needs, and resident autonomy.
Trial Registration
ClinicalTrials.gov Identifier: NCT03318198
Introduction
Graduate physician training is based on the concept of progressive independence. As trainees gain experience, they are provided with decreasing levels of clinical supervision; the goal is resident competence to practice independently.1,2 During training, supervision is critical in ensuring patient safety, yet adult learning theory highlights that learning occurs when trainees are challenged to work beyond their comfort level and there is appropriate space between the teacher and trainee.3 Supervision is therefore a complex activity requiring clinical educators to continuously balance their degree of involvement.4 Beginning with the Bell Commission and the Institute of Medicine’s 2008 report on resident duty hours, there have been increased calls for enhancing supervision because of patient safety concerns.5,6(pp125-158) The evidence for increased supervision, however, is not robust.4,7,8 Two meta-analyses on supervision found studies limited by lack of objective measures and nonrandomized designs.8,9 Two intensive care unit (ICU) studies on 24 hours per day, 7 days per week supervision did not demonstrate patient safety benefits.10,11 Editorials have raised concerns about oversupervision and its effect on hindering trainee competence and longer-term patient safety.3,12,13
The Accreditation Council for Graduate Medical Education defines direct supervision as the presence of the supervising physician with the resident and patient. Indirect supervision occurs when the supervising physician is immediately available but not physically present.2 The growth of the hospitalist movement has increased faculty presence on the inpatient wards and, in turn, has increased direct supervision on patient rounds.14 On some services, attending physicians commonly join both new patient rounds and work rounds on previously admitted patients, which used to be the domain of residents alone.15,16 However, it is unclear what effect this increased direct clinical supervision on work rounds has on patient safety and to what extent it affects progressive trainee independence.7,12
In response to these concerns, we conducted a randomized, crossover clinical trial of 2 levels of supervision on an inpatient general medical teaching service to evaluate patient safety and educational outcomes. We hypothesized that increased direct supervision of resident work rounds would improve patient safety and education.
Methods
Study Design
This study was conducted on the general medical teaching service at Massachusetts General Hospital (MGH), an 1100-bed academic medical center in Boston, Massachusetts, with 188 internal medicine residents. The study was completed over 9 months, from September 30, 2015, to June 22, 2016, avoiding the summer months when new residents begin their residency.17 This study was approved by the MGH Institutional Review Board. The protocol is available in Supplement 1.
Participants
We selected and consented participants from a pool of attending physicians who regularly supervise residents on our medical inpatient service. To ensure consistency and expertise, eligible faculty participants were chosen based on superior teaching ratings from residents and a career focus in resident education. As such, they were not intended to broadly represent all attending physicians but rather those well versed in providing supervision.
Intervention
The intervention and control periods were 2 weeks, designed to coincide with the length of an attending physician inpatient rotation. Residents could be on service 2 or 4 weeks and might straddle intervention and control periods, but this was rare. The control arm was standard supervision at MGH, consisting of bedside presentations of newly admitted patients to the attending physician from 8:00 am to 10:00 am. Attending physicians do not join resident work rounds on established patients; instead, they “card flip” and discuss treatment plans for established patients with the supervising resident in the early afternoon. In the intervention arm, attending physicians joined both new patient presentations and resident work rounds 7 days a week, providing direct supervision during work rounds. Attending physician availability and responsibility during the afternoons and evenings were the same in both arms.
Each attending physician participated in one 2-week block with standard supervision (control) and one 2-week block with increased direct supervision on work rounds (intervention), with the order of blocks randomly assigned per attending physician at the start of the study. The participating attending physicians received a 1-hour training session on increased direct supervision with a discussion of expectations for joining work rounds.
Patients
Patients were assigned to teams by the admitting office based on bed availability. Only patients with a study faculty member listed as the attending physician of record were included in the analysis, and all of these patients were cared for by resident teams. If a patient was admitted before the attending physician started the rotation or was discharged after the attending physician finished, only the days the patient was listed under the study attending physician were evaluated for medical errors.
Outcomes
Medical Errors
The primary patient safety outcome was medical errors, defined as preventable failures in the process of care, consisting of preventable adverse events and near misses. An adverse event was defined as medical care that led to patient harm; harm was broadly defined as any measured physiologic disturbance due to medical care. A near miss was a failure in a process of care that did not result in patient harm. Using a previously validated approach to collecting and assessing these outcomes, 5 research nurses, blinded to study arm, reviewed the medical records of all study patients, formal incident reports from the hospital incident-reporting system, daily pharmacy reports, and pharmacy paging logs and solicited reports from nurses working on the study units.18,19,20,21 Four physician investigators, blinded to study arm, classified each incident as an adverse event, near miss, or exclusion. Physician reviewers further classified all adverse events as preventable or nonpreventable. In a sample of 40 events reviewed by all 4 physicians, the κ statistic was 0.79 (82.5% agreement) for event classification and 0.47 for preventability of adverse events, comparable to other studies.20 Discordant classifications were reconciled by discussion among the 4 reviewers. Examples of medical errors are provided in eTables 1 and 2 in Supplement 2. Severity was rated using the modified National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index.22 Two blinded physician reviewers assessed severity with 100% agreement. For secondary outcomes, hospital administrative data were used to collect information on mortality, ICU transfers, and length of stay.
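The article does not state which variant of the κ statistic was used across the 4 physician reviewers. As a purely illustrative sketch, with hypothetical ratings and category codes, a multirater agreement of this kind could be computed with Fleiss’ κ:

```python
# Illustrative only: the study does not specify the kappa variant, so Fleiss' kappa
# across 4 reviewers is assumed here; all ratings and codes are made up.
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are incidents; columns are the 4 physician reviewers.
# Hypothetical codes: 0 = adverse event, 1 = near miss, 2 = exclusion.
ratings = [
    [0, 0, 0, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [2, 2, 2, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
]

counts, _ = aggregate_raters(ratings)  # incidents x categories count table
print(fleiss_kappa(counts))            # chance-corrected agreement across reviewers
```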
Educational Outcomes
For our primary education outcome, we conducted time-motion observations on both the control and increased-supervision teams to measure the length of rounds and the total speaking time of faculty, residents, and interns. An independent observer recorded the duration of rounds and speaking time on an iPad running a Microsoft Access timing program and database, following a previously described protocol.23
For secondary educational outcomes, research nurses collected daily counts of radiology studies obtained and consultations called for every patient, as well as the number of orders written from 7:00 am to 12:00 pm and from 12:01 pm to 5:00 pm. These windows were chosen to capture the periods during and after rounds, on the assumption that residents might change their orders in the afternoon after discussion with the attending physician, especially on the control teams. Residents, interns, and attending physicians were given an online survey at the end of each 2-week rotation to assess perceptions of education and teaching, length of rounds, patient care, decision making, autonomy, and satisfaction. Surveys were designed based on established literature in the field.24
Data Analysis
For each patient-day, the patient’s group assignment was determined by the status of the responsible attending physician on that day. Therefore, patient-days within the same hospitalization could span both the standard-supervision and increased-supervision periods. We compared characteristics of all patients with an attending physician of record during the standard- and increased-supervision periods using 2-sample t tests for continuous variables and χ2 tests for categorical variables. Incidence rates were calculated as the number of events (overall errors, preventable adverse events, and near misses) per 1000 patient-days. We used a generalized estimating equations (GEE) approach to account for clustering of patients within each attending physician and Poisson regression models to compare medical error rates between the 2 groups. As a sensitivity analysis, we compared medical error rates restricted to patients admitted and discharged while under the care of the same attending physician. For secondary patient outcomes, we used a Poisson regression model to compare hospital length of stay and logistic regression models for ICU transfer and discharge disposition. We used 2-sample t tests to compare the duration of rounds and the time spent in each type of activity (attending physician, resident, intern, or patient speaking), and linear regression models to compare the numbers of radiology studies, consultations, and orders written. All analyses were conducted using SAS, version 9.4 (SAS Institute). Assuming an intraclass correlation coefficient of 0.07 for patients clustered within the corresponding attending physician, the study was designed to detect a difference of 110 vs 66 errors per 1000 patient-days between the 2 groups with 80% power at a 2-sided significance level of .05, based on prior published research.21
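As an illustration of this modeling approach (not the authors’ SAS code), the following minimal sketch fits a Poisson GEE with an exchangeable working correlation in Python using statsmodels; all column names and counts are hypothetical, and the data are aggregated to one row per attending physician per arm for brevity:

```python
# Minimal sketch of a Poisson GEE comparing error rates between supervision arms
# while accounting for clustering by attending physician. Hypothetical data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per attending physician per arm: error count and patient-days at risk.
df = pd.DataFrame({
    "attending_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "arm": ["standard", "increased"] * 4,
    "errors": [14, 10, 9, 8, 12, 11, 16, 12],
    "patient_days": [120, 110, 95, 100, 130, 125, 140, 118],
})

# Poisson GEE with an exchangeable working correlation; `exposure` puts the
# estimates on a per-patient-day rate scale (log link).
model = smf.gee(
    "errors ~ C(arm, Treatment(reference='standard'))",
    groups="attending_id",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
    exposure=df["patient_days"],
)
result = model.fit()

# exp(coefficient) for the arm term is the rate ratio of increased vs standard supervision.
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```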
Results
Study Patients and Attending Physicians
Twenty-two of 24 eligible invited attending physicians participated in the study over a total of 44 2-week teaching blocks. Of the 22 faculty in the study, 8 (36%) were women. Faculty had a wide range of experience, with 7 (32%) having less than 5 years and 7 (32%) having more than 15 years of experience. For 77% of the attending physicians, clinical work was solely inpatient care.
During the study period, the participating attending physicians were assigned a total of 1259 patients (5772 patient-days); 44 patients spanned both periods, resulting in 666 standard-supervision and 637 increased-supervision hospitalizations. Patient-days attributed to each attending physician ranged from 75 to 232, with a similar distribution between the 2 groups, and the median difference in patient-days between the 2 periods for the same attending physician was 18 days (interquartile range [IQR], 8-43 days). The distribution of age, sex, race, insurance, and medical complexity did not differ significantly between the 2 groups (Table 1).
Table 1. Patient Characteristics.
| Characteristic | Control Admissions (n = 666) | Intervention Admissions (n = 637) | P Value |
|---|---|---|---|
| Age, mean (SD), y | 61.1 (19.0) | 60.9 (18.7) | .89 |
| Female sex, No. (%) | 301 (45.2) | 283 (44.4) | .78 |
| Race, No. (%) | | | |
| White non-Hispanic | 510 (76.6) | 490 (76.9) | >.99 |
| Black non-Hispanic | 61 (9.2) | 59 (9.3) | |
| Hispanic | 7 (1.1) | 7 (1.1) | |
| Unknown/other | 88 (13.2) | 81 (12.7) | |
| Charlson comorbidity index, mean (SD) | 2.0 (2.4) | 2.3 (2.9) | .06 |
| Insurance, No. (%) | | | |
| Medicare | 330 (49.5) | 313 (49.1) | .71 |
| Medicaid | 207 (31.1) | 211 (33.1) | |
| Private | 123 (18.5) | 110 (17.3) | |
Medical Errors and Adverse Events
The overall medical error rate was 107.6 per 1000 patient-days in the standard-supervision group vs 91.1 per 1000 patient-days in the increased-supervision group (15% relative reduction; 95% CI, −9% to 36%; P = .21) (Table 2). There was no statistically significant difference in the rate of preventable adverse events (80.0 vs 70.9 per 1000 patient-days; P = .36) or near misses (27.6 vs 20.2 per 1000 patient-days; P = .21). In a subgroup analysis restricted to patients admitted and discharged under the care of the same attending physician, results were similar, as they were in a subgroup analysis comparing attending physicians by years of experience. In categorizing severity of harm of adverse events using NCC MERP severity categories E through H, 216 (88.5%) standard-supervision events and 171 (88.6%) increased-supervision events involved only minor harm (category E) (P = .50).
Table 2. Incidence of Medical Errors and Preventable Adverse Events.
| Event Type | Control | Intervention | Rate Ratio (95% CI) | P Value |
|---|---|---|---|---|
| No. (rate per 1000 patient-days) | (n = 3049 patient-days) | (n = 2723 patient-days) | | |
| Overall medical errors | 328 (107.6) | 248 (91.1) | 0.85 (0.64-1.09) | .21 |
| Preventable adverse events | 244 (80.0) | 193 (70.9) | 0.89 (0.67-1.15) | .36 |
| Near misses | 84 (27.6) | 55 (20.2) | 0.73 (0.44-1.18) | .21 |
| Medical severity,a No. (%) | (n = 244 events) | (n = 193 events) | | |
| E | 216 (88.5) | 171 (88.6) | … | .50 |
| F | 27 (11.1) | 21 (10.9) | … | |
| G | 0 | 1 (0.5) | … | |
| H | 1 (0.4) | 0 | … | |
Ellipses indicate not calculated.
a Level E indicates temporary harm to the patient requiring intervention; level F, temporary harm requiring prolonged hospitalization; level G, permanent harm; and level H, intervention required to sustain life.
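As a quick arithmetic check of the rates and rate ratio reported in Table 2 (not part of the original analysis):

```python
# Crude rates per 1000 patient-days and their ratio, from the counts in Table 2.
standard = 328 / 3049 * 1000    # ≈ 107.6 errors per 1000 patient-days (control)
increased = 248 / 2723 * 1000   # ≈ 91.1 errors per 1000 patient-days (intervention)
rate_ratio = increased / standard
# Prints 107.6, 91.1, 0.85, i.e., a ~15% relative reduction in the crude rates.
print(round(standard, 1), round(increased, 1), round(rate_ratio, 2))
```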
Secondary Patient Outcomes
There was no significant difference in standard vs increased supervision in length of stay (median, 6.0 [IQR, 4.0-11.0] vs 6.0 [IQR, 3.0-11.0] days; P = .93), transfers to the ICU (88 [13.2%] vs 101 [15.9%]; P = .22), deaths (17 [2.6%] vs 17 [2.7%]; P = .84), or discharge disposition (281 [42.2%] vs 270 [42.4%] discharged home; P = .85) (Table 3).
Table 3. Patient Outcomes.
| Outcome | Control Hospitalizations (n = 666) | Intervention Hospitalizations (n = 637) | P Value |
|---|---|---|---|
| Length of stay, median (IQR), d | 6.0 (4.0-11.0) | 6.0 (3.0-11.0) | .93 |
| Deaths, No. (%) | 17 (2.6) | 17 (2.7) | .84 |
| Transfer to intensive care unit, No. (%) | 88 (13.2) | 101 (15.9) | .22 |
| Discharge disposition, No. (%) | | | |
| Home | 281 (42.2) | 270 (42.4) | .85 |
| Home with services | 149 (22.4) | 140 (22.0) | |
| Facility | 197 (29.6) | 189 (29.7) | |
| Against medical advice | 18 (2.7) | 14 (2.2) | |
Educational Outcomes
Mean total duration of work rounds did not differ between standard and increased supervision (202 [95% CI, 192-212] vs 202 [95% CI, 189-215] minutes; P = .99). New-admission bedside presentations also had similar mean durations (105 [95% CI, 94-116] vs 106 [95% CI, 94-119] minutes; P = .87). On work rounds, junior residents spoke for similar mean lengths of time during standard vs increased supervision (58 [95% CI, 54-63] vs 57 [95% CI, 52-61] minutes; P = .58). However, interns spoke for more time in the standard arm (64 [95% CI, 60-68] vs 55 [95% CI, 49-60] minutes; P = .008). Patients and families spoke for the same mean amount of time on work rounds regardless of the level of supervision (13 [95% CI, 12-14] vs 13 [95% CI, 11-14] minutes; P = .80) (Table 4).
Table 4. Educational Outcomes (Time Motion Results and Orders).
| Outcome | Control | Intervention | P Value |
|---|---|---|---|
| Work rounds, mean (95% CI), min | (n = 88) | (n = 73) | |
| Duration | 202 (192-212) | 202 (189-215) | .99 |
| Speaking time: resident | 58 (54-63) | 57 (52-61) | .58 |
| Speaking time: intern | 64 (60-68) | 55 (49-60) | .008 |
| Speaking time: patient | 13 (12-14) | 13 (11-14) | .80 |
| Speaking time: attending physician | 2 (1-3) | 13 (11-15) | .02 |
| Patient rounds, mean (95% CI), min | (n = 82) | (n = 62) | |
| Duration | 105 (94-116) | 106 (94-119) | .87 |
| Orders per patient-day, mean (95% CI) | (n = 3049 patient-days) | (n = 2723 patient-days) | |
| Radiology studies | 0.39 (0.36-0.43) | 0.41 (0.38-0.44) | .75 |
| Consultations | 0.78 (0.75-0.82) | 0.87 (0.83-0.91) | .28 |
| Orders written 7:00 am to 12:00 pm | 4.41 (4.19-4.63) | 5.35 (5.08-5.61) | .10 |
| Orders written 12:01 pm to 5:00 pm | 3.98 (3.77-4.18) | 5.13 (4.83-5.44) | .09 |
Residents and interns ordered a similar mean number of radiology studies (0.39 [95% CI, 0.36-0.43] vs 0.41 [95% CI, 0.38-0.44] per patient-day; P = .75) and consultations (0.78 [95% CI, 0.75-0.82] vs 0.87 [95% CI, 0.83-0.91] per patient-day; P = .28) regardless of study arm. Trainees placed slightly more orders on the intervention teams than on the control teams both between 7:00 am and 12:00 pm (4.41 [95% CI, 4.19-4.63] vs 5.35 [95% CI, 5.08-5.61] orders per patient-day; P = .10) and between 12:01 pm and 5:00 pm (3.98 [95% CI, 3.77-4.18] vs 5.13 [95% CI, 4.83-5.44] orders per patient-day; P = .09), but the differences were not significant (Table 4).
In surveys, residents and interns reported that when an attending physician joined work rounds they were less efficient, felt less autonomous, and had less ability to make independent decisions. Without the attending physician on work rounds, residents believed that they were the team’s leader and their comfort in making independent patient care decisions improved. Similarly, in the control arm interns believed that they received more feedback on their decision making and supervision was “just right.” Residents in both the control and intervention arms believed that they provided the same quality of care and rated the learning environments similarly. Conversely, attending physicians believed that they knew the team’s plan of care better, rated the quality of care higher, and felt more satisfied with the care provided when they participated on work rounds. Attending physicians believed that the educational experience was the same in both arms (Table 5).
Table 5. Resident, Intern, and Attending Physician Survey Responsesa.
| Top 2 Boxes Responseb | Control, No. (%) | Intervention, No. (%) | P Value |
|---|---|---|---|
| Residents—work rounds | n = 31 | n = 19 | |
| Work rounds were extremely efficient | 20 (65) | 7 (37) | .06 |
| Level of autonomy was just rightc | 30 (97) | 11 (58) | <.001 |
| I felt I was leader of the team | 30 (97) | 11 (58) | <.001 |
| I had little fear of being judged | 25 (81) | 13 (68) | .33 |
| My comfort in making independent patient care decisions improved | 31 (100) | 12 (63) | <.001 |
| Rounds were an educational experience | 24 (77) | 11 (58) | .14 |
| We provided outstanding quality of care | 29 (94) | 16 (84) | .29 |
| Interns—work rounds | n = 95 | n = 74 | |
| Work rounds were extremely efficient | 69 (73) | 41 (55) | .02 |
| Level of autonomy was just rightc | 86 (91) | 53 (72) | .001 |
| Level of supervision was just right | 87 (94) | 53 (72) | <.001 |
| I received feedback that improved my decision making | 70 (75) | 38 (51) | .001 |
| I had little fear of being judged | 80 (84) | 58 (78) | .33 |
| Rounds were an educational experience | 75 (79) | 52 (70) | .20 |
| We provided outstanding quality of care | 90 (95) | 68 (92) | .46 |
| Attending physicians | n = 20 | n = 20 | |
| I knew my team’s plan of care | 12 (60) | 20 (100) | .002 |
| Rounds were an educational experience | 16 (80) | 16 (80) | .99 |
| We provided outstanding quality of care | 16 (80) | 20 (100) | .04 |
| Work-life balance was poor | 4 (20) | 9 (45) | .09 |
a Survey response for attending physicians was 20 of 22 (91%) for both control and intervention; for junior residents, 31 of 31 (100%) on the control teams and 19 of 24 (79%) on the intervention teams; and for interns, 95 of 105 (90%) on the control teams and 74 of 85 (87%) on the intervention teams. The numbers of resident and intern participants differed between the control and intervention teams because of variation in team size (occasionally there were 2 junior residents or 1 fewer intern) and because 2 intervention attending physicians several times served together on the same team.
b For survey data, we compared the percentage of respondents choosing the top 2 boxes (“strongly agree” or “agree”) of a 5-point scale from the intern, resident, and attending physician surveys using χ2 tests.
c Level of autonomy “just right” was determined by a single box (the middle option of a 5-point scale).
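As an illustration of the top-2-box comparison described in these footnotes, the following sketch reproduces one χ2 test using the intern “work rounds were extremely efficient” counts from Table 5; whether the original analysis applied a continuity correction is not stated, so none is used here:

```python
# Hedged illustration: 2x2 chi-square test on top-2-box counts from Table 5
# (interns, "work rounds were extremely efficient": 69/95 control vs 41/74 intervention).
from scipy.stats import chi2_contingency

table = [
    [69, 95 - 69],  # control: top-2-box responses, other responses
    [41, 74 - 41],  # intervention: top-2-box responses, other responses
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # P is approximately .02, consistent with Table 5
```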
Discussion
We found that increasing the level of supervision during resident work rounds did not produce a statistically significant reduction in medical errors. While there was a 15.3% reduction in errors in the intervention arm compared with the control arm, this reduction is far less than the 23% to 46% reductions seen in the intervention arms of other safety studies using a similar methodology.20,21,25 Furthermore, analysis of the types of errors detected in our study suggests that this reduction is not clinically meaningful: 88.5% of detected preventable adverse events were level E (minor), and secondary patient safety measures, including length of stay, ICU transfers, and mortality, were similar. Our target of a 40% reduction in errors for the power calculation, while a relatively large effect size, was chosen to ensure detection of a clinically significant reduction in errors, because it was anticipated that most errors detected would be minor.
The current literature on supervision and patient safety consists of studies with mixed results.9,14,24,26,27,28,29,30 A retrospective cohort study of nearly 40 000 surgical cases found no difference in mortality between direct supervision in the operating room and the surgeon simply being available.31 Yet a smaller retrospective medical record review of 4417 cases found reduced complications and lower mortality rates when attending physicians were present or scrubbed in the operating room.32
It is not clear whether a similar level of supervision is required for procedural skills vs cognitive decision making. In the outpatient setting, Gennis and Gennis33 found that preceptors who directly saw patients judged them to be more severely ill than residents did and made major diagnostic changes in 5.5% of the patients, but no patient safety outcomes were studied. Patient safety outcomes were evaluated in ICU studies after higher death rates on weekends and at night prompted calls for 24 hours a day, 7 days a week intensivist coverage.34 Kerlin et al10 randomized ICU staffing to nighttime in-house intensivist coverage (direct supervision) vs availability by telephone and found no difference in length of stay or ICU mortality. A retrospective cohort study of 65 000 ICU patients also found no difference in mortality with the addition of overnight intensivists.35 Our study provides further evidence that increased supervision may not increase patient safety.
Published editorials have questioned whether increased supervision is educationally beneficial or instead reduces autonomy, leading to less competent residents.1,12,36 Some survey studies of residents suggest that increased supervision may improve education,1,14,26,29,30,37,38 although others do not.14,38,39 Our study indicates that increased supervision may have negative consequences for resident education and autonomy: interns spoke less, and residents reported less comfort making independent decisions, with an attending physician on work rounds. Studies of learners note that they worry about exposing gaps in their knowledge in front of attending physicians, especially because the same attending physicians often also evaluate them.29,40,41,42 Interns may feel more comfortable asking questions of a peer than of an attending physician.1,41 Educational theory supports peer collaborative learning, in which trainees learn from each other rather than from teachers.3 There are certainly educational reasons for attending physicians to join work rounds, including observation and feedback at the bedside; however, our work suggests that multiple factors need to be weighed when deciding when an attending physician should be present on work rounds, including patient safety, peer-to-peer education, and resident autonomy.
Limitations
This was a single-center study at a large academic residency program with a culture that emphasizes resident autonomy and, as such, may have limited generalizability. However, all residency programs struggle with the balance between resident autonomy and supervision. It is possible that faculty not skilled in creating a collaborative teaching environment could limit intern speaking time, a phenomenon that additional faculty training might mitigate. The inability to mask the level of supervision could have affected residents’ behavior and increased their vigilance; however, given the duration of the study and the intensity of the workload, we believe that it would have been difficult for participants to change their behavior in a sustained enough manner to bias the results. We defined harm broadly, as any physiologic change, which may have contributed to the difference in our proportion of preventable adverse events and near misses compared with other studies. However, we would not expect this to affect differences in error rates between the intervention and control arms of the study. Last, while we did observe a slightly lower rate of errors in the increased-supervision arm, the difference was not statistically significant. The methodology used to detect errors does not have perfect sensitivity and could have missed relevant medical errors, reducing our power to detect a statistically significant reduction. Arguing against this, our study detected nearly twice as many errors as a key study measuring harm with similar methodology.43 We cannot rule out the possibility that a much larger study would demonstrate a difference in the rate of medical errors, but at most, any difference would be modest.
Conclusions
Attending physician participation on work rounds was not associated with a reduction in the rate of medical errors, adding to the body of literature suggesting that increased attending physician supervision does not necessarily improve patient safety. Conversely, our data suggest that a larger attending physician presence may have negative consequences for resident education: interns spoke less, and residents felt less empowered to make independent medical decisions. In contrast, attending physicians rated the quality of care higher when they participated on work rounds, which may be why more attending physicians are joining resident work rounds. Given the importance of graduated autonomy to adult learning and the value of peer learning, decisions about the level of supervision should consider the need for distance between teacher and student for learning to occur. The results of this study suggest that residency training programs should reconsider the appropriate level of attending physician supervision when designing morning rounds, balancing patient safety, excellent care, learner needs, and resident autonomy.
References
- 1. Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: a tradition worth defending? Acad Med. 2005;80(10)(suppl):S106-S111.
- 2. Nasca TJ, Day SH, Amis ES Jr; ACGME Duty Hour Task Force. The new recommendations on duty hours from the ACGME Task Force. N Engl J Med. 2010;363(2):e3.
- 3. Kennedy TJ. Towards a tighter link between supervision and trainee ability. Med Educ. 2009;43(12):1126-1128.
- 4. Cottrell D, Kilminster S, Jolly B, Grant J. What is effective supervision and how does it happen? a critical incident study. Med Educ. 2002;36(11):1042-1049.
- 5. Bell BM. Supervision, not regulation of hours, is the key to improving the quality of patient care. JAMA. 1993;269(3):403-404.
- 6. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: Institute of Medicine; 2008.
- 7. Kennedy TJ, Lingard L, Baker GR, Kitchen L, Regehr G. Clinical oversight: conceptualizing the relationship between supervision and safety. J Gen Intern Med. 2007;22(8):1080-1085.
- 8. Kilminster SM, Jolly BC. Effective supervision in clinical practice settings: a literature review. Med Educ. 2000;34(10):827-840.
- 9. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87(4):428-442.
- 10. Kerlin MP, Small DS, Cooney E, et al. A randomized trial of nighttime physician staffing in an intensive care unit. N Engl J Med. 2013;368(23):2201-2209.
- 11. Reriani M, Biehl M, Sloan JA, Malinchoc M, Gajic O. Effect of 24-hour mandatory vs on-demand critical care specialist presence on long-term survival and quality of life of critically ill patients in the intensive care unit of a teaching hospital. J Crit Care. 2012;27(4):421.e1-421.e7.
- 12. Halpern SD, Detsky AS. Graded autonomy in medical education—managing things that go bump in the night. N Engl J Med. 2014;370(12):1086-1089.
- 13. Hinchey KT, Rothberg MB. Can residents learn to be good doctors without harming patients? J Gen Intern Med. 2010;25(8):760-761.
- 14. Landrigan CP, Muret-Wagstaff S, Chiang VW, Nigrin DJ, Goldmann DA, Finkelstein JA. Effect of a pediatric hospitalist system on housestaff education and experience. Arch Pediatr Adolesc Med. 2002;156(9):877-883.
- 15. Saint S, Fowler KE, Krein SL, et al. An academic hospitalist model to improve healthcare worker communication and learner education: results from a quasi-experimental study at a Veterans Affairs medical center. J Hosp Med. 2013;8(12):702-710.
- 16. Hauer KE, Irby DM. Effective clinical teaching in the inpatient setting. In: Wachter R, Goldman L, Hollander H, eds. Hospital Medicine. 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2005:71-78.
- 17. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309-315.
- 18. Bates DW, Cullen DJ, Laird N, et al; ADE Prevention Study Group. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA. 1995;274(1):29-34.
- 19. Kaushal R. Using chart review to screen for medication errors and adverse drug events. Am J Health Syst Pharm. 2002;59(23):2323-2325.
- 20. Starmer AJ, Spector ND, Srivastava R, et al; I-PASS Study Group. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.
- 21. Starmer AJ, Sectish TC, Simon DW, et al. Rates of medical errors and preventable adverse events among hospitalized children following implementation of a resident handoff bundle. JAMA. 2013;310(21):2262-2270.
- 22. National Coordinating Council for Medication Error Reporting and Prevention website. http://www.nccmerp.org/types-medication-errors. Accessed June 9, 2017.
- 23. Huang KT, Minahan J, Brita-Rossi P, et al. All together now: impact of a regionalization and bedside rounding initiative on the efficiency and inclusiveness of clinical rounds. J Hosp Med. 2017;12(3):150-156.
- 24. Biondi EA, Varade WS, Garfunkel LC, et al. Discordance between resident and faculty perceptions of resident autonomy: can self-determination theory help interpret differences and guide strategies for bridging the divide? Acad Med. 2015;90(4):462-471.
- 25. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(suppl 2):ii39-ii45.
- 26. Baldwin DC Jr, Daugherty SR, Ryan PM. How residents view their clinical supervision: a reanalysis of classic national survey data. J Grad Med Educ. 2010;2(1):37-45.
- 27. Defilippis AP, Tellez I, Winawer N, Di Francesco L, Manning KD, Kripalani S. On-site night float by attending physicians: a model to improve resident education and patient care. J Grad Med Educ. 2010;2(1):57-61.
- 28. Farnan JM, Burger A, Boonyasai RT, et al; SGIM Housestaff Oversight Subcommittee. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7(7):521-523.
- 29. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7(8):606-610.
- 30. Phy MP, Offord KP, Manning DM, Bundrick JB, Huddleston JM. Increased faculty presence on inpatient teaching services. Mayo Clin Proc. 2004;79(3):332-336.
- 31. Itani KM, DePalma RG, Schifftner T, et al. Surgical resident supervision in the operating room and outcomes of care in Veterans Affairs hospitals. Am J Surg. 2005;190(5):725-731.
- 32. Fallon WF Jr, Wears RL, Tepas JJ III. Resident supervision in the operating room: does this impact on outcome? J Trauma. 1993;35(4):556-560.
- 33. Gennis VM, Gennis MA. Supervision in the outpatient clinic: effects on teaching and patient care. J Gen Intern Med. 1993;8(7):378-380.
- 34. Burnham EL, Moss M, Geraci MW. The case for 24/7 in-house intensivist coverage. Am J Respir Crit Care Med. 2010;181(11):1159-1160.
- 35. Wallace DJ, Angus DC, Barnato AE, Kramer AA, Kahn JM. Nighttime intensivist staffing and mortality among critically ill patients. N Engl J Med. 2012;366(22):2093-2101.
- 36. Kerlin MP, Halpern SD. Twenty-four-hour intensivist staffing in teaching hospitals: tensions between safety today and safety tomorrow. Chest. 2012;141(5):1315-1320.
- 37. Busari JO, Weggelaar NM, Knottnerus AC, Greidanus PM, Scherpbier AJ. How medical residents perceive the quality of supervision provided by attending doctors in the clinical setting. Med Educ. 2005;39(7):696-703.
- 38. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. On-call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784-788.
- 39. Stevermer JJ, Stiffman MN. The effect of the teaching physician rule on residency education. Fam Med. 2001;33(2):104-110.
- 40. Farnan JM, Humphrey HJ, Arora V. Supervision: a 2-way street. Arch Intern Med. 2008;168(10):1117.
- 41. Farnan JM, Johnson JK, Meltzer DO, Humphrey HJ, Arora VM. Resident uncertainty in clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122-126.
- 42. Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: grounded theory study of medical trainees’ requests for clinical support. BMJ. 2009;338:b128.
- 43. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363(22):2124-2134.