Abstract
Background: In 2001, graduate medical education in the United States was renovated to better complement 21st century developments in American medicine, society, and culture. As in 1910, when Abraham Flexner was charged to address a relatively non-standardized system that lacked accountability and threatened the credibility of the profession, Dr. David Leach led the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project in a process that has substantially changed medical pedagogy in the United States.
Methods: Brief review of the Flexner Report of 1910 and 6 hours of interviews with leaders of the Outcome Project (4 hours with Dr. David Leach and 1-hour interviews with Drs. Paul Batalden and Susan Swing).
Results: Medical educational leaders and the ACGME concluded in the late 1990s that medical education was not preparing clinicians sufficiently for lifelong learning in the 21st century. A confluence of medical, social, and historic factors required definitions and a common vocabulary for teaching and evaluating medical competency. After a deliberate consensus-driven process, the ACGME and its leaders produced a system requiring greater accountability of learners and teachers, in six explicitly defined domains of medical “competence.” While imperfect, this construct has started to take hold, creating a common vocabulary for longitudinal learning, from undergraduate to post-graduate (residency) education and in the assessment of performance following graduate training.
Keywords: medical education, Outcome Project, ACGME, Flexner
In 1910, Abraham Flexner examined the chaotic state of undergraduate medical education and offered a transformative standardization of admission requirements and educational pedagogy [1]. While inculcation of medical knowledge and patient care were central issues of concern, medical professionalism and public faith had been eroded substantially by inconsistent standards of training. By the late 1990s, medical educators became concerned that the post-graduate apprenticeship system that complemented the Flexner/Hopkins model had become antiquated. Patients and payers demanded greater accountability and more transparent, reproducible metrics of educational achievement to complement the needs of the health care system. This manuscript briefly describes Flexner’s revolution in undergraduate medical education and presents firsthand accounts of Drs. David Leach, Susan Swing, and Paul Batalden, who led similarly transformative renovations of graduate medical education (GME) in the new millennium.
Abraham Flexner's First American (Medical) Revolution
The social and medical history leading to and the impact of the Flexner Report is detailed elsewhere [2-5]. In 1904, the American Medical Association created the Council on Medical Education (CME), charged with addressing heterogeneity in American medical schools. While the CME concluded that medical education, formerly highly didactic, should become more experiential, there was no catalog of practices in 155 North American schools. The CME charged the Carnegie Foundation with investigating the state of medical education and distilling best practices to create uniform standards. Abraham Flexner, a scholar and schoolteacher, came to the attention of the Carnegie Foundation as a result of his work, The American College, a critique of undergraduate education. In 1908, he was chosen to lead the project.
But why was such a report needed? Prior to 1910, there were no uniform standards of medical education. Each of the 155 medical schools in North America functioned without external educational guidelines or regulations. Entrance requirements varied. Only 16 of 155 medical schools required 2 years of university training prior to medical school. Some “schools” were simple apprenticeships, others were proprietary (i.e., courses taught by physicians who owned the school), and others were based in universities. Many included excessive didactics and variable experiential learning. Medical education needed greater standardization and oversight to maintain the public’s trust.
Flexner tackled his assignment by visiting all 155 medical schools, summarizing his findings in a full-length book [1]. He found medical education at Johns Hopkins to be the most rigorous, and many of his suggestions [1,3-5] revolved around the Hopkins model, which included 2 years of classroom/laboratory science and 2 years of clinical training followed by an internship and specialty training, if desired. Hopkins “rounds” (so-called because of the shape of the hospital) centered on senior clinicians’ bedside teaching of 3rd- and 4th-year students and post-graduate trainees. (Post-grads were called residents or house-officers because they often lived in the hospital or adjoining complex and might spend a hundred or more hours each week learning and caring for patients.) Flexner also concluded that there were too many schools. As a result, half closed between 1910 and 1935, and those remaining were predominantly university-based, allopathic schools. He recommended a minimum of 2 years of college, including biology, chemistry, and physics, as prerequisites for medical school admission.
As the art and science of medicine advanced, so, too, did the duration of training. Longer, specialty-specific GME residency training was required to practice the core principles acquired in medical school and to accumulate greater experience and knowledge in the apprenticeship model. By the mid-1980s, the relatively unregulated apprenticeship model began to buckle. The art and science of medicine required mastery of ever-expanding information, just as medical academicians moved en masse from bedside to laboratory. The doctor-patient relationship had changed fundamentally from one of paternalism (before 1970) to increasingly shared decision-making and respect for patients’ autonomy after revelations at Tuskegee catalyzed the Belmont Report of 1979 [6-9]. Patient “empowerment” reflected and was amplified by social movements: civil rights, women’s rights, and gay rights [10]. Patients developed greater hunger for medical information (consider Benjamin Spock’s information revolution as an example) that increased exponentially with web-based resources. When interviewed for this paper, Dr. Paul Batalden summarized: “When I started in 1975, it was not uncommon for patients to say thank you in some way. By the mid-80s, it was much more common for them to ask ‘why?’ in one way or another, seeking further explanation about my recommendations.” And in this social cauldron, where patients increasingly asked for answers and accountability, groups began to form in the late 1980s to examine whether the medical system should be more accountable to the citizens who largely underwrite health care. Adverse events in American hospitals captured public attention [11-13] and catalyzed the movement. A nascent community of academicians and other stakeholders began to meet, cogitate, and publish [14], giving rise to Institute of Medicine reports and other national medical quality/safety initiatives.
By the mid-1990s, the ACGME also embraced this idea of greater accountability ― of more objective measures of taxpayer-funded graduate medical education. Fundamental changes were required.
The Change Agent
David Leach was born in 1943 in Elmira, N.Y. A strong student in high school, he was neither a leader nor a star athlete. He matriculated at Saint Michael's College (University of Toronto), where he pondered postgraduate studies in philosophy before concluding that medicine would be more satisfying. He attended medical school at the University of Rochester (1969) and then completed his Internal Medicine residency, chief residency, and fellowship in Endocrinology at the Henry Ford Hospital. In the 1980s, having “eschewed academic medicine” since medical school, he accepted a faculty position at the Henry Ford Hospital — “because I could take care of really sick people” ― in the early phases of the AIDS epidemic. He chose to take 6 months of wards medicine, immersing himself in patient care and medical education in the classic Hopkins model. “I really felt like a doctor,” he recalled. Caring for young dying patients was “where I belonged.”
Leach’s Philosophy of Medical Education
Immersed in his work to marry clinical care and education, Dr. Leach contemplated features of ideal medical education and mentorship. He created tutorial sessions for University of Michigan medical students that complemented patient care team rounds and began to conceptualize medical education as a process of character formation: the “proper role of the educator (is) to create an ecology that supports life.” The theme of “organic” education permeates Dr. Leach’s educational philosophy. Doctors, like all humans, resist change, but faced with the inevitability of a rapidly changing medical system — what Leach refers to as “form” — physicians must be trained to embrace change and frequently recalibrate their “mental models.” Medical knowledge advances quickly and systems of care change every few years, so well-trained physicians must learn and practice adaptability. Just as a liberal arts education is conceived of as teaching a young person to think, his idealized medical education provides the learner with a vocabulary and skills that are armaments for the more substantive journey of lifelong learning.
A central tenet of Leach’s philosophy is that “competence is a habit,” where certain behaviors — especially the core behavior of perpetual self-improvement ― become automatic. Seeking competence across the various domains of medical practice is strongly linked to the character of the individual, a willingness to embrace habitual self-improvement as an end-in-itself, and accumulation and mastery of specific competences as the means to satisfy obligations to self and society. Leach and colleagues evoke Aristotle [15] — “We become just by the practice of just actions; self-controlled by exercising self-control” ― and James Q. Wilson’s classic treatise, “Virtue is not learned by precept, however; it is learned by the regular repetition of right actions. We are induced to do the right thing with respect to small matters, and in time we persist in doing the right thing because now we have come to take pleasure in it” [16]. The virtuous physician is one who humbly embraces this journey of character formation, acquiring, sharpening, and applying the tools (i.e., competencies) of service through perpetual practice.
Drs. Leach, Batalden, and Swing credit the work of Dreyfus and Dreyfus, who developed a model to describe pilots’ skill development [17], work that previously had been applied effectively in nursing education [18,19]. Learners advance from novice (pre-meds) to advanced beginner (medical school graduates), competent (resident), proficient, and expert. When faced with overwhelming new information, novices often apply rules to begin the sorting process, organizing their own experiential reservoir and problem-solving modus operandi. The goal of medical learning, however, is not accumulation of an ever-expanding rulebook, but rather to acquire knowledge and master its synthesis through “test-driving” — deriving affirmation (in the form of teachers’ approval and/or successful outcomes for patients) when decision-making is “right” and disappointment when decision-making yields a “wrong” result. The emotive content of the transaction (i.e., to “feel good” in helping patients) anchors character development and permits simple competence to blossom first into proficiency (i.e., doing it well most of the time) and then mastery (i.e., doing it extremely well, consistently, and habitually). Leach asserts that negative emotions (i.e., to “feel bad” after making a mistake) can also provide a powerful learning opportunity, especially when teachers deploy constructive, non-accusatory engagement. For example, “we as humans are vulnerable to errors — we all will make mistakes, so don’t beat yourself up ... unless you make the same mistakes over and over.” The complexity of any given patient’s presentation or learning point is not a simple “if A, then B.” Rather, new observations and the associated emotions are contextualized, serially modifying one’s prior understanding of how to best apply knowledge. Dr. Leach emphasizes that context is equally important at “the macro level,” insofar as the clinician makes these decisions in complex organizations that themselves potentiate or impede both divining a right/best course and administering those prescriptions [20,21].
Putting the Philosophy into Educational Practice
In 1997, after concluding that graduate medical apprenticeship would benefit from 21st century refurbishing, the ACGME tapped David Leach to lead renovations of GME. This initiative, which became the Outcome Project, would shift the focus of GME accreditation from educational processes (e.g., lectures, conferences, rounds) to measurable outcomes of programs and learners. Programs would be encouraged to innovate, craft local solutions, and make the system more accountable to the public.
Emulating Flexner’s approach, Dr. Leach contacted many educational leaders, seeking ideas and engaging in “thousands of conversations to understand the organic nature of medical competence.” ACGME staff, led by Susan Swing, reviewed more than 2,500 articles on physician competency and distilled the contents to 84 competencies. Then hundreds of stakeholders, including residency review committee members, experts in adult education, program directors, resident trainees, and laypersons, reviewed and rank-ordered both the importance and feasibility of the initial 84 competencies [22]. An advisory panel of educators, initially led by Paul Batalden, then deployed approaches from the educational sciences [23] to further distill the information to six competencies: medical knowledge, patient care, professionalism, communication, practice-based learning, and systems-based practice. Practice-based learning and improvement was the moniker for harnessing new methods and tools of medical information to provide care while learning from one’s experiences and errors. Systems-based practice involves learning and mastering use of systems resources to promote recovery from illness and maintain health. There was nothing new here. All six competencies were certainly taught to varying degrees, some informally, others more explicitly, prior to 2001. However, by naming these six domains and describing sub-domains, including the skills and features inherent in each, educators and learners were provided with a more explicit map of curricular requirements. A clear, unambiguous, shared vocabulary for discussing, investigating, and administering residency medical education was created, though “it would take time for a shared sense of meaning to develop.” The ACGME invited educators to craft and investigate the efficacy of new techniques to teach and evaluate learners’ performance in each of the six competencies [22].
While some domains already had well-accepted, standardized objective measures (e.g., many specialties administer standardized in-training tests that permit trainees to compare their performance against peers, as well as certifying examinations as a final test of sufficient medical knowledge), there remain to this day very few scientifically tested methods of teaching or evaluating the remaining five competencies. It is also important to note that CanMEDS 2000, a similar undertaking of Canadian medical educators, completed its work just prior to the Outcome Project and articulated overlapping, but not identical, competencies (medical expert, communicator, collaborator, manager, health advocate, scholar, and professional) [24].
The Conflation of Duty Hours and New Educational Requirements
ACGME-imposed restrictions of residents’ duty hours coincided with introduction of the Outcome Project. Some data suggested that sleep deprivation-related impairment contributed to harm of both patients and trainees [25]. Politicization of the Libby Zion case and others like it had propelled duty hour restrictions in New York that were later refined and generalized to the entire country via the ACGME’s regulatory function. While the evidence lagged behind the ideology [26,27], in 2003, duty hours were restricted to no more than 80 hours/week and 30 hours/shift, with at least 10 hours of rest between shifts and 1 day off each week. A second revision in 2011 further restricted interns to shifts of no more than 16 consecutive hours. These rules were assailed for a number of reasons. Aside from lacking firm foundation in (patient) outcomes research, the restrictions substantially truncated the longest continuous period during which trainees could observe/treat illness. Since many acute illnesses take longer than 16 hours to unfold, the restrictions impacted medical training substantively, as well as logistically. More “hand-offs” were needed and could contribute to errors not measured in published studies, while inculcating practice of medicine as a form of shift work (which collides with obligations to patients). Most important, there were virtually no data on the impact of duty hours on the quality of medical education and its outcomes, i.e., the proficiency and professionalism of graduates. Nonetheless, it would be hypocritical to require inculcation and measurement of professionalism in trainees, while simultaneously requiring them to practice (as novices) on vulnerable patients while impaired [28] and to subject themselves to needless personal risks [26]. Negative reactions to duty hours are sometimes conflated with the Outcome Project because they were implemented nearly simultaneously.
In fact, they were independent initiatives, except insofar as both required greater accountability to the ACGME and public.
Results of the Outcome Project
Cons
The ACGME required implementation of the Outcome Project for programs to remain accredited. Resistance was futile, but implementation was not an unequivocal victory. Beyond the duty hours morass, David Leach and colleagues had intended implementation of competency-based education to be coupled with scholarship to build evidence-based educational techniques. Despite repeated entreaties by educational leaders [29], very few articles have been published describing the reliability, validity, and reproducibility of educational methods for teaching or evaluating the competencies.
As a result, objective metrics are lacking more than a decade after launching the Outcome Project. Even measurement of medical knowledge ― the most time-honored competency — remains problematic. Written and, for some specialties, oral/practical examinations of medical knowledge and its application are the final threshold of “Board certification.” But some ― especially cognitive — specialties struggle with objective tools for measuring medical knowledge during training. For example, the American College of Physicians explicitly prohibits use of its yearly Internal Medicine in-training examination for any purpose other than self-improvement of trainees and programs [30]. A pervasive, and arguably the most valid, criticism is that the Outcome Project requires “measurable deliverables,” i.e., that educators should be accountable for their graduates’ performance, yet no data exist to show that the final six competencies (or, more generally, the competency-based construct) are any better than six others that could have been chosen from the original 84. The Outcome Project was not “evidence-based,” and there is no objective evidence that it has positively impacted medical education or the health of our population. This apparent inconsistency does not necessarily mean that renovations of medical education were not needed, simply that it would have been ideal to measure objective parameters of the system before and after implementation to determine whether “it’s working,” i.e., to subject the Outcome Project to the standards of accountability that it required of educators.
When challenged with this apparent inconsistency, Dr. Leach offered that systems as complex as medical education may be more amenable to qualitative methods of evaluation that are non-traditional in medical science, namely “developmental evaluation.” This technique, introduced by Michael Quinn Patton in 1994, is employed in “complex systems” like medical education to initiate and maintain “continuous improvement, adaptation, and intentional change” [31,32]. Practitioners of this technique embrace, or at least are comfortable with, uncertainty (i.e., they don’t need a before-after study with a P-value to feel comfortable with progress) and creatively craft solutions to local problems through iterative trial-and-error problem-solving. That the ACGME did not prescribe local educational practices (to implement the Outcome Project) and instead invited educators to craft their own suggests that it actively employed this methodology. In theory, incremental improvements would accumulate ― especially as programs shared their own local innovations — and at some point educators (and graduates) would look back and conclude that “medical education is a whole lot better now than when I trained.”
Even though the Outcome Project provides a common vocabulary and explicitly clarifies domains of competence and subdomains of performance within each competency, there are few objective data to demonstrate that medical education has improved since 2001. Indeed, the explicit vocabulary creates opportunities for research on educational effectiveness, even if educators have not yet strongly embraced this area of scholarship [29]. The vocabulary also explicitly delineates domains for crafting “milestones,” i.e., objective measures of performance, the next phase of the ACGME educational revolution.
At the same time, educators may view the Outcome Project as an “unfunded mandate.” The task of educators increased exponentially, and many programs did not devote sufficient resources and additional faculty to administer it. There are no standards for how taxpayers’ graduate medical education dollars (which are substantial at $9 to 10 billion/year) are spent [22,33]: some programs plow all the monies into education; others, not so much. With no requirements for how GME money is spent and no public reporting [34], the unfunded mandate could and should have been funded by simply coupling the Outcome Project with greater fiscal transparency [34].
Pros
While it may not be the type of scientific evidence that helps convince physicians, there is an unarguable objective outcome of the Outcome Project that has (peer-reviewed) merit. The Liaison Committee on Medical Education has embraced the Outcome Project as one of several constructs “of the knowledge, skills, behaviors, and attitudinal attributes appropriate for a physician” [35]. The competencies are also used for mandatory Ongoing Professional Practice Evaluations of physicians following training [36]. The Outcome Project vocabulary is also applied in more recent Carnegie [37] and Macy Foundation [38] projects that explore methods to enhance medical education. Accordingly, even with the shortcomings outlined above, this medical educational construct has permeated all levels of medical performance assessment and practice in the United States, from first-learners to advanced clinicians. As medical educators from other countries consider major redesigns of undergraduate and postgraduate medical curricula [39,40], both the Outcome Project and the remarkably similar CanMEDS approach ― to define and delineate specific competencies — may provide a helpful framework or reference.
There is no scientific evidence that Flexner’s work improved medical education or health care; nevertheless, Flexner's revolution is judged by history as a success. His template for undergraduate medical education is still largely applied in most U.S. medical schools. David Leach’s project remains a work in progress, the results of which will not be known for 10 to 20 years, or maybe never, since the changes afoot in medicine and society may dwarf any positive or negative impacts of the Outcome Project. Will the six competencies defined in 2001 remain at the core of medical education and performance-assessment in 2101? Only time will tell. But if so, these innovations could impact physicians and the millions of patients they treat as substantially as those introduced by Abraham Flexner 2 centuries before.
Abbreviations
- ACGME
Accreditation Council for Graduate Medical Education
- GME
graduate medical education
- CME
Council on Medical Education
References
- Flexner A. Medical education in the United States and Canada. Bulletin Number 4 (The Flexner Report). The Carnegie Foundation for the Advancement of Teaching [Internet] [accessed 26 Mar 2012]. Available from: http://www.carnegiefoundation.org/publications/medical-education-united-states-and-canada-bulletin-number-four-flexner-report-0
- Bonner TN. Iconoclast: Abraham Flexner and a life in learning. Baltimore: Johns Hopkins University Press; 2002.
- Barzansky B. Abraham Flexner and the era of medical education reform. Acad Med. 2010;85:S19–S25. doi: 10.1097/ACM.0b013e3181f12bd1.
- Duffy TP. The Flexner report ― 100 years later. Yale J Biol Med. 2011;84:269–276.
- Cooke M, Irby DM, Sullivan W, Ludmerer KM. American medical education 100 years after the Flexner Report. N Engl J Med. 2006;355:1339–1344. doi: 10.1056/NEJMra055445.
- Final Report of the Tuskegee Syphilis Study Ad Hoc Advisory Panel. Washington, DC: U.S. Department of Health, Education, and Welfare, Public Health Service; 1973.
- Protection of human subjects; Belmont Report: notice of report for public comment. Fed Regist. 1979;44(76):23191–23197.
- Ethical Guidelines for the Delivery of Health Services by DHEW. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research [Internet] 1978. Available from: http://videocast.nih.gov/pdf/ohrp_ethical_guidelines_health_services.pdf
- Beauchamp TL, Childress JF. Principles of Biomedical Ethics. New York: Oxford University Press; 2001.
- Rodwin MA. Patient accountability and quality of care: lessons from medical consumerism and the patients’ rights, women’s health and disability rights movements. Am J Law Med. 1994;20:147–167.
- Asch DA, Parker RM. The Libby Zion case. One step forward or two steps backward? N Engl J Med. 1988;318:771–775. doi: 10.1056/NEJM198803243181209.
- Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study. Qual Saf Health Care. 2004;13(2):145–152. doi: 10.1136/qshc.2002.003822.
- Leape LL, Brennan TA, Laird NM, Lawthers AG, Localio AR, Barnes BA, et al. The nature of adverse events in hospitalized patients: results of the Harvard Medical Practice Study. N Engl J Med. 1991;324(6):377–384. doi: 10.1056/NEJM199102073240605.
- Meterko M, Nelson EC, Rubin HR, Batalden P, Berwick DM, Hays RD, et al. Patient judgments of hospital quality. Medical Care. 1990;28:S1–S56.
- Aristotle. Nicomachean Ethics. Ostwald M, translator and editor. New York: Macmillan; 1962. pp. 34–35.
- Wilson JQ. The rediscovery of character: private virtue and public policy. National Affairs. 1985;81:3–16.
- Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21(5):103–111. doi: 10.1377/hlthaff.21.5.103.
- Benner P. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Menlo Park, CA: Addison-Wesley; 1984.
- Benner P, Tanner C, Chesla C. From beginner to expert: gaining a differentiated clinical world in critical care nursing. ANS Adv Nurs Sci. 1992;14(3):13–28. doi: 10.1097/00012272-199203000-00005.
- Sirovich BE, Gottlieb DJ, Welch HG, Fisher ES. Variation in the tendency of primary care physicians to intervene. Arch Intern Med. 2005;165:2252–2256. doi: 10.1001/archinte.165.19.2252.
- Sirovich B, Gallagher PM, Wennberg DE, Fisher ES. Discretionary decision making by primary care physicians and the cost of U.S. health care. Health Aff (Millwood). 2008;27:813–823. doi: 10.1377/hlthaff.27.3.813.
- Leach DC. Changing education to improve patient care. Qual Health Care. 2001;10(Suppl 2):ii54–ii58. doi: 10.1136/qhc.0100054.
- Swing SR, International CBME Collaborators. Perspectives of competency-based medical education from the learning sciences. Med Teach. 2010;32(8):663–668. doi: 10.3109/0142159X.2010.500705.
- CanMEDS 2000: Extract from the CanMEDS 2000 Project Societal Needs Working Group Report. Med Teach. 2000;22(6):549–554. doi: 10.1080/01421590050175505.
- Leung L, Becker CE. Sleep deprivation and house staff performance. Update 1984–1991. J Occup Med. 1992;34(12):1153–1160.
- Barger LK, Cade BE, Ayas NT, Cronin JW, Rosner B, Speizer FE, et al. Extended work shifts and the risk of motor vehicle crashes among interns. N Engl J Med. 2005;352:125–134. doi: 10.1056/NEJMoa041401.
- Landrigan CP, Rothschild JM, Cronin JW, Kaushal R, Burdick E, Katz JT, et al. Effect of reducing interns’ work hours on serious medical errors in intensive care units. N Engl J Med. 2004;351(18):1838–1848. doi: 10.1056/NEJMoa041406.
- Falleti MG, Maruff P, Collie A, Darby DG, McStephen M. Qualitative similarities in cognitive impairment associated with 24 h of sustained wakefulness and a blood alcohol concentration of 0.05%. J Sleep Res. 2003;12:265–274. doi: 10.1111/j.1365-2869.2003.00363.x.
- Sullivan GM, Members of the Editorial Board. Publishing your work in the Journal of Graduate Medical Education. J Grad Med Educ. 2010;2(4):493–495. doi: 10.4300/JGME-D-10-00188.1.
- Garibaldi RA, Trontell MC, Waxman H, Holbrook JH, Kanya DT, Khoshbin S, et al. The in-training examination in internal medicine. Ann Intern Med. 1994;121:117–123. doi: 10.7326/0003-4819-121-2-199407150-00008.
- Patton MQ. Developmental evaluation. Am J Eval. 1994;15:311–319.
- Fagen MC, Redman SD, Stacks J, Barrett V, Thullen B, Altenor S, et al. Developmental evaluation: building innovations in complex environments. Health Promot Pract. 2011;12:645–650. doi: 10.1177/1524839911412596.
- Rye B. Assessing the impact of potential cuts in Medicare doctor-training subsidies. Bloomberg Government Studies [Internet] 2012 [accessed 24 Jan 2013]. Available from: http://about.bgov.com/bgov/files/2012/03/ryestudy.pdf
- Chaudhry SI, Khanijo S, Halvorsen AJ, McDonald FS, Patel K. Accountability and transparency in graduate medical education expenditures. Am J Med. 2012;125:517–522. doi: 10.1016/j.amjmed.2012.01.007.
- Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree, 2011. The Liaison Committee on Medical Education [Internet] [accessed 26 Apr 2012]. Available from: http://www.lcme.org
- Catalano EW, Redman SG, Talbert ML, Knapman DG; Members of Practice Management Committee, College of American Pathologists. College of American Pathologists considerations for the delineation of pathology clinical privileges. Arch Pathol Lab Med. 2009;133(4):613–618. doi: 10.5858/133.4.613.
- Cooke M, Irby DM, O’Brien BC. Educating Physicians: A Call for Reform of Medical School and Residency. Stanford, CA: Jossey-Bass and the Carnegie Foundation for the Advancement of Teaching; 2010.
- Continuing education in the health professions. The Josiah Macy Jr. Foundation [Internet] [accessed 26 Apr 2012]. Available from: http://macyfoundation.org/publications/publication/conference-proceedings-continuing-education-in-the-health-professions
- Pales J, Gual A. Medical education in Spain: current status and new challenges. Med Teach. 2008;30:365–369. doi: 10.1080/01421590801974923.
- Temple J. Time for Training: A Review of the Impact of the European Working Time Directive on the Quality of Training. Medical Education England [Internet] [accessed 3 Apr 2013]. Available from: http://www.mee.nhs.uk/PDF/14274%20Bookmark%20Web%20Version.pdf
