Academic medicine has been described as a three-legged stool with one leg each for teaching, patient care, and research. The assumption is that everyone knows how difficult it is to balance on those three legs without either the faculty or the housestaff falling off the stool and flat on their academic reputation. Historically, for those training programs not based in the major medical school setting, the short leg has usually been research. Until quite recently, “scholarly activity” was defined by the medical school standard of publications in peer-reviewed journals and presentations at national meetings. In other words, research and scholarly activity were interchangeable terms for that third, and usually shortest, leg of the stool. I would like to explore how we got there, where we are now, and where I think we are going in the near term.
HISTORY
The history of modern medical education is usually divided into three distinct eras, beginning with the Flexner Report of 1910. Abraham Flexner's report, written for the Carnegie Foundation (1), exposed the chaos of medical education at the turn of the century and set out the blueprint for medical school education as we know it: university-based medical schools staffed by faculty engaged in original clinical and basic research and populated with medical students actively learning through both clinical practice and investigation. Proprietary medical schools, profit-based hospitals, learning by rote with little or no patient contact, and brief medical apprenticeships were finally condemned. Though the revolution that culminated in Flexner's report had begun in American medical schools in the 1870s, the Flexner Report marked the end of 19th-century medical education, much as World War I marked the true end of the 19th century. Although all three legs of the stool were finally recognized as important, modern medicine passed through three distinct eras in the 20th century. In each era, one leg of the stool was longer than the others.
The three eras of modern medicine are usually accepted to be: first, the period from World War I to World War II; second, the era from World War II to around 1965; and third, the period from 1965 until the present (2). The era between the wars was the “educational era,” in which the focus was getting the medical school curriculum established and developing the best methods of turning medical students into competent physicians. It was during this period that the 4-year curriculum, divided roughly in half between rigorous basic science and clinical experience, was standardized. The focus was on medical students, and the amount of information transfer was manageable. There was little in the way of postgraduate specialty training. The product of American medical schools became the envy of the world.
The period after World War II was the “research era.” Driven by incredible developments in technology, research methodology, and federal funding, medical school research became ascendant. With the expansion of the National Institutes of Health during this era, as much as 60% of medical school budgets came to be funded by federal grants and contracts. In this time frame, it was common for the dominant focus of medical school faculty to be research and not clinical practice or even teaching. In addition, it was during this era that medical schools became full-fledged academic equals of the other colleges in their respective universities. It was also during this era that specialty residency training became more defined and matured into the system we now know. This research-dominated culture changed dramatically with the passage of the Medicare (Social Security Amendments) Act in 1965.
Suddenly, with the establishment of Medicare and Medicaid as a funding source for clinical practice, medical school professors were paid for taking care of patients. Millions of indigent patients on charity wards throughout the medical school system became paying patients overnight. The emphasis quickly shifted along with the dollars, and clinical practice was pushed to the fore, as taking care of patients became more than just a source of teaching material or research data. During this period surgeons—or any physician group with a lavishly reimbursed procedure—became the sweethearts of the medical schools, hauling in a rich harvest of cash for the university. Medicare even came to pay for medical education itself, by reimbursing teaching hospitals for the money supposedly lost through the inefficiency of caring for patients and time spent in teaching housestaff. For one brief shining moment, there seemed to be enough money and resources available to balance the stool: enough patients for teaching, enough grant money and time for research, and enough government money to be paid for clinical practice.
PRESENT SITUATION
Although we are still in the waning phases of this era, by the 1990s the exponential expansion of medical technology, shifting demographics, and public expectations had combined to exhaust the available resources. So we have managed care combined with steadily decreasing government funding for both medical education (through Medicare) and research. We see a steady ratcheting up of the number of patients to be seen, procedures to be done, and forms to be filled out, combined with less time for teaching and no more protected time for research. What do you mean, he is in the lab? Why is he not seeing patients or doing surgery? The three-legged stool is tilting again. As the patient care leg grows longer and longer, the teaching leg and especially the research leg are growing shorter and shorter.
In the short-lived era in the 1970s and 80s when the stool was almost in equilibrium, individual medical departments were able to balance their faculty with doctors of varying interests and aptitudes. Since most of you reading this article probably trained during this era, I am certain you remember most of your mentors as great teachers, great investigators, or great clinicians. Many combined two of these traits, a respected clinician and teacher, for instance, but it was rare to find the triple threat. Those who had lengthy bibliographies were usually generating lines of investigation that were actually pursued by some PhD in their laboratory, or were doing clinical research often pulled together with little more than, “Say, Mike, why don't you look up the hip replacements I've done after Girdlestone arthroplasty?”
As the demands of clinical practice soared, it became more and more difficult for medical educators to be even two, let alone three, things at once. We struggled to remain good teachers while cranking out the patients. The research that had once driven medical education in the period from World War II to the inception of Medicare in 1965 became the poor cousin of the busy clinician. At the same time, there was a justifiable expansion of the regulation of clinical research. The establishment of Institutional Review Boards (IRBs) in 1974, the publication of the Belmont Report (3) in 1978, the increased public scrutiny given to several highly visible human research-related calamities (e.g., the suspension of federally funded research at Johns Hopkins in 2001 following the death of a human volunteer in a study), and most recently the HIPAA regulations have collectively raised the bar for clinical research to dizzying levels.
Dizzying perhaps, but few would argue that the current regulatory fever is unfounded or unnecessary. When I was a medical student in the early 1970s, if you wanted to do a paper, you made a hypothesis, wrote your own protocol, and had at it. Study design, protection of patient privacy, and ethics were largely up to the individual department, if not the individual investigator. By the time I was a resident in the mid-1970s, there were the first rudimentary institutional review boards, but it was still quite loose. Where I trained, at the Mayo Clinic, I suspect we were actually progressive in adhering to the emerging regulations and were well enough staffed and funded to comply, but this was not the norm until well into the 1980s. Patient privacy was certainly low on the list of priorities while conducting clinical research. It was common to see a patient's name or initials on an x-ray in a presentation or even in a paper. Databases were unprotected and unsecured. Necessary as all these regulations are, they have added significantly to the administrative burden of doing quality clinical research at a time when clinicians are finding it even harder to carve time away from the demands of seeing patients.
NEW DEFINITION OF SCHOLARLY ACTIVITY
So, if that is where we are now, with increased demands of clinical practice, heightened regulation of research activities, and decreased funding for both research and medical education, where does this leave medical education? What is a residency program director or a department chair to do? The days of the person who sees 10 patients a week, operates one day, and cranks out papers in the lab for the other three days are gone. Unless he or she brings to the department an NIH grant and significant independent (usually corporate) funding, he or she has gone the way of the passenger pigeon. I think it is in response to these changes that the Accreditation Council for Graduate Medical Education (ACGME), in its most recent iteration of the common requirements for all residency programs, has redefined scholarly activity. Here is the new language:
“The responsibility for establishing and maintaining an environment of inquiry and scholarship rests with the faculty, and an active research component must be included within each program. Both faculty and residents must participate actively in scholarly activity. Scholarship is defined as one of the following:
- The scholarship of discovery, as evidenced by peer-reviewed funding or publication of original research in peer-reviewed journals.
- The scholarship of dissemination, as evidenced by review articles or chapters in textbooks.
- The scholarship of application, as evidenced by the publication or presentation at local, regional, or national professional and scientific meetings, for example, case reports or clinical series.
- Active participation of the teaching staff in clinical discussions, rounds, journal club, and research conferences in a manner that promotes a spirit of inquiry and scholarship; offering of guidance and technical support, e.g. research design, statistical analysis, for residents involved in research; and provision of support for resident participation as appropriate in scholarly activities.” (4)
Now, this language leaves a little room for maneuver. There is a place here for the investigator, the teacher, and the clinician, as long as he or she actively participates in the teaching program in a way that “promotes a spirit of inquiry and scholarship.” Though there is still an emphasis on publications and presentations, there is recognition that scholarly activity can mean teaching at journal club, in conferences, and at the bedside. Even more striking to me is the notion that scholarly activity is defined not as all of these, but as “one of the following.” This seems to me to be a recognition that the days of the triple threat are behind us.
In the end, it will be the interpretation of the ACGME, through its individual Residency Review Committees reviewing the scholarship of individual residency programs, that tells us whether this is, in fact, a new definition of scholarly activity. Nevertheless, I am encouraged. I would encourage all residency and fellowship program directors to begin measuring their faculty's scholarship according to these four criteria: the scholarship of discovery, the scholarship of dissemination, the scholarship of application, and active participation. Surely, a department that has no publications or presentations and only active participation to show for scholarly activity will be cited for these shortcomings. However, a balanced approach, with some members publishing while others take a more active role in teaching, conducting journal clubs, and even participating in study design and critique, should fall well within compliance with this newer language.
Therefore, as we approach the centennial of the Flexner Report of 1910, we have seen medical education shifting its balance, constantly trying to stay up on that three-legged stool, teetering first on the side of teaching, then research, and finally clinical practice. This balancing act is as old as the modern era of medical education. Perhaps a new definition of scholarly activity will finally make it easier to achieve that balance, and we can get on with the business of training the next generation of physicians.
REFERENCES
- Flexner A. Medical Education in the United States and Canada. New York: Carnegie Foundation for the Advancement of Teaching; 1910.
- Ludmerer KM. Time to Heal: American Medical Education from the Turn of the Century to the Era of Managed Care. New York: Oxford University Press; 1999.
- The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: US Government Printing Office; 1978.
- ACGME. “Common Program Requirements,” V.E.1. July 2003. www.acgme.org/DutyHours/dutyHoursCommonPR.pdf