Journal of Microbiology & Biology Education. 2012 May 3;13(1):28–31. doi: 10.1128/jmbe.v13i1.360

Ethical and Practical Similarities Between Pedagogical and Clinical Research

Rachel L Robson 1,*, Vaughn E Huckfeldt 2
PMCID: PMC3577290  PMID: 23653778

Abstract

Clinical research and educational research face similar practical and ethical constraints that impact the rigor of both kinds of studies. Practical constraints facing undergraduate science education research include small sample sizes (largely a result of disproportionate incentives to conduct educational research at small colleges versus large universities), and the impossibility of randomizing individual students to separate arms of a study. Ethical constraints include gaining the informed consent and assuring the confidentiality of study participants, and the requirement of equipoise (i.e., that it is unethical to subject some study participants to an experimental treatment that researchers have good reason to believe to be inferior to another treatment). While these constraints have long been recognized for clinical research, their implications for educational research have not been fully recognized. Criticism that educational research lacks rigor should be tempered by the recognition that educational research is not parallel to laboratory research, but is parallel to clinical research. These parallels suggest solutions to some of the practical and ethical difficulties faced by educational researchers, as well.

INTRODUCTION

A movement to improve undergraduate science education through implementing empirically proven pedagogical techniques has gained momentum over the past two decades. Coincident with this movement, research in undergraduate science pedagogy has increased. As more research has been done to determine whether particular pedagogical interventions are effective in helping students learn science, criticism of such research has grown as well. Among the most common criticisms of studies in undergraduate science education are that most studies involve too few subjects for results to be generalizable, that subjects in studies are rarely randomized to different treatment groups, and that, indeed, most published studies of pedagogical interventions lack a negative control group entirely (15, 16).

Such critics imply that education research is parallel to laboratory research, and that the same criteria for determining quality should apply. But education research is not parallel to laboratory research. It’s parallel to clinical research. This has both practical and ethical implications for establishing the standards to which educational studies should be held. The aim of this paper is not to address the practical and ethical demands on educators in general, but rather the practical and ethical constraints faced by educators in performing pedagogical research.

Practical constraints facing educational and clinical research

Like clinical research, education research often faces practical challenges in recruiting large numbers of study participants. This is in part because a disproportionate amount of the research on undergraduate science pedagogy is conducted at small colleges, whether community colleges or private liberal arts schools. The archives of JMBE and CBE-Life Sciences Education illustrate this. In its history, nearly one-third of all Curriculum or Research articles published in JMBE have been authored by faculty at colleges with fewer than 5000 undergraduates enrolled (n = 21 of 71 such articles published in JMBE since Volume 1, 2000). Similarly, 26% (n = 54) of the 207 research articles ever published in CBE-Life Sciences Education have at least one author from a small college. Many large colleges and universities in the United States offer few incentives to improve teaching, much less to conduct education research rather than basic science research (17). The reverse is typically true at small colleges. Not only are small college professors evaluated for promotion and tenure primarily or exclusively on teaching effectiveness, based on both student course evaluations and assessment data, but also small colleges typically lack the resources to make significant basic science research feasible. Conducting pedagogical research is thus more attractive for small college science professors than it typically is for those at large universities. These factors may make science faculty at small colleges more likely to conduct education research than their peers at large universities. But small colleges, by their nature, cannot provide large pools of potential research subjects for education researchers in the way large universities can. Of course, some large schools reward pedagogical research as smaller schools do. But large schools that provide incentives for pedagogical research in biology are relatively rare.
Of the 45 Research articles published in JMBE over the past decade that were authored by faculty at institutions enrolling more than 5000 students, almost one-quarter (n = 12) were written by researchers at just three schools (University of Maryland-College Park, Cornell University, and University of Puerto Rico). The number of large universities represented in the archives of CBE-Life Sciences Education was similarly limited, with the University of Wisconsin, the University of Colorado, and again the University of Maryland-College Park contributing the bulk of research articles. Even large universities that have begun to reward educational research have found that science faculty are less likely to implement such studies than they are to pursue basic science research; this is true even among faculty who have acquired specializations in science education research (6). At present, for most science faculty at colleges with enrollments high enough to allow for large sample sizes in such studies, there is slight incentive to participate in studies of novel pedagogical techniques.

Clinical research shares with undergraduate educational research the practical problem of recruiting large numbers of study participants. Prestigious medical journals such as the New England Journal of Medicine publish a few reports of single cases, or case series, in every issue. Entire medical journals exist that publish nothing but case reports, and studies of the etiology or treatment of rare diseases often have fewer than 100 subjects enrolled in each arm of the study. But such clinical research articles are not dismissed by critics for failing to overcome the inherent practical hurdles of participant recruitment. And clinical research, unlike pedagogical research, benefits from a massive infrastructure and from financial incentives to ensure the validity of reported results.

One way that clinical research addresses the problems posed by small sample sizes is to randomize study participants into separate arms of a trial. This has been recommended as a solution to the problems posed by small sample sizes in studies of the efficacy of new pedagogies, as well (16). However, randomization of individual students to separate arms of an educational study is not merely impractical but, in most cases, is impossible. Undergraduates at virtually all colleges choose their own schedules, and thus can’t be randomly assigned to experimental versus control sections of a class. Even randomization of sections to different treatments is nearly impossible at the small colleges most likely to reward pedagogical research, as all sections of a single, highly-enrolled course are typically taught by one professor. And randomization of students to separate arms of a study may pose an ethical problem that has long been recognized by clinicians conducting analogous research.

Ethical constraints facing educational and clinical research

The standard of equipoise, while recognized world-wide as key to the ethical conduct of clinical research (3, 19, 22), has not been clearly articulated as applying to pedagogical research. The standard of equipoise is that it is unethical to subject some study participants to an experimental treatment that researchers have good reason to believe to be inferior to another treatment. Equipoise thus requires that genuine uncertainty exist as to whether any treatment given to participants is better than another.

There has been some debate about the proper interpretation of the equipoise requirement, primarily whether it should apply to uncertainty for individual researchers (9), or whether it should apply to uncertainty in the relevant expert community (8). But in either case, equipoise is based on the special relationship between patients, who also comprise a pool of potential research subjects, and physicians who typically conduct such trials (8, 9). This special physician–patient relationship is a relationship of trust, given by the patient and accepted by the physician, that the physician will act in the patient’s best interests and provide the best care possible (8, 9, 20, 21, 22). In addition to this relationship of trust, there is a power differential between the patient and the physician, based primarily on the difference between the physician’s substantial knowledge and the relative ignorance of a patient (10, 14). Together, the relationship of trust and the power differential constitute the physician–patient relationship. This makes it incumbent on the physician not to subject a patient to treatments that the physician has good reason to believe are inferior, even if doing so improves the rigor of a clinical trial that the physician is conducting.

The relationship between educators and students is parallel to that between physicians and patients. Students and educators have a relationship of trust that students will be provided with the best education that is possible for the educator to provide (2, 5). Also, as between physicians and patients, a large power differential exists between students and educators, and likewise, this power differential is based in an inequality in knowledge where the educator has substantial knowledge and the student likely has relative ignorance (2). Furthermore, an additional power differential between educators and students exists when students are required (given their previous choices of school to attend, major field of study, etc.) to take a particular class (which at small colleges may only be offered by one educator). The requirement to take a particular class removes some of students’ power by removing their choices, and the requirement further increases the power inequality between student and educator when students are unable to easily avoid taking a particular class from a particular educator.

This special relationship between educators and students, constituted by the relationship of trust and the inequalities in power, is therefore similar to the physician–patient relationship. Given that such relationships provide the foundation for the ethical requirement of equipoise, education researchers enlisting students in a study of new pedagogical techniques should adhere to equipoise in study design, just as clinical researchers do. While the stakes for clinical research are often higher than those for educational research, the ethical limitations imposed by these parallel relationships are similar.

The requirement of equipoise in pedagogical research does impose restrictions on ethically permissible study designs. Although modern pedagogical research is a relatively young field, there nevertheless exists a growing consensus among pedagogical experts that active learning techniques are superior to pure lecture-based teaching in undergraduate education (11, 12, 13). This consensus is analogous to the common situation in clinical research in which the expert community agrees that a current standard treatment is superior to a placebo. Analogously to the way equipoise forbids the inclusion of placebo arms in such cases in clinical research, equipoise in pedagogical research would forbid the inclusion of a pure lecture-based arm in studies of active-learning pedagogical techniques. At present, the majority of undergraduate science classes are taught using traditional lectures, and active learning pedagogies may be no more effective than lectures in fostering student learning when used by faculty who are not engaged in educational research (4). But how educators who are not also pedagogical researchers conduct class is not relevant to the ethical obligations of pedagogical researchers to their students. For faculty who are educational researchers, there is little uncertainty as to whether active learning pedagogies are superior to lecture alone (4). Further, any particular new pedagogical technique may be not only untested, but also never before considered by the expert community. Hence, in such cases it is reasonable for the individual researcher to default to personal equipoise as the ethical standard. The individual researcher will, in many cases, have some evidence that a new pedagogical technique is superior to an older technique. In this case, equipoise would forbid the inclusion of an arm using the older technique in a study of the new technique. 
Indeed, the National Research Council has recommended that randomized trials of pedagogical techniques at the K–12 level be conducted only if true uncertainty exists regarding the benefits of each arm (18).

Both clinical and pedagogical research involve the use of human subjects. Because of this reliance on human subjects, other ethical constraints on clinical research apply to pedagogical research, as well. As is the case with clinical research, the standards of informed consent, freedom from coercion, and participant confidentiality should also apply to pedagogical research. Student participation in a pedagogical study should thus not be tied to grades in a class, and the identity of study participants should not be disclosed.

Solutions to constraints on educational research suggested by clinical research

Undergraduate educational research is a relatively new field. As this field becomes more established, it is likely that larger colleges and universities will increasingly reward pedagogical research in the same way that they currently reward basic science research. If that happens, many of the problems of small sample sizes in undergraduate educational research may be ameliorated. The more intractable problems of randomization and equipoise will remain.

Lessons learned from clinical research can be used to address these problems. Clinical researchers have addressed the problem posed by small sample sizes in any given study by attempting to improve the generalizability of study results, and the potential for those small-study results to be used in meta-analyses. Clinical researchers have addressed the problem posed by the inability to randomize subjects to treatment arms by improving the rigor with which subjects’ outcomes are assessed. Clinical researchers have addressed the problem posed by the ethical requirement of equipoise by tracking subjects’ outcomes over time rather than just at a single time point. All of these solutions are available to educational researchers, as well.

Generalizability of educational studies can be improved if interventions and assessment methods are applicable to schools beyond the one at which the research was conducted. Equipment and supplies used in pedagogical interventions should be generally available to undergraduate educators. Pedagogical interventions that rely on strains of organisms that are not commercially available should only be published with the promise that these supplies will be provided upon request to researchers at other schools. Cost of supplies for pedagogical interventions should be considered, recognizing that researchers at colleges with smaller budgets may not be able to replicate studies on teaching interventions that are possible at wealthier institutions. Pedagogical interventions studied should be scalable across institutions of various sizes, as well. These ideas inform the criteria for the Science Prize in Inquiry-based Instruction, announced last year (1). Assessment tools used to measure learning outcomes should be published as appendices to all articles. Certainly, external validity of educational research results can be improved if assessment tools are standardized, and methodologies shared, as others have suggested (15, 16). This is why journals such as JMBE are so important. But at present, such standards to improve generalizability are inconsistently followed. In the 2011 volume of JMBE, only two of the four Research section articles provided their assessment tools as published appendices.

The rigor with which educational outcomes are assessed can also be improved by adopting methods used in clinical research to address analogous problems. Wherever possible, outcomes assessed should be able to be determined objectively (e.g., as scores on a multiple-choice test), and should represent students’ actual knowledge, not merely students’ self-perceived learning gains. Statistical analysis should rule out randomness as a possible explanation for students’ scores on multiple-choice tests before any other conclusions about student outcomes are drawn. Of course, assessment tools that use open-ended questions are often preferred for a variety of reasons. In studies that rely on qualitative judgments of students’ answers on such open-ended questions, at least two independent, blinded raters should be used. Inter-rater reliability, preferably as a kappa statistic (7), should be reported. Rubrics used by raters should be published as appendices to articles so that studies using open-ended assessment tools can be replicated.
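The kappa statistic recommended above (7) corrects raw agreement between two raters for the agreement expected by chance alone. As a minimal sketch, Cohen's kappa can be computed directly from two raters' category labels; the ratings below are hypothetical, invented for illustration, and not drawn from any study discussed in this article.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of the raters' marginal frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten open-ended answers by two blinded raters.
rater_1 = ["correct", "correct", "partial", "wrong", "correct",
           "partial", "wrong", "correct", "partial", "correct"]
rater_2 = ["correct", "partial", "partial", "wrong", "correct",
           "partial", "wrong", "correct", "wrong", "correct"]
print(round(cohens_kappa(rater_1, rater_2), 3))
```

Here the raters agree on 8 of 10 answers (p_o = 0.8), but about a third of that agreement is expected by chance, so kappa is noticeably lower than the raw agreement rate. Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.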

Students, like patients, cannot ethically be assigned to a treatment that researchers have good reason to believe is inferior to another. While this constraint precludes some comparative studies of different pedagogical techniques, it can be addressed by including pre-intervention and post-intervention assessments of student understanding. Underlying characteristics of the population of students being studied (such as demographics and previous educational experience) should be reported. Measurement of internal validity of pre-intervention and post-intervention assessment tools should be reported, especially if a study is the first to use a particular assessment tool.
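One common way to summarize pre-intervention versus post-intervention assessments in single-arm studies like those described above is the normalized learning gain (Hake's gain): the fraction of each student's possible improvement that is actually realized. The article does not prescribe this particular statistic, so the sketch below is an illustrative assumption, using invented percentage scores.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: (post - pre) / (max_score - pre).

    Expresses improvement as a fraction of the improvement that was
    possible, making gains comparable across students with different
    starting points. Undefined for students already at max_score.
    """
    return (post - pre) / (max_score - pre)

# Hypothetical pre/post percentage scores for five students.
pre_scores = [40.0, 55.0, 60.0, 35.0, 70.0]
post_scores = [70.0, 80.0, 75.0, 50.0, 85.0]

gains = [normalized_gain(p, q) for p, q in zip(pre_scores, post_scores)]
average_gain = sum(gains) / len(gains)
print(round(average_gain, 3))
```

Because the gain is scaled to each student's room for improvement, a class that starts strong and a class that starts weak can be compared on the same footing, which also helps when pooling small single-site studies.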

These recommendations may seem daunting, especially to educators who are new to educational research. Fortunately, many colleges and universities have support mechanisms, such as centers for teaching and learning, which can help new educational researchers navigate the difficulties of both study design and ethical requirements.

Clinical research and educational research face similar practical and ethical constraints because of their reliance on human subjects with whom researchers have a unique relationship. Stakes for clinical researchers are high, higher than they are for educational researchers. Patients may die waiting for the completion of a clinical trial of a drug that would have saved them. But despite these constraints, medical research has progressed. Educational research has progressed, and will continue to progress, as well. To make that progress, educational researchers should be cognizant not only of the practical and ethical difficulties shared by educational and clinical research, but also of the possibility for shared solutions.

ACKNOWLEDGMENTS

The authors declare that there are no conflicts of interest.

REFERENCES

  • 1.Alberts B. A new college Science prize [editorial]. Science. 2011;331:10. doi: 10.1126/science.1202096. [DOI] [PubMed] [Google Scholar]
  • 2.American College Personnel Association. Statement of ethical principles and standards. ACPA-College Student Educators International; Washington, DC: 2006. [Google Scholar]
  • 3.American Medical Association. Code of medical ethics: current opinions with annotations 2010–2011. AMA; Chicago, IL: 2010. [Google Scholar]
  • 4.Andrews TM, Leonard MJ, Colgrove CA, Kalinowski ST. Active learning not associated with student learning in a random sample of college Biology courses. CBE-Life Sci. Educ. 2011;10:394–405. doi: 10.1187/cbe.11-07-0061. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Association of American Educators Advisory Board and Executive Committee. Code of ethics for educators. Association of American Educators; Alexandria, VA: 2011. [Google Scholar]
  • 6.Bush SD, et al. Science faculty with education specialties. Science. 2008;322:1795–1796. doi: 10.1126/science.1162072. [DOI] [PubMed] [Google Scholar]
  • 7.Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 1960;20:37–46. doi: 10.1177/001316446002000104. [DOI] [Google Scholar]
  • 8.Freedman B. Equipoise and the ethics of clinical research. N. Engl. J. Med. 1987;317:141–145. doi: 10.1056/NEJM198707163170304. [DOI] [PubMed] [Google Scholar]
  • 9.Fried C. Medical experimentation: personal integrity and social policy. North-Holland Publishing; Amsterdam: 1974. [Google Scholar]
  • 10.Hellman S, Hellman D. Of mice but not men: problems of the randomized clinical trial. N. Engl. J. Med. 1991;324:1585–1589. doi: 10.1056/NEJM199105303242208. [DOI] [PubMed] [Google Scholar]
  • 11.Herreid CF. It’s all their fault? JMBE. 2010;11:34–37. doi: 10.1128/jmbe.v11i1.138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Knight JK, Wood WB. Teaching more by lecturing less. CBE-Life Sci. Educ. 2005;4:298–310. doi: 10.1187/05-06-0082. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Mazur E. Farewell, lecture? Science. 2009;323:50–51. doi: 10.1126/science.1168927. [DOI] [PubMed] [Google Scholar]
  • 14.Miller PB, Weijer C. Rehabilitating equipoise. Kennedy Institute of Ethics J. 2003;13:93–118. doi: 10.1353/ken.2003.0014. [DOI] [PubMed] [Google Scholar]
  • 15.Nehm RH. Faith-based evolution education. BioScience. 2006;56:638–639. doi: 10.1641/0006-3568(2006)56[638:FEE]2.0.CO;2. [DOI] [Google Scholar]
  • 16.Ruiz-Primo MA, Briggs D, Iverson H, Talbot R, Shepard LA. Impact of undergraduate Science course innovations on learning. Science. 2011;331:1269–1270. doi: 10.1126/science.1198976. [DOI] [PubMed] [Google Scholar]
  • 17.Sakvar V, Lokere J. Time to decide: the ambivalence of the world of science toward education. Nature Education; Cambridge, MA: 2010. [Google Scholar]
  • 18.Towne L, Hilton M, editors. Implementing randomized field trials in education: report of a workshop. National Academies Press; Washington, DC: 2004. [Google Scholar]
  • 19.Williams JR. World Medical Association Medical Ethics Manual. 2nd edition. World Medical Association; Cedex, France: 2009. [Google Scholar]
  • 20.World Medical Association. Declaration of Geneva. 173rd Council Session; Divonne-les-Bains, France. Cedex, France: World Medical Association; 2006. [Google Scholar]
  • 21.World Medical Association. International Code of Medical Ethics. 57th WMA General Assembly; Pilanesberg, South Africa. Cedex, France: World Medical Association; 2006. [Google Scholar]
  • 22.World Medical Association. Declaration of Helsinki. 59th WMA General Assembly; Seoul, South Korea. Cedex, France: World Medical Association; 2008. [Google Scholar]
