The Journal of Medicine and Philosophy
2016 Jun 14;41(4):401–415. doi: 10.1093/jmp/jhw011

Moral Expertise in the Clinic: Lessons Learned from Medicine and Science

Leah McClimans 1,2,*, Anne Slowther 1,2
PMCID: PMC4986003  PMID: 27302969

Abstract

Philosophers and others have questioned whether or not expertise in morality is possible. This debate is not only theoretical, but also affects the perceived legitimacy of clinical ethicists. One argument against moral expertise is that in a pluralistic society with competing moral theories no one can claim expertise regarding what another ought morally to do. There are simply too many reasonable moral values and intuitions that affect theory choice and its application; on this view, expertise must be epistemically uniform. In this article, we discuss how similar concerns have recently threatened to undermine expertise in medicine and science. In contrast, we argue that the application of values is needed to exercise medical, scientific, and moral expertise. As long as these values are made explicit, worries about a pretense to authority in the context of a liberal democracy are ill-conceived. In conclusion, we argue for an expertise that is epistemically diverse.

Keywords: medical expertise, moral expertise, scientific expertise, values

I. INTRODUCTION

Philosophers and other theorists have questioned whether or not expertise in morality is possible (e.g., Cholbi, 2007, 323–34; Gesang, 2010, 153–9; Archard, 2011, 19–127). This debate is not only theoretical, but also affects the perceived legitimacy of clinical ethicists (Shalit, 1997, 24–28; Rasmussen, 2011, 649–50). If clinical ethicists are to be legitimate members of the healthcare team, then they should provide a form of expertise as other members of the healthcare team do. The expertise that clinical ethicists are typically expected to provide concerns how one ought morally to act. But, this demand for moral expertise has often been taken to be problematic. If moral expertise consists of knowing what one ought morally to do, then clinical ethicists working in a pluralistic and democratic society cannot claim this knowledge for someone else, for example, a patient or clinician. The difficulty resides in the fact that in a pluralistic society there are competing moral theories that guide action, as well as a large range of reasonable values and moral intuitions that affect not only what theory one applies to a particular moral decision, but also how one applies it. Thus, a clinical ethicist’s training cannot ground a claim to authority over what one morally ought to do; there are simply too many legitimate values and moral intuitions.

To highlight the difficulty that competing moral theories, values, and intuitions present, moral expertise is often contrasted with gold standards of expertise, for example, clinical and scientific expertise (Cholbi, 2007, 324–6; Varelius, 2008, 127–32; Gesang, 2010, 156–8; Cowley, 2012, 337–41). These comparisons are meant to emphasize the inadequacies of moral expertise. But, such a framing of the problem is anachronistic. Many of the supposed difficulties with moral expertise are not exceptional; it is now widely accepted that experts in medicine and science also regularly face a plurality of reasonable choices, which require for their resolution individual judgment and the application of values (e.g., Longino, 1990; Montgomery, 2005; Douglas, 2009).

Unfortunately, one of the effects of the recognition of the role that values and subjective judgment play in medicine and science has been to undermine confidence in these forms of expertise (Evidence-Based Medicine Working Group, 1992, 2420–5; Douglas, 2009). Those who argue for the continued worth of expert judgment do so by developing accounts of medicine and science that embrace the values that they understand as inherent to each discipline. Doing so alters the traditional account of expertise as epistemically uniform and replaces it with an account that recognizes epistemic diversity.

II. MORAL EXPERTISE AND THE GOLD STANDARDS

The theoretical literature on the possibility of moral expertise presupposes a contrast—explicit or implicit—with other supposedly less questionable forms of expertise. The nature of these forms of expertise varies, but it is not uncommon for expertise in medicine or science to be treated as the gold standard. Consider a few examples.

In Gesang’s (2010, 153–9) article, “Are moral philosophers moral experts?” he argues that ethicists are only weak or semi-experts. Although ethicists are experts, that is, they reach correct moral judgments with a high probability and for the right reasons, they cannot silence the opinions of nonexperts. For Gesang (2010, 158–9), silencing nonexperts is the litmus test for determining expertise in what he calls its full or strong sense. As Gesang (2010, 158) notes, the physicist can say to his student “You’re wrong,” but the clinical ethicist cannot do the same to those on his clinical ethics committee.

Moral expertise is weak because it is not epistemically uniform. That is, we cannot expect argument and reflection to converge on similar moral conclusions in the way that we expect well-designed trials and experiments in medicine and science to converge on similar results. Gesang (2010, 156–7) considers at some length why moral judgments and experience differ in this respect from scientific judgments and experience.

He argues, for instance, that in science, unlike morality, there is a priority of observation over theory (Gesang, 2010, 156). This priority is due, he suggests, to the fact that experience is less problematic in science than in morality. Scientific theories attend to our observations and must account for these experiences; as a result, they cannot for long stand at odds with observations or experimental results. Moral theories, on the other hand, cannot attend to experience in the same way because our moral experiences are affected by our moral theories and values (Gesang, 2010, 157). Given the dialectic between moral theory and moral experience, and given that the result of this dialectic will differ among individuals with different legitimate moral values and intuitions, moral expertise cannot be strong.

But if moral expertise cannot be strong, for Gesang it can still be a form of expertise. In his response to Gesang, Cowley (2012, 337–42) disagrees: morality is not an expertise of any kind. It is interesting that his argument with Gesang is primarily carried out over how best to understand morality’s contrast with science. According to Cowley (2012, 338–41), Gesang does not recognize the implications of the differences between moral and nonmoral experience. Primary among these differences is that moral judgments cannot be “correct” with a “high probability” and for the “right reasons,” that is, morality cannot be an expertise.

For Cowley (2012, 340), this language of “high probability” and “right reasons” is the language of science or medicine. In these disciplines, one’s predictions or diagnoses can be correct with respect to observation, experimental, or lab results. But, this is not the language that can be used to reference moral judgments. When we justify moral judgments, we do so by articulating what and why certain considerations were decisive in a particular instance, but precisely because moral landscapes are not shared as physical landscapes are, we have no basis on which to call our decisive considerations the “right” ones (Cowley, 2012, 341).

Moreover, when it comes to determining whether our judgments are “correct,” we have similar difficulties. According to Cowley (2012, 340), scientists usually agree about what kind of experimental result they need to corroborate a hypothesis, and even in those cases where this is not true they still usually agree about what type of experiment is needed to test a hypothesis. Ethicists, however, do not enjoy this kind of agreement; for example, what kind of moral experience or outcome could possibly be decisive in the abortion debate? Again, Cowley (2012, 340) attributes this difference to the fact that for science interpreting observations and experimental results is more or less unproblematic, while moral experiences yield different judgments for different people. As Cowley (2012, 340) puts the point, “All scientists are answerable to a singular realm of discoverable facts. But the same facts may well have different moral significance for different individuals.”

Shalit (1997, 24–8) makes a similar point, using medicine as the gold standard. She argues that the main difficulty with treating clinical ethicists as moral experts is that the criteria on which they base their recommendation will vary from ethicist to ethicist. This variation, she argues, is not found in medicine and thus accounts for why we can have clinical expertise. Unlike the surgeon, who can say, “x is the right procedure to do at this time,” the clinical ethicist cannot say, “y is the right thing to do now.” The surgeon’s expertise rests on a set of agreed facts about the body and what outcomes determine a successful surgical intervention, that is, a surgeon’s expertise is epistemically uniform. But, the ethicist faces legitimate disagreement about what facts are relevant and what outcomes indicate moral action, for example, fewer bad consequences or fidelity to principles.

In Varelius’s (2008, 127–32) article, “Is ethics expertise possible?” he also compares moral expertise to scientific expertise. But, Varelius makes a point different from Gesang, Cowley, or Shalit. He argues that moral expertise is relevantly similar to scientific expertise and thus:

…unless we want to conclude that there is no scientific expertise either, we should allow that ethical expertise is possible, even though ethics is concerned with moral right and wrong. (Varelius, 2008, 128)

Although his point is different, the argument he has with those such as Cowley and Shalit is similar to the argument Cowley has with Gesang: how does moral expertise fare when compared to science or medicine? Because Varelius contends that moral expertise is similar to scientific expertise, he argues that it is, in principle, possible.

According to Varelius (2008, 128–9), moral expertise is similar to scientific expertise because, contrary to both Gesang and Cowley’s characterization, science is a value-laden enterprise. Citing post-positivist philosophers of science from Thomas Kuhn to Hilary Putnam, Varelius argues that theories and values affect everything from observations, to hypothesis selection, to data analysis. Moreover, he further argues that the kind of expertise that scientists provide is relevantly similar to the kind of expertise ethicists can provide (Varelius, 2008, 129–30). For instance, scientists can answer questions about the causes of natural phenomena and in doing so clarify, systematize, and extend one’s thinking about the natural world. Likewise, ethicists can clarify, systematize, and extend one’s thinking about the ethical world by explicating the theoretical and practical commitments of a particular moral stance.

III. SKEPTICISM OVER CLINICAL AND SCIENTIFIC EXPERTISE

How moral expertise compares to medical and scientific expertise matters in the debate over its possibility. Whether arguing for or against it, philosophers and others invoke medicine and science as gold standards to which moral expertise ought to conform. But, over the last few decades the epistemic value of medical and scientific expertise has been questioned. This skepticism is due to the growing recognition of the way that values and intuitions affect expert advice in these disciplines.

Medicine

In 1992, the evidence-based medicine working group published “Evidence-based medicine: A new approach to teaching the practice of medicine” (Evidence-Based Medicine Working Group, 1992, 2420–5). This article heralded the arrival of evidence-based medicine (EBM), a new paradigm for medical practice. This paradigm promised to “de-emphasize intuition, unsystematic clinical experience and pathophysiologic rationale as sufficient grounds for clinical decision making…” (Evidence-Based Medicine Working Group, 1992, 2420). In its place EBM would accentuate the use of evidence from clinical research.

The introduction of EBM is both prosaic and revolutionary. On the one hand, what doctor does not use evidence in his clinical decision-making (Grahame-Smith, 1995, 1126)? On the other, EBM was and remains an attempt to undermine clinicians’ expert authority (Charlton, 1997, 87–98). Indeed, in their 1992 article the Evidence-Based Medicine Working Group writes:

The new paradigm puts a much lower value on authority. The underlying belief is that physicians can gain the skills to make independent assessments of evidence and thus evaluate the credibility of opinions being offered by experts. (Evidence-Based Medicine Working Group, 1992, 2421)

Clinicians did not take such a denigration of their expertise lying down. In 1995, the Lancet ran an editorial arguing that EBM should build on, rather than de-emphasize, clinicians’ experience. In 1996, members of the Evidence-Based Medicine Working Group appeared to agree and responded in the British Medical Journal that EBM is the integration of clinical expertise with the best external evidence (Anonymous, 1995, 785; Sackett et al., 1996, 71–2).

But, EBM’s commitment to this integration has been repeatedly questioned. For instance, in an effort to clarify what counts as the “best external evidence,” hierarchies of evidence were developed. These hierarchies typically rank evidence according to the internal validity of its study design. The result is that systematic reviews and randomized controlled trials (RCTs) are typically ranked at the top, with expert opinion ranked at the bottom. As Upshur, Van Den Kerkhof, and Goel (2001, 91) note, the result of these evidence hierarchies is that the original tenets of the EBM working group’s “manifesto” remain unaltered.

The original tenets of EBM explicitly aim to de-emphasize clinical intuition and experience, two important aspects of clinical expertise to which we might add a third: clinical judgment (Montgomery, 2005). By de-emphasizing these aspects of expertise, proponents of EBM seek to decrease the uncertainty of medicine in both diagnosis and treatment (Goldenberg, 2006, 2623).

Uncertainty in medicine arises from at least two different sources. First, medical diagnosis and treatment recommendations occur within the context of incomplete information. Patients describe what ails them in their own words, possibly leaving out important factors; clinicians can order diagnostic tests, but they can never order all the tests that may be relevant; and even seasoned clinicians may be inexperienced in the face of some complaints. Second, clinicians work within a surfeit of information. Patients give their history, providing relevant and irrelevant details; clinicians perform a physical exam, gathering relevant and irrelevant information; and clinicians may order diagnostic tests whose results may or may not be relevant to a diagnosis and treatment.

In the face of incomplete, but nonetheless abundant, information, clinical experience, values, and judgment come into play (Montgomery, 2005). Clinicians must use experience and values to bridge the lack of information, and they must use their judgment to determine what pieces of information are relevant to a particular case. The result is that different clinicians may diagnose and treat the same or similar patients differently; expertise is epistemically diverse. For instance, prior experience with patients may lead one clinician to make a different diagnosis than another clinician with a different set of patient experiences. Moreover, the values that a clinician holds can affect treatment recommendations, for example, with regard to the same set of symptoms some clinicians are reluctant to prescribe antibiotics, while others are more willing. Furthermore, as Kennedy (2013, 487–500) argues, how clinicians exercise their judgment affects their diagnostic ability. All of these differences help to explain the usefulness of second opinions.

EBM seeks to reduce some of the uncertainty of medicine by minimizing the effects of experience, values, and judgment. As the Evidence-Based Medicine Working Group notes, the first assumption of the EBM paradigm is that interpretation of information derived from clinical experience and intuition may be misleading (Evidence-Based Medicine Working Group, 1992, 2421). In place of clinical experience and intuition, that is, expertise, systematic and unbiased evidence from epidemiologic studies will provide confidence (where previously we had uncertainty) in prognosis, the value of diagnostic tests, and the efficacy of treatment. Or as Maya Goldenberg (2006, 2623) has put the point:

Evidence-based practices are…enormously appealing in the age of moral pluralism; rather than relying on explicit values that are likely not shared by all, “the evidence” is proposed to adjudicate between competing claims.

Goldenberg (2006, 2623–4) goes on to criticize the idea that evidence can adjudicate between competing claims, arguing that such a notion presupposes, as Cowley put it, a “shared physical landscape,” when in fact post-positivist philosophers of science have convincingly argued that observations are theory laden and our theory choices are underdetermined by the data. Goldenberg is not alone in criticizing EBM and its ambitions. Most recently, Greenhalgh et al.’s (2014, g3725) article in The BMJ asks if EBM is a movement in crisis. One crisis point is that EBM stifles the development of clinical expertise. They argue that the de-emphasis on clinical experience and intuition combined with an emphasis on evidence summaries and clinical guidelines decreases tolerance of uncertainty and the ability to apply practical and ethical judgment to clinical cases (Greenhalgh et al., 2014, g3725).

These outcomes are problematic in part because the very use of evidence requires clinical judgment and thus clinical expertise. For instance, clinical guidelines are helpful, but clinicians need to use judgment in determining how to apply them; the published results of RCTs are valuable pieces of information, but clinicians need to use judgment to determine whether their particular patient is sufficiently similar to the trial population to benefit from the treatment. Greenhalgh et al. (2014, g3725) invoke the Dreyfus model of skill acquisition and note that the most advanced stages of this model are characterized by intuitive reasoning and the selective use of evidence, that is, advanced expertise. Ironically, they argue that by de-emphasizing clinical experience and intuition, EBM overlooks the evidence on expertise.

Science

Philosophers, sociologists, and others have been debating the epistemic status of expert scientific claims since at least the 1970s. Early in the debate, proponents of the social construction of science suggested that scientific knowledge claims are no different in status from any other knowledge claim, while others argued that scientific knowledge claims have a special status that should compel acceptance (Douglas, 2009, 5–7). When moral philosophers and other theorists such as Gesang, Cowley, and Shalit use science or medical science as a gold standard to which moral expertise should conform, they assume that science does have special epistemic authority, that is, that scientific knowledge claims are more believable than other knowledge claims. It is this assumption that allows them to use it as a gold standard. When philosophers such as Varelius invoke science, they recognize that although science may be partially constructed, it still functions with epistemic authority, and thus it is still a gold standard despite the role that values play in its epistemic claims.

In Douglas’s (2009) book, Science, Policy and the Value-Free Ideal, she examines the role of science advisors, that is, those scientific experts who advise governments on policy related to science. The expertise required of science advisors closely resembles the expertise of clinicians and ethicists, in that science advisors must also apply their expert knowledge to make a recommendation to another person (Douglas, 2009, 134). It is thus useful to examine Douglas’s argument for the role that social and ethical values play in science advising.

Douglas (2009, 87–114) argues that social and ethical values are necessary to any science that has a public role, that is, any science that has a role in policy, medicine, technology, and so forth. The reason for this necessity requires a two-part explanation. First, we must understand why values play a role in science at all, and then we must understand why the values employed when science plays a public role are specifically social and ethical values. Although most philosophers of science accept that science involves the use of epistemic values such as consistency, accuracy, and fruitfulness, many have been less willing to accept that science requires the use of nonepistemic values, that is, social and ethical values.

In the previous section, we mentioned that scientific theories are underdetermined by the data. Put otherwise, empirical evidence is insufficient to determine what beliefs we should hold. Consider an example. In 2003, Toronto researchers published research that found that when prelingually deafened adults, that is, adults who were deafened before they learned to speak, receive cochlear implants, they report that their quality of life increases (Kaplan et al., 2003, 245–9). This evidence, however, underdetermines what we should believe regarding the role that hearing plays in a good quality of life and what kind of policy regarding cochlear implants we ought to pursue.

Although cochlear implants may improve self-reported quality of life, it is unclear whether this is because hearing makes one’s life better or because deaf individuals lack resources and social support to lead a good quality of life. Encouraging the funding of cochlear implants through a national or private insurance scheme may further reduce the resources and support needed to live a good quality of deaf life. On the other hand, withholding funding could thwart a good quality of life if hearing is part of such a life (McClimans, 2010, 74–5).

Because evidence underdetermines what we should believe, scientists must use their judgment to determine when they have evidence that is sufficiently strong to support a particular belief. Put another way, there is always an element of uncertainty in the use of scientific evidence. As Douglas (2009, 103) notes, this uncertainty is present at various points throughout a scientific study, from selecting standards of statistical significance, to the characterization of evidence throughout a study, to the interpretation of the evidence at the end of the study. This uncertainty is overcome only when scientists use their judgment—their expertise—to determine which standard, characterization, claim, or theory is indicated.

When science has a public role—when, for instance, a study has the potential to affect public policy or medical treatment options—then, Douglas argues, the use of expert judgment draws on social and ethical values. When science has the potential to affect others—and it very often does—then the values employed in using one’s judgment should be connected to an individual’s perception of what is at stake, should one make a mistake. Scientists ought to evaluate the social and ethical consequences of error (Douglas, 2009, 87). Douglas (2009, 50) quotes Richard Rudner in making this point:

…Obviously our decision regarding the evidence and respecting how strong is “strong enough,” is going to be a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis…. How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be. (Rudner, 1953, 2)

Veatch (2005, 213–7) takes a similar position. In his discussion of the policy recommendations made by the Committee to Assess the Safety and Efficacy of the Anthrax Vaccine, he argues that questions of “safety” and “effectiveness” go beyond the scientific data and require those on the committee to make evaluative judgments using ethical and other normative values. He argues that in determining the safety of the anthrax vaccine the committee needs to evaluate the relative importance of its benefits and negative side effects. But, how one evaluates this importance is in part a function of how one evaluates the importance of the activities that warrant protection from anthrax, for example, Middle Eastern wars, the postal service, etc.

The seriousness of mistakenly accepting the anthrax vaccine as safe will depend in part on how one understands the legitimacy of a particular war or the necessity of the postal service. For instance, although the Committee noted that there was not enough data to draw conclusions about the long-term side effects of the vaccine, they did not take this uncertainty to outweigh the benefits of the vaccine, given that they took the short-term side effects to be comparable to those of other vaccines (Veatch, 2005, 216–7). Another group of researchers, skeptical of the legitimacy of the war, might be less concerned about how the short-term effects of the anthrax vaccine compare to those of other vaccines and might instead emphasize the uncertainty of the long-term consequences.

This example and others like it illustrate the kind of disagreement that can affect science advising (Douglas, 2009, 149–50). By the 1970s, science advising was fully institutionalized as part of the US policy process, but the reality of expert disagreement undermined the idea that science would make policy making more rational. Indeed, the addition of science advising seemed to add a level of uncertainty and contention to an already complicated and contentious process (Douglas, 2009, 138). Federal agencies responded to this uncertainty not unlike those in the EBM movement: they attempted to formalize the process through which scientists provided expertise, thus relying as little as possible on the subjective judgment of scientists.

This formalization took place within the context of risk assessment. By the 1980s, scientists were increasingly asked to provide expertise on their assessment of risk in a variety of contexts. This demand inevitably requires judgment on the part of the scientific expert, since complete scientific information regarding risk in these contexts is lacking. Recognizing the lack of complete scientific information, yet wanting to minimize the role of social and ethical values in scientific expertise, the National Research Council recommended that federal agencies using expert advice should develop inference guidelines (Douglas, 2009, 142–3). Inference guidelines anticipate those areas in science where the data is incomplete and explicitly determine how scientists should proceed in the face of this uncertainty. For example, an inference guideline might direct scientists to assume that if a chemical causes cancer in rodents, it is likely to cause cancer in humans (Douglas, 2009, 143).

The use of inference guidelines has been controversial and difficult to implement in a way that minimizes the use of values in scientific expertise. As Douglas (2009, 149) writes, a rigid inference guideline that infers human cancer from rat cancer ignores scientific evidence to the contrary, but a more flexible guideline that allows for the use of this evidence would require a case-specific judgment on the weight of evidence needed to overrule the inference from rat to human cancer. Such a judgment would resurrect social and ethical values, and thus undermine the very point of inference guidelines.

IV. EXPERTISE AND VALUES

Subjective judgment, which draws on experience and values, is part of clinical and scientific expertise. This is largely a result of the uncertainty inherent in these two disciplines. While this uncertainty has led to movements that attempt to minimize the judgment, experience, and values required to exercise expertise, as we have discussed, there are increasingly vocal critics who argue that these elements are characteristic of expertise.

But if judgment, experience, and values are part of expertise, then how should we deal with disagreement and concerns regarding the erosion of value pluralism in a democratic society? Recall that for Gesang (2010, 152, 158), strong expertise was identified by one’s ability to silence the opinions of others. If expertise in its strongest form involves judgment, experience, and values, then others should not be silenced in the face of expertise. Indeed, one of Shalit’s (1997, 27) concerns about moral expertise in the context of clinical ethics is its pretense to authority in the context of a liberal democracy. To the extent that expertise in science and medicine also involves value judgments, this criticism seems applicable to these disciplines as well.

Douglas (2009, 155) directly addresses the concern over scientific expertise and democracy. She argues that in place of inference guidelines, scientists providing expertise ought to make the values on which they base their judgments more transparent. Doing so, she suggests, would maintain the integrity of science while also allowing for democratic accountability in policy making. Values, once made explicit, can be discussed and evaluated; understanding what scientists are willing to risk and why can help agency officials make better informed decisions. It can also help to keep them accountable, since it is then possible to point to a decision and the values that supported it and ask why such a decision was made. Moreover, scientists need not apply values to policy recommendations in isolation; the public can assist in making the evaluations necessary for judgments about risk. Indeed, as Douglas (2009, 157–64) discusses, the National Research Council has also suggested increased public involvement in making judgments regarding risk assessment.

The importance of making one’s values explicit has also been emphasized in clinical medicine, specifically in the context of the physician–patient relationship. As Emanuel and Emanuel (1992, 6) argue, physicians should help to articulate the values that underpin different clinical recommendations and discuss why they feel some of these values are more worthy than others. Part of the argument for this position rests on the fact that different values underpin different approaches to diagnosis and recommendations for treatment. When clinicians are explicit about the values that they endorse and why they endorse them, then patients are in a better position to evaluate their ethical and normative fit with a particular clinician. As in policy advising, when patients understand the values that motivate their clinician’s choices, they are able to understand better what their clinician is willing to risk and why.

Moreover, making values explicit in the physician–patient relationship can also help to prevent coercion. When clinicians present recommendations solely on the basis of medical evidence, patients’ reluctance or refusal to follow them can be construed as irrational. Allowing values to have a role in conversations about medical decision-making can provide a space for the kind of democratic pluralism Shalit and others wish to preserve. To be sure, in the process of making values explicit, clinicians (and others) may manipulate medical decision-making, that is, use a values-based discussion as cover for a different agenda. But, perhaps this concern is less a reason to abandon values-based discussions than it is a reason to institutionalize them.

Fulford (2013, 544) makes a similar claim in his defense of values-based practice. Values-based practice is an approach to medical decision-making that goes beyond the physician–patient relationship to work with complex and conflicting values throughout health care (Fulford, 2011, 976). The purpose of values-based practice is to achieve transparent and balanced decision-making among the values of those involved and to do so through a skills-based approach that emphasizes an explicit awareness of values (Fulford, 2013, 537). But, critics have argued that liberal democratic approaches such as this one, which claims to “respect” the “diversity” of the values of those involved and aim at “balanced” decision-making, too quickly gloss over the way in which power operates (Brecher, 2011, 997). The reality is that while the language of respect, diversity, and balance is used, it is the values of “whoever holds effective power” that are likely to prevail (Brecher, 2011, 997).

In his response, Fulford (2013, 544) argues that while values-based practice is not immune to this problem, such an outcome would be a misuse of the practice. Moreover, one way that values-based practice attempts to protect against this particular misuse is through the dissemination of a training program for front line staff regarding awareness of values, reasoning about values, knowledge of values, and communication skills, for example, eliciting and understanding values and resolving conflicts.

V. MORAL EXPERTISE AND CLINICAL ETHICS

These two visions of the scientific and clinical expert suggest that making values explicit, and supporting programs to expand involvement in these discussions to, for example, front line hospital staff and the non-expert public, can help to safeguard expertise against erosion by value pluralism and allow us to understand better the basis on which experts disagree.

But, how do these visions of scientific and medical expertise affect our understanding of the possibility of moral expertise, in particular the possibility of moral expertise as exercised by clinical ethicists?

The uncertainty that drives the use of judgment and values in scientific and clinical expertise is also present in morality: we are often uncertain about what we ought morally to do. As in medicine and science, this uncertainty comes from incomplete information and the need to judge what aspects of a situation are and are not (morally) relevant. For instance, different normative theories compete with one another and we have incomplete information about which one is better (Frey, 1978, 50–1). Moreover, in part because moral theories compete with one another, the relevant moral features of a situation are also uncertain, that is, should we focus on the consequences of action? On principle? Does it matter that one of the people involved is my mother and not a stranger? Despite this uncertainty, we are able to make moral judgments by relying on our experience and values. To be sure, Gesang (2010, 157) is correct: our experiences are in a dialectic with moral theory and our values. But it is only through this dialectic that we can come to decide how we ought to act.

Moral expertise is often taken to consist in knowing what one ought morally to do. Some, such as Frey (1978, 50) and Gesang (2010, 153–9), have argued that whether or not one accepts the possibility of moral expertise turns on one’s ability to justify the answer to the theoretical question “What makes right acts right?” In other words, the possibility of moral expertise requires us to ask whether or not some moral judgments are consistently better than others. But, whether or not a moral judgment is a good one depends on the normative ethical theory that one uses to illuminate the relevant considerations in a situation. Thus, moral expertise turns on whether or not we can justify the use of a particular normative theory as more adequate or correct than others. Our discussion of expertise in medicine and science, however, suggests that securing such a theoretical platform is not necessary for expertise.

Recall that one of the areas where scientists encounter uncertainty is in the characterization and interpretation of evidence, and this requires them to draw on their experience and values to make a judgment about what theoretical framework is required. Different experts will choose differently. In medicine, clinicians encounter uncertainty in the characterization and interpretation of patient histories, physical symptoms, and diagnostic test results. Clinicians, like scientists, must draw on their experience and values to make these judgments. Part of the experience and values on which they will draw comes from their training. As such, osteopathic doctors, for example, may characterize and interpret their patients differently than allopathic doctors, given their different theoretical backgrounds.

The gold standards of expertise, that is, medicine and science, are rife with uncertainty and are characterized by a reliance on experience, values, and judgment, which can lead to differences in theoretical orientation. We have argued that when clinical and scientific experts make recommendations, these recommendations should be qualified with an understanding of the values and experiences that justify them. Thus, medical and scientific expertise is less about “getting it right” or silencing non-experts than it is about providing recommendations that make explicit the values and/or experiences that motivate them. Philosophers and other theorists who question the legitimacy of clinical ethicists because of the multiplicity of legitimate values and moral intuitions that affect what theory one applies to a particular moral decision and how one applies it are misguided. As we have shown, expertise is consistent with the multiplicity of experiences and values, which can lead to very different judgments by experts in the same field regarding what one should (morally, practically, strategically) do.

To be sure, there are other concerns that one might have regarding moral expertise and the legitimacy of clinical ethics, which we have not considered in this paper. For instance, if, as we have argued, expertise is epistemically diverse, then what is the basis on which a clinical ethicist is considered a moral expert? In other words, over what area of knowledge or skills do clinical ethicists rightfully claim mastery (e.g., Rasmussen, 2011, 649–61)? Complete answers to this question go beyond the scope of this paper, but here there are legitimate differences between medical and scientific expertise on the one hand and the moral expertise of a clinical ethicist on the other. Although we have standards for the knowledge and skill sets that clinicians and scientists must master, there is disagreement regarding what characterizes the role of a clinical ethicist, and thus what body of knowledge they need to master (e.g., Aulisio and Arnold, 2008, 417–24; Agich, 2005, 7–24).

Nonetheless, to the extent that the debate about moral expertise and the legitimacy of clinical ethics turns on the provision of recommendations in the context of competing moral theories and a wide range of reasonable values and moral intuitions, we argue that moral expertise is possible, and clinical ethics is legitimate insofar as experts make clear the values and experiences that motivate their judgments. As moral experts, clinical ethicists do not offer decisive opinions, but then neither do clinicians or scientists. To think otherwise is to misunderstand the nature of expertise.

VI. CONCLUSION

We have argued that the gold standard models of expertise in medicine and science do not presuppose epistemic conformity. Expert recommendations in medicine and science are uncertain and characterized by experience, values, and judgment. As a result, when making expert recommendations, clinicians and scientists should acknowledge the values and experiences that shape the construction and use of the body of knowledge that constitutes their expertise. Thus, the presence of competing moral theories and the wide range of legitimate values and moral intuitions that characterize moral decision-making are not fatal to the attempt to characterize a concept of moral expertise. This said, there are other concerns about the possibility of a moral expert qua clinical ethicist that are not addressed in this article: namely, over what area of knowledge or skills do clinical ethicists rightfully claim mastery? This is an important question and should continue to be pursued.

REFERENCES

  1. Agich G. J. 2005. What kind of doing is clinical ethics? Theoretical Medicine and Bioethics 26:7–24.
  2. Anonymous. 1995. Evidence-based medicine, in its place. Lancet 346:785.
  3. Archard D. 2011. Why moral philosophers are not and should not be moral experts. Bioethics 25:119–27.
  4. Aulisio M. P. and Arnold R. M. 2008. Role of the ethics committee: Helping to address value conflicts or uncertainties. Chest 134:417–24.
  5. Brecher B. 2011. Which values? And whose? A reply to Fulford. Journal of Evaluation in Clinical Practice 17:996–8.
  6. Charlton B. G. 1997. Restoring the balance: Evidence-based medicine put in its place. Journal of Evaluation in Clinical Practice 3:87–98.
  7. Cholbi M. 2007. Moral expertise and the credentials problem. Ethical Theory and Moral Practice 10:323–34.
  8. Cowley C. 2012. Expertise, wisdom and moral philosophers: A response to Gesang. Bioethics 26:337–42.
  9. Douglas H. 2009. Science, Policy, and the Value-Free Ideal. 1st ed. Pittsburgh, PA: University of Pittsburgh Press.
  10. Emanuel E. J. and Emanuel L. 1992. Four models of the physician–patient relationship. Journal of the American Medical Association 267:2221–6.
  11. Evidence-Based Medicine Working Group. 1992. Evidence-based medicine. A new approach to teaching the practice of medicine. The Journal of the American Medical Association 268:2420–5.
  12. Frey R. G. 1978. Moral experts. Personalist 59:47–52.
  13. Fulford K. W. M. 2011. The value of evidence and evidence of values: Bringing together values-based and evidence-based practice in policy and service development in mental health. Journal of Evaluation in Clinical Practice 17:976–87.
  14. Fulford K. W. M. 2013. Values-based practice: Fulford’s dangerous idea. Journal of Evaluation in Clinical Practice 19:537–46.
  15. Gesang B. 2010. Are moral philosophers moral experts? Bioethics 24:153–9.
  16. Goldenberg M. J. 2006. On evidence and evidence-based medicine: Lessons from the philosophy of science. Social Science & Medicine 62:2621–32.
  17. Grahame-Smith D. 1995. Evidence based medicine: Socratic dissent. BMJ 310:1126–7.
  18. Greenhalgh T., Howick J., Maskrey N., and the Evidence Based Medicine Renaissance Group. 2014. Evidence based medicine: A movement in crisis? BMJ 348:g3725.
  19. Kaplan D. M., Shipp D. B., Chen J. M., Ng A. H. C., and Nedzelski J. M. 2003. Early-deafened adult cochlear implant users: Assessment of outcomes. The Journal of Otolaryngology 32:245–9.
  20. Kennedy A. 2013. Differential diagnosis and the suspension of judgment. Journal of Medicine and Philosophy 38:487–500.
  21. Longino H. E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.
  22. McClimans L. 2010. Towards self-determination in quality of life research: A dialogic approach. Medicine, Health Care, and Philosophy 13:67–76.
  23. Montgomery K. 2005. How Doctors Think: Clinical Judgment and the Practice of Medicine. 1st ed. New York: Oxford University Press.
  24. Rasmussen L. M. 2011. An ethics expertise for clinical ethics consultation. Journal of Law, Medicine & Ethics 39:649–61.
  25. Rudner R. 1953. The scientist qua scientist makes value judgments. Philosophy of Science 20:1–6.
  26. Sackett D. L., Muir Gray J. A., Haynes R. B., Rosenberg W. M. C., and Richardson W. S. 1996. Evidence based medicine: What it is and what it isn’t. BMJ 312:71–2.
  27. Shalit R. 1997. When we were philosopher kings: The rise of the medical ethicist. New Republic 216:24–8.
  28. Upshur R., Van Den Kerkhof E. G., and Goel V. 2001. Meaning and measurement: An inclusive model of evidence in health care. Journal of Evaluation in Clinical Practice 7:91–6.
  29. Varelius J. 2008. Is ethical expertise possible? Medicine, Health Care, and Philosophy 11:127–32.
  30. Veatch R. M. 2005. The roles of scientific and normative expertise in public policy formation: The anthrax vaccine case. In Ethics Expertise: History, Contemporary Perspectives, and Applications, ed. Rasmussen L., 211–25. Dordrecht, the Netherlands: Springer.
