Abstract
In his book, For the Common Good: Philosophical Foundations of Research Ethics, Alex John London argues that the current framework for human research ethics and oversight is an assortment of rules, procedures, and guidelines built upon mistaken assumptions, policies, and practices that create spurious dilemmas and serious moral failings, and that his theory can fix these problems by placing human participant research on a solid philosophical foundation. London argues that human participant research is a social activity guided by principles of justice in which free and equal individuals work together to promote the common good. In this review essay, I summarize, analyze, and criticize London’s approach to the foundations of human research ethics. Although I criticize London’s theory of human research ethics for being excessively idealistic, I think his book succeeds in showing why it is necessary to expand the scope of human research ethics beyond its current confines to adequately deal with questions of intranational and international justice.
Introduction
Ethical guidance and oversight of research with human participants has developed largely in reaction to historical abuses of human beings, such as the Nazi research on concentration camp prisoners, the Tuskegee Syphilis Study, the Willowbrook Hepatitis Experiments, and the Jewish Chronic Disease Hospital Study. Most of these episodes involved harming and exploiting imprisoned, institutionalized, socioeconomically disadvantaged, or otherwise vulnerable people to achieve scientific or social objectives. The sociopolitical response to these abuses, as well as more recently discovered ones (such as the US government’s secret human radiation experiments), has been to adopt ethical and legal standards to protect the rights and welfare of human research participants and restore and promote trust in the research enterprise. These standards consist of an assortment of ethical codes and guidelines (such as the Nuremberg Code and the Declaration of Helsinki), regulations (such as the US Common Rule and Food and Drug Administration regulations), influential government documents (such as The Belmont Report), and legal cases (such as Grimes v. Kennedy Krieger Institute). Because these standards have arisen in response to diverse social, political, economic, and institutional pressures and conditions, they lack an overarching unity (Wertheimer 2010; Resnik 2018).
Investigators and research administrators who have learned how to operate within this system often view the absence of a unifying foundation for human research ethics and oversight as an annoyance because it can generate bureaucratic decisions that appear inconsistent, arbitrary, or obstructionist, but they still find it to be tolerable because it permits science to move forward with the requisite social and legal approval. Some philosophers and ethicists, such as Alan Wertheimer (2010), argue that human research ethics does not require a foundation. Others worry that the lack of a unifying foundation for human research ethics is a serious flaw that needs to be rectified. Alex John London falls into the latter camp.
In his book, For the Common Good: Philosophical Foundations of Research Ethics, London (2022) argues that the current framework for human research ethics and oversight is built upon mistaken assumptions, policies, and practices that create spurious dilemmas and serious moral failings [1] and that his theory can fix these problems and place human participant research on a solid philosophical foundation. According to London, “The philosophical foundations of research ethics are underdeveloped and riven with fault lines that create uncertainty, ambiguity, and disagreement. The goal of this book is to rethink these foundations and to articulate an alternative in which research is recognized as a collaborative social activity between free and equal persons for the purpose of producing an important social good (London 2022, p. 3).”
London’s book is the culmination of arguments and ideas he has developed in previously published papers dealing with topics such as the standard of care in international research, equipoise and randomization in clinical trials, risk assessment, prospective review of research, and the social value of research (e.g. London 2000, 2001, 2005, 2006, 2007, 2012, 2019; London and Zollman 2010). The resulting crystallization of thought in this book gives the reader a better understanding of where London stands on these issues and what motivates his viewpoint. The interpretation of London’s work that comes through in his papers, and even more clearly in his book, is that he is an egalitarian in the tradition of John Rawls (1971, 1993, 2001), although, as we shall see below, he diverges from Rawls on some key points concerning international justice.
London seeks to achieve several objectives in his book: 1) to identify and diagnose some problems with the current framework for human research ethics [2] and oversight; 2) to critically engage with other authors on various philosophical and moral issues relating to human research; 3) to describe and defend his own philosophical foundation for human research ethics; and 4) to offer some proposals for reforming the current framework. Although London is highly critical of the current framework, he is not interested in overthrowing it but seeks to “expand both the scope of problems that are seen as falling within the purview of the field [of research ethics] and the range of actors whose conduct should be the subject of ethical assessment (London 2022, pp. xvii-xviii).”
I will not explore the intricacies of London’s philosophical exchanges with his opponents in this review essay, since these discussions are somewhat abstruse and may not be of interest to the general reader. I will also not spend much time discussing London’s reform proposals, since these are not very well developed in his book. Instead, I will focus on London’s diagnosis of some of the problems with the current framework and the treatment he offers for them.
London’s Diagnosis of the Problems with the Current Framework
London argues that spurious dilemmas and moral failings arise in research with human participants because the current framework is not founded on a comprehensive moral theory that gives adequate attention to larger questions of justice. According to London, the current framework for human research ethics and oversight (or what he calls “orthodox research ethics”) consists of assumptions, policies, and practices that lead to spurious dilemmas and moral failings. Some of these assumptions, policies, and practices are the following (see London 2022, Part I, pp. 3-113):
- Human research ethics is primarily concerned with relationships and interactions among various parties (i.e. individuals and organizations) directly involved in research, such as participants, researchers, academic institutions, and private and public sponsors.
- Parties have interests and agendas (e.g. career advancement, profit, and knowledge and product development) that need to be controlled to prevent abuses of human participants.
- The chief mechanisms for control are legal and ethical rules that protect the rights and welfare of human participants.
- The rules are inherently protectionist and paternalistic because they restrict freedom to prevent people from harming others or themselves.
- The locus for addressing and enforcing ethical and legal rules is the Institutional Review Board (IRB) or Research Ethics Board (REB), which is a local committee composed of scientists and laypeople that reviews and oversees human studies to ensure that they comply with ethical, legal, and community standards.
- Questions of social value rarely arise in IRB discussions because it is generally assumed that research is socially valuable if it is likely to generate knowledge.
- Justice, to the extent that it is considered at all, is minimalistic and procedural and does not address larger questions related to the social and economic impacts of the research enterprise or the social and economic conditions under which research takes place.
- Moral issues in research that extend beyond relationships among parties directly involved in the conduct of research, such as the amount of evidence needed for a regulatory agency to approve a new drug, public funding of research, or the nature and scope of intellectual property protections, are dealt with by the larger social, political, and legal system and are outside the purview of the IRB.
I will briefly discuss some spurious dilemmas and moral failings London thinks this framework creates. The first relates to the patient/participant-clinical investigator relationship (see London 2022, Chapter 6). One of the most deeply held assumptions in human research ethics is that there is an inherent ethical dilemma in clinical research arising from the investigator’s conflicting duties to patients and to science. The investigator has a fiduciary duty to act in the patient’s best interests by recommending and applying therapies that promote the patient’s health, but the investigator also has duties to conduct research in a manner that is likely to produce generalizable knowledge that benefits other patients and society (Resnik 2009a). The dilemma manifests itself in various controversies in clinical research ethics, such as whether it is acceptable to randomly assign patients to different treatment arms of a controlled trial (e.g. placebo or no treatment vs. experimental or standard care vs. experimental), give patients placebos in clinical trials, or perform risky research procedures on patients (such as tissue biopsies) that are not in their best medical interests.
The standard approach to such dilemmas is to say that they can be solved by obtaining valid informed consent from patient/participants and ensuring that therapeutic interventions are in equipoise prior to the initiation of a randomized, controlled trial (Resnik 2009a). Consent provides a moral justification for the clinician to act in ways that are not in the patient/participant’s best interests (Miller and Brody 2002), and equipoise, i.e. the idea that the clinical community does not know which treatment is better prior to the initiation of a clinical trial (Freedman 1987), justifies randomly assigning treatment because it is not known which treatment is in the patient’s best interests.
London argues that the standard approach to these dilemmas is mistaken because it is based on false assumptions (or dogmas) concerning the nature of the patient/participant-clinical investigator relationship. The first dogma is that these ethical dilemmas arise because the clinical investigator has conflicting duties (or role obligations) to the patient and to science, and that the patient’s best interests are narrowly construed as personal interests, rather than interests in promoting the common good. If one assumes that patients who agree to participate in research are interested in engaging in an activity that promotes the basic interests of all members of society, then clinical investigators do not necessarily act against their patients’ interests when they involve them in research because they are both working together for the common good. Hence, role obligations do not generate inherent ethical conflicts because one’s duties as a clinician are consonant with one’s duties as a scientist (London 2022, pp. 239-242). The second dogma is that research is a utilitarian endeavor that would sacrifice participants’ rights and welfare for the advancement of knowledge if ethical constraints were not in place. If one views research, instead, as a cooperative social activity in which parties with different interests, talents, and abilities pursue a common social good, then participants can be assured that their rights and welfare will be respected and protected (London 2022, pp. 247-252). Therefore, the idea that clinical investigators have conflicting moral duties based on role obligations rests on mistaken assumptions that create illusory dilemmas.
The moral failings of the current system have to do with its inadequate attention to larger issues of justice in research. Questions of justice arise when we ask whether actions, decisions, policies, institutions, or socioeconomic conditions are fair or equitable. Philosophers and political theorists have distinguished between two types of questions relating to justice: a) distributive justice, i.e. whether a distribution of benefits (such as wealth, opportunities, or health) and burdens (such as poverty, deprivation, and disease) is fair or equitable; and b) procedural justice, i.e. whether the procedures or processes that distribute benefits and burdens are fair or equitable (Rawls 1971). In human research ethics, whether the selection of research participants involved in a study is fair or equitable is a distributive question, and whether the procedures for recruiting and enrolling participants are fair is a procedural one.
London argues that the current framework for research ethics and oversight does not adequately deal with larger (i.e. social, economic, and political) issues of justice because it employs a minimalistic and procedural concept of justice. Although the principle of justice stated in The Belmont Report (National Commission 1979) says that the benefits and burdens of research should be distributed fairly, it has very little of substance to say about what makes a distribution fair and emphasizes preventing vulnerable populations from being exploited (London 2022, pp. 27-33). While recognizing that vulnerable populations need to be protected from exploitation was an important development in research ethics, it had the effect of discouraging the inclusion of certain populations in research, such as children, mentally disabled adults, prisoners, and women who were or could become pregnant, which created gaps in scientific knowledge of human health that adversely impacted these populations (Mastroianni & Kahn 2001).
The Common Rule (Department of Health and Human Services 2017) requires that the selection of subjects be equitable and that additional protections are in place for subjects who may be vulnerable to coercion or undue influence, but it does not define the term “equitable” and has nothing at all to say about justice. In fact, the Common Rule discourages IRBs from delving into larger issues of justice when evaluating the reasonableness (or justification) of risks, since IRBs should not consider the “possible long-range effects of applying knowledge gained in the research (e.g., the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility (Department of Health and Human Services 2017 at 45 CFR 46.111(a)).”
Other important documents, such as the Declaration of Helsinki (World Medical Association 2013) and the guidelines from the Council for International Organizations of Medical Sciences (2016), offer some useful recommendations concerning justice in human participant research, but they focus more on practical guidance for researchers, IRBs, and sponsors, and do not include substantive analyses of justice.
A consequence of the lack of attention to justice in the current framework for human research ethics and oversight is that IRBs tend to focus on matters relating to procedural fairness, such as ensuring that consent documents are readable and understandable and minimizing the risk of coercion or undue influence during the consent process, but they do not address larger questions of justice. Since these larger questions are not adequately addressed, they give rise to difficult ethical dilemmas for IRBs, institutions, and sponsors, such as:
- Is it ethical to conduct research in a low-income country if the medical benefits of the research will accrue primarily to people living in high-income countries?
- Is it ethical to include a placebo-control group in a clinical trial in a low-income country if the same study would be unethical to conduct in a high-income country because an effective treatment is available in the high-income country?
- Is it ethical to conduct a clinical trial in a setting where most of the participants lack access to health care without offering to provide them with ancillary care (i.e. care over and above what is needed to achieve the scientific aims of the study)?
- Is it ethical to conduct a Phase IV clinical trial of an approved drug (or post-marketing study) if the benefits of the study are likely to accrue primarily to the manufacturer of the drug and not to society?
London discusses some solutions to these problems of justice that have been proposed by professional organizations (such as the World Medical Association and the Council for International Organizations of Medical Sciences) and scholars and scientists (such as Participants in the 2001 Conference on Ethical Aspects of Research in Developing Countries 2002 and Wertheimer 2010), but he argues that their solutions are suboptimal because they do not adequately address larger issues of justice or they address these issues in an unprincipled and inconsistent way.
London’s Solution: Egalitarian Research Ethics
London argues that his approach to the philosophical foundations of human participant research provides satisfactory solutions to the problems he has identified in the current framework. London’s defense of his approach (or theory) occurs at various places in the book as he examines different problems with the current framework and critiques arguments and solutions offered by other scholars. So, what is London’s theory? Here are some salient quotes:
[R]esearch is a scheme of social cooperation that serves a public purpose grounded in considerations of justice. One such consideration of justice concerns the claims that community members have on the goals and ends that are advanced by the research enterprise. Following the egalitarian research imperative, the public purpose of research is to generate the knowledge necessary to bridge gaps in the capacity of the basic social institutions of a community— such as its system of public health and clinical medicine— to safeguard and advance the basic interests of that community’s members…participants [must] have credible social assurance that in taking on the purpose of advancing the common good they will not be subject to arbitrary treatment, including antipathy or abuse, exploitation, domination, or other forms of unfair treatment…in producing socially valuable information this scheme of social cooperation must respect the status of stakeholders—including study participants—as free and equal
(London 2022, pp. 251-252).
It [the research imperative] includes investing social resources, founding institutions, and establishing the rules and norms that are necessary to promote scientific research across the full lifecycle of knowledge development and deployment. It also includes the use of social authority to align the incentives of a wide range of actors who produce health-related information with the common good. Intellectual property laws, patent protections, the evidentiary thresholds necessary to secure regulatory approval, and the scope of the indication for which interventions can be marketed and sold are a few examples of policy decisions that shape the incentives of funding agencies, private sector firms, researchers, regulators, and other actors. Because these activities involve the exercise of state authority and because these decisions impact which questions are likely to be investigated in research and whether gaps in the ability of basic social institutions to advance the basic interests of community members are widened or closed, they implicate questions of justice and must be justifiable to community members as advancing the common good
(London 2022, p. 153).
The lesson to learn from recent debates about the ethics of international research is not that we need to purge international frameworks of appeals to requirements that are grounded in justice and that implicate a wider range of stakeholders. It is that we need to recognize justice as the first virtue of social institutions, acknowledge that research with humans is a scheme of social cooperation involving a wide range of stakeholders that both calls into action and feeds into important social institutions, and we need to hold both domestic and international research to the requirements of the egalitarian research imperative. I refer to the resulting view as the human development approach to international research
(London 2022, p. 375).
The human development approach treats justice as fundamentally concerned with the basic social structures of a society and whether they work to secure for all community members the fair value of their basic human capacities…It also recognizes, however, that in the nonideal world in which we live, the basic social institutions of most communities fall short of the requirements of justice. This shortfall is the motivation for a larger project of human development that takes these basic social structures as its focus
(London 2022, p. 379).
Those who are familiar with the work of Rawls can see that London’s approach to human research ethics has much in common with Rawls’ views on justice. Indeed, London favorably cites Rawls 39 times in his book. Since some readers of this essay may not be familiar with Rawls’ views on justice, it will be useful to describe them briefly here.
In his book A Theory of Justice, Rawls (1971) provides a philosophical defense of two principles of justice based on the notion of a social contract or what he calls the Original Position. In the Original Position, rational agents convene for the purpose of forming a system of mutually beneficial social cooperation and adopting rules that will govern that society. The rules will structure the basic institutions of society, such as legal, economic, political, and educational systems. To ensure that the rules adopted by these rational agents are fair and impartial, Rawls imagines that the agents are behind a Veil of Ignorance, which means that they do not know who they are in that society. They do not know their race, gender, income level, age, and so on. Rawls argues that the rational agents in the Original Position would adopt two principles of justice that govern the distribution of things that any person would need to have a fulfilling life, which he calls primary goods. The first principle is that moral, legal, and political rights and liberties should be distributed equally, and the second principle is that socioeconomic goods (such as income and wealth) can be distributed unequally only if there is equality of opportunity in society and the distribution is in the interests of the least advantaged members of society (Rawls 1971). As one can see from this brief sketch, Rawls’ theory is contractarian because it justifies principles of justice in terms of a social contract; it is egalitarian because it treats people as morally, legally, and politically equal and mandates equality of opportunity; and it is idealistic because the social contract is a hypothetical situation that has never occurred.
With this understanding of Rawlsian theory in mind, we can see that London’s approach is also contractarian because it treats human participant research as a collaborative enterprise involving members of a community to produce knowledge and applications of knowledge (such as medical treatments and public health interventions) that serve the common good. The common good is equated not with the good of society in general or the aggregation of community members’ personal interests but with the basic or generic interests shared by all members of the community. Basic interests are things that anyone would need to have a fulfilling life, such as wealth, health, opportunities, social relationships, food, shelter, and so on (London 2022, p. 133).
London’s approach is egalitarian because it treats members of the community as free and equal. London interprets equality to include moral, legal, and political equality, which means that members of the community should have equal rights to be treated with dignity and respect, equal rights to legal due process, and equal rights to political participation. Equality also includes equality of opportunity (London 2022, pp. 149-152). London does not say much about what is involved in securing equality of opportunity, but since London is a Rawlsian, it is safe to assume that he believes some redistribution of wealth and other resources may be necessary to ensure that individuals from different socioeconomic, racial/ethnic, and cultural backgrounds have similar opportunities to lead fulfilling lives. Presumably, he would favor levying taxes to provide support for public education, health care, scientific research, and other public goods (London 2022, pp. 154-159).
London’s approach is idealistic because it is concerned with what justice would be under certain unrealistic assumptions; for example, that people cooperate to achieve common ends and that they treat each other as free and equal partners in this endeavor. London acknowledges that we live in a world that does not meet these ideal conditions, but he nevertheless believes that we can and should move society in that direction, which is also a Rawlsian idea.
Because London is taking a broad view of justice, his approach to justice in human participant research extends well beyond the traditional units of analysis in research ethics, i.e. participants, researchers, and the review board, and includes all stakeholders involved in the research enterprise, such as public and private sponsors, academic institutions, government agencies, scientific journals, and professional organizations. It also applies to laws, regulations, and policies that affect the production of knowledge and its applications, such as intellectual property protections, public research funding, medical product regulation, and health care systems (London 2022, p. 153). Widening the scope of ethical analysis allows London to address larger issues of justice related to the research enterprise, such as what types of studies are done, how new medical technologies are approved, and how medical goods and services are distributed.
It is also important to point out how London diverges from Rawls. In his book The Law of Peoples, Rawls (2001) developed a theory of international justice that was very different from his theory of national justice. Rawls argued that the principles of justice defended in A Theory of Justice (Rawls 1971) do not apply internationally because the principles are justified only within a system of social cooperation that is enforced by laws, that is, a government; and since there is no world government, there is no international justice. Rawls did recognize, however, that there are some rules of international politics. For example, nations should respect each other’s sovereignty and honor agreements they form with other nations. Although Rawls (2001) thought that nations have duties to render aid to other nations, he did not support international redistribution of wealth because this would require a world government, which he thought was neither realistic nor desirable.
London parts company with Rawls on questions of international justice. According to London, the system of social cooperation that forms the basis of justice is not limited by national boundaries, and the common good includes the good of all people in the world (London 2022, pp. 375-376). International justice requires that nations take steps to rectify historical wrongs they have inflicted on other nations, such as colonialism, slavery, and resource exploitation; and that richer nations transfer wealth to poorer ones (London 2022, p. 395). International research, on this view, is a means of promoting human wellbeing and development globally. Human participant research conducted in low-income countries must be subsumed under the larger mission of human development.
Critique of London’s View
In the remainder of this essay, I will offer two criticisms of London’s view.
My main criticism is that London’s view provides impractical solutions to human research ethics issues because it is an idealistic theory far removed from the economic, social, psychological, and political realities of modern science. London’s ideal theory portrays various stakeholders involved in the research enterprise as working together to achieve the common good, but in the real world people act in ways that contravene pursuit of the common good because they have competing financial, personal, or political interests and motivations, such as career advancement and funding (for investigators), access to treatment and money (for participants), profit (for private research sponsors), and prestige and revenue enhancement (for academic institutions) (Resnik 2007). Government agencies that fund or regulate research sometimes cater to political demands that work against the common good (Resnik 2009b).
The rules and policies that constitute the current system of research ethics and oversight have arisen as a pragmatic way of coping with the realities of the modern research enterprise. The system allows research to move forward in a manner that protects widely recognized values, such as human rights and welfare, and promotes public trust (Resnik 2018). The rules constitutive of the system are protectionist and paternalistic because they have been designed to prevent investigators, sponsors, and institutions from harming or exploiting participants and to ensure that research is acceptable within the local community and larger society.
The IRB plays an important role in this system because it administers the rules and serves as an intermediary between participants, investigators, sponsors, institutions, and communities. However, the scope of IRB review and oversight is limited to what is necessary to protect human participants and promote public trust. IRBs do not, for the most part, tackle larger issues of social and economic justice because doing so could paralyze research review since these issues are often so divisive. The IRB addresses only smaller issues of justice directly related to the study design or consent process, such as selection, recruitment, and enrollment of research participants (Resnik 2018).
Conflict of interest (COI) is an area of research review and oversight where London’s idealism runs headlong into the realities of modern research. COIs are ubiquitous in research because investigators, sponsors, and institutions often have interests that are at odds with the interests of participants and local communities. COIs—at the individual and institutional level—can bias judgment and decision-making and lead to violations of ethical and legal rules that govern research. COIs must be disclosed, managed, and sometimes prohibited to protect the interests of participants, local communities, science, and the public (Resnik 2007; Shamoo and Resnik 2022). London’s idealistic theory does not adequately deal with COIs because it treats participants, investigators, sponsors, and institutions as working together to promote the common good. Indeed, although London devotes considerable attention to conflicts of obligations or duty in his book, the phrase “conflict of interest” never appears in it, and the phrase “financial interests” appears only once. This, to me, is indicative of the idealism that runs throughout the book. COIs may not be a problem in an ideal world, but they clearly are in the real one.
London could respond to this criticism by arguing that he does not deny human research oversight should include rules for disclosing and managing COIs, since these rules would be part of the system of incentives and constraints that “align the parochial interests of these parties with the common good (London 2022, p. 166).” Moreover, he could argue that his theory is superior to minimalist approaches to research ethics because it addresses the legal, social, and economic structures and relationships that create COIs. London could use his theory to argue for various reforms to intellectual property laws or public funding of research, for example.
I will grant London this point, but I would still maintain that focusing too much on these bigger questions of justice can be counterproductive. Society cannot afford to wait to conduct life-saving clinical studies or approve new medical products until the whole human research system is more equitable and just. IRBs cannot settle larger questions of social and economic justice related to research proposals before deciding whether to approve them. Research must be done in an imperfect world in a timely fashion. London anticipates this sort of objection to his book, but he does not find it to be compelling (London 2022, pp. 418-419).
International research is another area where London’s idealism is out of touch with reality. London’s theory extends social cooperation in service of the common good to the entire world and views research as part of the larger mission of human development. While this all sounds good in theory (who could argue with making the world a better place?), it has little to do with the way international research operates. Pharmaceutical companies outsource research to low-income countries so they can have access to low-cost labor, treatment-naïve populations, and a lax regulatory environment, all of which helps them to earn a profit. While companies may undertake research projects with humanitarian goals, such as developing a malaria vaccine, these goals cannot interfere with the bottom line because companies have duties to shareholders (Resnik 2001). If local governments require companies to spend too much money on activities not directly related to research, such as providing ancillary care or making improvements in the health care infrastructure, they may take their business elsewhere (Resnik 2018). Investigators from industrialized nations conduct research in low-income countries to advance scientific knowledge and their own careers but not necessarily to benefit those countries. While most of these investigators probably believe that their research is likely to benefit people living in those countries and some of them may even work with local populations to address needs identified by those populations, they might refrain from conducting research in those countries if funders or local governments required them to meet stringent demands related to securing local benefits from their research. Even government research organizations with good intentions may not have the funding authorization to support activities not directly related to research, such as providing ancillary care or improving health care infrastructures. If they go beyond their legal mandate, they risk public backlash and loss of funding.
The upshot of all this is that London’s human development approach to international research will not work without a world government with the power to coordinate research regulations and rules in different countries, control the behavior of private companies and researchers, and transfer research funding and other resources from richer nations to poorer ones. If we try to make London’s theory work in the current political milieu, companies, investigators, and government research funders will opt out of research or take various measures to evade regulatory burdens. Since we do not now have a world government and are not likely to have one for the foreseeable future, London’s approach to international human research ethics is no more than a pipe dream. The best we can do at present is work toward pragmatic solutions to problems of international justice in human research ethics and make gradual progress on issues of equity and fairness (Resnik 2001; 2018).
My second criticism of London’s book is that it grounds human research ethics on a controversial egalitarian theory that may not be widely accepted among members of the public or scholars. Moral disagreement is a fact of modern life (Rawls 1993; Gutmann and Thompson 1998). Members of the public disagree, often passionately, about questions of morality, law, and politics. These disagreements run deeper than untutored opinion and are reflected in moral and political theorizing. Philosophers and political theorists have developed many different theories that could serve as a foundation for human research ethics, including utilitarianism, Kantianism, natural law theory, natural rights theory, libertarianism, communitarianism, virtue ethics, care ethics, and various forms of egalitarianism. There are also many different approaches to international justice, including cosmopolitanism (London’s view), nationalism (Rawls’ view), and others. Given the diversity of public and scholarly opinion on these issues, why should we think that London’s theory is preferable to alternative theories? It appears that London has not built his foundation for human research ethics on solid ground.
Many philosophers who write about applied ethics, such as Thomas Beauchamp and James Childress (2019), have responded to the fact of moral pluralism and disagreement not by insisting that they have developed the one “correct” moral theory but by defending a pragmatic approach to the foundations of ethics that borrows insights from different theories and traditions. In their Principles of Biomedical Ethics, now in its eighth edition, Beauchamp and Childress (2019) articulate and defend four principles for making ethical decisions in biomedicine—autonomy, beneficence, nonmaleficence, and justice—and apply these principles to various topics, such as informed consent, confidentiality, and medical assistance in dying. These principles are supported, in different ways, by conceptually diverse theories, such as utilitarianism, Kantianism, and egalitarianism. It is no accident that the Belmont Report’s three principles of human research ethics bear a striking resemblance to Beauchamp and Childress’s principles of biomedical ethics, since Childress gave testimony to the National Commission that wrote the report and Beauchamp was a staff assistant for the Commission (Beauchamp 2005).
In my book on human research ethics, The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust (Resnik 2018), I defend an approach to human research ethics that includes Beauchamp and Childress’s four principles (with some slight modifications) and adds a fifth principle, the principle of trust. I argue that trust among various stakeholders, including participants, investigators, sponsors, institutions, government agencies, communities, the public, and IRBs, weaves together the ethical fabric of human research. Indeed, without a high degree of trust the whole system would unravel. I argue that the principle of trust complements and supports the four other principles and can help resolve ethical dilemmas. For example, understanding which policies best promote trust can help us decide whether there should be limits on the risks that healthy volunteers can be exposed to in research, whether it is acceptable to include a placebo control group in a clinical trial, whether ancillary care should be provided to human participants, and what counts as fair benefit sharing in research (Resnik 2018). My approach is pragmatic and pluralistic and grounded in a firm understanding of the history, sociology, psychology, economics, and politics of human participant research.
Conclusion
While I do not agree with London’s overall approach to foundations of human research ethics, I have learned a great deal from reading his book. The book has helped me to see familiar issues in a different way and has prompted me to rethink my own views. London succeeds in showing why it is necessary to expand the scope of human research ethics beyond its current confines to adequately deal with questions of intranational and international justice. He also succeeds in developing a rigorous and thoughtful approach to the foundations of human research ethics that is likely to stimulate further inquiry and debate. Based on these two accomplishments alone, London’s book For the Common Good can be regarded as a major contribution to the literature on human research ethics and its philosophical foundations.
Acknowledgements
This research was supported by the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS, NIH, or US government.
Footnotes
1. London does not use these exact terms to describe the problems with the current framework, but I think they clearly express what he intends to say. A dilemma is spurious if it is false or illusory, and he claims that many dilemmas in clinical research are based on false assumptions, so by implication, these dilemmas would be false. A moral failing is a failure to live up to some moral standard or value. I think London would say a moral failing of the current framework is that it pays inadequate attention to substantive questions of justice.
2. London uses the term “research ethics” but I will modify it with the word “human” because research ethics is a broad subject that includes human and animal research ethics, as well as ethics related to authorship, collaboration, peer review, publication, data integrity and management, record keeping, and mentoring (Shamoo and Resnik 2022).
References
- Beauchamp Thomas L. 2005. The origins and evolution of the Belmont Report. In: Childress James F., Meslin Eric M., and Shapiro Harold T. (eds.). Belmont Revisited: Ethical Principles for Research with Human Subjects. Washington, DC: Georgetown University Press: 12–27.
- Beauchamp Thomas L. and Childress James F. 2019. Principles of Biomedical Ethics, 8th ed. New York, NY: Oxford University Press.
- Council for International Organizations of Medical Sciences. 2016. International Ethical Guidelines for Health-Related Research Involving Humans. Geneva: Council for International Organizations of Medical Sciences.
- Department of Health and Human Services. 2017. Code of Federal Regulations, Title 45, Part 46: Protection of Human Subjects.
- Freedman Benjamin. 1987. Equipoise and the ethics of clinical research. New England Journal of Medicine 317: 141–145.
- Gutmann Amy and Thompson Dennis. 1998. Democracy and Disagreement. Cambridge, MA: Harvard University Press.
- London Alex J. 2000. The ambiguity and the exigency: Clarifying “standard of care” arguments in international research. The Journal of Medicine and Philosophy 25(4): 379–397.
- London Alex J. 2001. Equipoise and international human-subjects research. Bioethics 15(4): 312–332.
- London Alex J. 2005. Justice and the human development approach to international research. Hastings Center Report 35(1): 24–37.
- London Alex J. 2006. Reasonable risks in clinical research: A critique and a proposal for the integrative approach. Statistics in Medicine 25(17): 2869–2885.
- London Alex J. 2007. Two dogmas of research ethics and the integrative approach to human-subjects research. The Journal of Medicine and Philosophy 32(2): 99–116.
- London Alex J. 2012. A non-paternalistic model of research ethics and oversight: Assessing the benefits of prospective review. The Journal of Law, Medicine & Ethics 40(4): 930–944.
- London Alex J. 2019. Social value, clinical equipoise, and research in a public health emergency. Bioethics 33(3): 326–334.
- London Alex J. 2022. For the Common Good: Philosophical Foundations of Research Ethics. New York, NY: Oxford University Press.
- London Alex J. and Zollman Kevin J. 2010. Research at the auction block: Problems for the fair benefits approach to international research. Hastings Center Report 40(4): 34–45.
- Mastroianni Anna C. and Kahn Jeffrey P. 2001. Swinging on the pendulum: Shifting views of justice in human subjects research. Hastings Center Report 31(3): 21–28.
- Miller Franklin G. and Brody Howard. 2002. What makes placebo-controlled trials unethical? The American Journal of Bioethics 2(2): 3–9.
- National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Available at: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html.
- Participants in the 2001 Conference on Ethical Aspects of Research in Developing Countries. 2002. Fair benefits for research in developing countries. Science 298(5601): 2133–2134.
- Rawls John. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
- Rawls John. 1993. Political Liberalism. New York, NY: Columbia University Press.
- Rawls John. 2001. The Law of Peoples. Cambridge, MA: Harvard University Press.
- Resnik David B. 2007. The Price of Truth: How Money Affects the Norms of Science. New York, NY: Oxford University Press.
- Resnik David B. 2009a. The investigator-subject relationship: A contextual approach. Philosophy, Ethics, and Humanities in Medicine 4: 16.
- Resnik David B. 2009b. Playing Politics with Science: Balancing Scientific Independence and Government Oversight. New York, NY: Oxford University Press.
- Resnik David B. 2018. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer.
- Shamoo Adil E. and Resnik David B. 2022. Responsible Conduct of Research, 4th ed. New York, NY: Oxford University Press.
- Wertheimer Alan. 2010. Rethinking the Ethics of Clinical Research: Widening the Lens. New York, NY: Oxford University Press.
- World Medical Association. 2013. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects (2013 revision). Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed: April 1, 2022.