Abstract
This essay analyzes the concept of public trust in science and offers some guidance for ethicists, scientists, and policymakers who use this idea to defend ethical rules or policies pertaining to the conduct of research. While the notion that the public trusts science makes sense in the abstract, it may not be sufficiently focused to support the various rules and policies that authors have tried to derive from it, because the public is not a uniform body with a common set of interests. Well-focused arguments that use public trust to support rules or policies for the conduct of research should specify a) which public is being referred to (e.g. the general public or a specific public, such as a particular community or group); b) what this public expects from scientists; c) how the rule or policy will ensure that these expectations are met; and d) why it is important to meet these expectations.
Keywords: public trust, science, ethics, policy, public expectations
Introduction
Public trust is a familiar buzzword in research ethics and policy. Numerous codes of professional conduct, reports from government agencies and scientific organizations, scholarly articles, editorials, monographs, and textbooks cite the need to promote public trust in science as a reason for developing or revising ethical standards, ensuring compliance with the law, overseeing research activities, educating the public about science, and teaching students and trainees about the responsible conduct of research (American Society for Biochemistry and Molecular Biology 2010, American Public Health Association 2010, Institute of Medicine 2001, Association of American Medical Colleges 2001, Committee on Science, Engineering and Public Policy 2009, Obama 2009, National Institutes of Health 2010, Schroeder et al 1989, Alberts and Shine 1994, Kass et al 1996, Haerlin and Parr 1999, DeAngelis 2000, Gurney and Sass 2001, Kennedy 2004, Mastroianni 2008, Yarborough and Sharp 2009, Shamoo and Resnik 2009).
The idea that it is important to promote public trust in scientific research has been used by so many different authors in so many different contexts that it is in danger of becoming a platitude. Even worse, overuse of this concept may lead to ambiguity. Everyone seems to be in favor of promoting public trust in research, but what does this mean? Like “integrity,” “ethics,” and “human dignity,” the words “public trust” sound good and evoke strong emotions, but they may be difficult to pin down when examined closely. This essay will analyze the concept of public trust in science and offer some guidance for ethicists, scientists, and policymakers who would use this idea to defend ethical rules or policies related to the conduct of research.
Some readers may regard an analysis of public trust in research as superfluous, since the slogan seems to achieve its rhetorical objectives quite well. If people respond to arguments that appeal to the importance of promoting public trust in scientific research, then why do we need to say anything more about this topic? There are three good reasons for probing public trust in science in greater depth. First, it is important to have a better understanding of words and phrases used in scholarly and public debates, even when we think we know what they mean. Clarity is a virtue. Second, if people have different interpretations of public trust in scientific research, they may be using the phrase to support contradictory policies or recommendations. Lack of clarity can lead to inconsistency in thought and action (Rosenberg 1995). Third, the published literature does not contain a careful analysis of public trust in research. Whitbeck (1995) has written cogently about trust in research, but she does not focus on public trust. Several empirical studies (Hall et al 2006, Lind et al 2007, McDonald et al 2008) have provided some useful information about trust between research subjects and investigators, but again, these do not focus specifically on public trust in science.
What is Trust?
Trust is a complex concept with different meanings and nuances. It is safe to say that there is no universally agreed upon analysis of trust, although scholars from various disciplines, including philosophy, social psychology, economics, law, and political science, have written extensively about it (Baier 1986, Gambetta 1988, Fukuyama 1995, Blomqvist 1997, Govier 1997, Hardin 2006). The following is a brief summary of some of the key insights from this substantial literature.
First, trust is a relationship between or among people (Blomqvist 1997, Govier 1997). Trust may be explicit, for example, when people make promises or form contracts, or implicit, for example, when motorists enter a traffic intersection (Tullberg 2008, Blomqvist 1997, Becker 1996). Trust among people can be concrete, such as trust in a particular doctor, or abstract, such as trust in the medical profession (Becker 1996). Though trust applies primarily to relationships between individuals, it also applies to relationships between individuals and groups, and among groups. For example, an individual may trust a bank to keep his money safe, and the bank may trust its tellers not to embezzle money. Though people sometimes speak of trusting things other than people, such as trusting a thermometer or money, use of the word “trust” in these contexts is derivative, because trust applies primarily to human relationships (Hardin 2001).
Second, the main purpose of trust is to facilitate cooperative social interactions, such as business, family relations, friendship, medical care, sports, and education, which depend on shared expectations of behavior (Fukuyama 1995, Whitbeck 1995, Govier 1997). Imagine how difficult driving an automobile would be if motorists could not trust each other to obey the traffic laws and drive safely. Social institutions, organizations, and groups depend on a great deal of trust among people (Fukuyama 1995).
Third, trust involves risk-taking. Someone who trusts another person expects him or her to use skills and sound judgment to take care of something the trusting person cares about, such as money, health, legal affairs, or secrets (Baier 1986). However, the trusting person does not know with certainty that the entrusted person will act as expected. The entrusted person could act negligently, or worse, cause deliberate harm to the trusting person. Because the entrusted person could harm, manipulate, or exploit the trusting person, the trusting person is in a position of vulnerability relative to the entrusted person (Blomqvist 1997, Baier 1986). Since trust involves risk-taking and vulnerability, it can be easily damaged if the entrusted party does not meet the trusting person’s expectations (Baier 1986, Blomqvist 1997).
Fourth, people decide to trust others because they judge them to be trustworthy (Hardin 2006). The trusting person requires evidence that the entrusted person has qualities, such as competence, experience, sound judgment, reliability, good will, or professional or social standing, which merit the placement of trust in him or her (Tullberg 2008). Trust is different from faith because it is usually based on some evidence of trustworthiness, whereas faith involves belief without evidence. Trustworthiness can be earned, enhanced, or lost (Baier 1986, Blomqvist 1997).
Fifth, trust can generate ethical and legal duties (Baier 1986). The entrusted person has an obligation to do what is expected of him or her in the relationship. For example, a doctor has a legal and ethical obligation to maintain the patient’s confidentiality and to act in the interests of the patient. Someone who signs a lease for an apartment is ethically and legally obliged to pay the rent on time. When trust implies ethical or legal obligations, it can be viewed as a form of promise-keeping: the entrusted person promises to act according to what is expected of him or her.
Trust in Scientific Research
From this brief discussion of trust, it is not difficult to see how trust can play an important role in the conduct of scientific research. First, trust is vital to promoting cooperative relationships and activities among researchers, such as collaborative work, publication, peer review, sharing data, replication of research results, teaching, and mentoring (Whitbeck 1995, Committee on Science, Engineering and Public Policy 2009). When researchers work together on a project, they trust that they will receive appropriate credit, such as authorship. When authors submit a paper or grant application for peer review, they trust that reviewers will review it competently and fairly, and will not violate confidentiality (Shamoo and Resnik 2009). When scientists read a paper published in a professional journal, they trust that the research was performed as described, that relevant information for evaluating the methods and results has been disclosed, and that the data have not been fabricated or falsified (Whitbeck 1995, Alberts and Shine 1994). Students and trainees trust that their mentors and supervisors will give them useful education and guidance, and will not exploit or mistreat them (Whitbeck 1995).
Second, trust is important in research with human subjects. Developing trust between investigators and human subjects is essential for recruiting and retaining research participants: people will not enroll in a study if they cannot trust the researchers, and they may withdraw from a study if they lose trust in the researchers (Mastroianni 2008). Additionally, trust plays a key role in the informed consent process and in protecting subjects from harm (Kass et al 1996, Miller and Weijer 2006). A number of empirical studies have documented the different ways that human subjects trust investigators: human subjects trust that investigators will follow the procedures described in the protocol (Lind et al 2007); protect them from harm (Hall et al 2006); maintain privacy and confidentiality (Hall et al 2007, Neidich et al 2008); act with competence and professionalism (McDonald et al 2008); not be unduly influenced by financial interests (Weinfurt et al 2008); disclose important information (Hall et al 2006); and follow ethical rules for the study (Lind et al 2007). Human research subjects express concrete trust in particular researchers and institutions, as well as abstract trust in scientific research as a profession (McDonald et al 2008, Lind et al 2007, Weinfurt et al 2008).
Third, trust is important in facilitating interactions between scientists and granting agencies, journals, universities, human research or animal research review boards, and other organizations or institutions involved in funding, supporting, and overseeing science. For example, government research agencies, such as the National Institutes of Health or National Science Foundation, require institutions to adhere to a variety of policies concerning the conduct of research and financial accountability as a condition of awarding a grant, and they trust that grant-supported researchers will follow these rules (Committee on Science, Engineering and Public Policy 2009, Shamoo and Resnik 2009). Journals trust that authors will follow rules concerning authorship, conflict of interest, data integrity, prior publication, copyright permissions, and so forth (Shamoo and Resnik 2009). Human research review boards trust that investigators will conduct their research as it has been described in the protocol and will promptly report unanticipated problems or significant harms to human subjects caused by the research (Shamoo and Resnik 2009).
So, trust is vital to many different relationships in scientific research. What about the relationship between the public and scientific researchers? How does this fit with the conception of trust described above? To answer this question, it is necessary to explore what is meant by “public trust.”
What is Public Trust in Scientific Research?
The word “public” comes from the Latin word publicus, which means “of or by the people.” According to Webster’s, the word “public” has at least sixteen different meanings (Dictionary.com 2010). One of the most common meanings of “public” is the population as a whole or society (Dictionary.com 2010). When scientists, scholars, and commentators speak of “public trust in scientific research,” they usually mean “the trust that society places in scientific research.” The following passages from influential articles and reports illustrate this use of the phrase “public trust”:
Society trusts that scientific research results are an honest and accurate reflection of a researcher’s work (Committee on Science, Engineering and Public Policy 2009: ix).
The public must be able to trust the science and scientific process informing public policy decisions (Obama 2009).
The mission of the NIH Public Trust Initiative (PTI) is to enable the public to understand and to have full confidence in the research that NIH conducts and supports across the country and throughout the world (National Institutes of Health 2010).
Academic medicine is entrusted by society with the responsibility to undertake several important social missions toward improving the health of the public, including education, patient care, and research. This trust is given implicit authority by generous public funding and considerable autonomy (Schroeder et al 1989: 803).
The idea that society places trust in science seems so obvious that it hardly requires justification. Numerous authors have argued that society trusts scientific researchers in many different ways. First, society trusts researchers with public resources (Schroeder et al 1989). Many academic researchers work for government agencies under contracts or grants, and have access to laboratories, equipment, and materials paid for by the government. Researchers that work for private industry usually have been educated in institutions supported, in part, by the government (Shrader-Frechette 1994). To maintain society’s trust, scientists must exhibit good stewardship of research resources, adhere to ethical standards, and generate knowledge that has useful applications (Shrader-Frechette 1994).
Second, society trusts researchers to provide knowledge and expertise that can inform public policy. Scientific research plays an important role in policy debates concerning public health, pollution, climate change, economic development, substance abuse, energy utilization, urban planning, airline safety, and K-12 education (Obama 2009). Scientists serve on government advisory bodies and regulatory boards, and give expert testimony to legislative committees. Scientific testimony is often a major factor in criminal cases, products liability litigation, and medical malpractice lawsuits (Resnik 2009).
Third, society trusts scientists to provide knowledge that will yield beneficial applications in medicine, industry, engineering, technology, agriculture, transportation, communication, and other domains (Resnik 2009). Trust in scientific researchers is especially important in gaining public acceptance of new technologies, such as genetically engineered organisms, gene therapy, nuclear power, stem cell therapy, and nanotechnology. Public trust is essential when the risks and benefits of new technologies are not well understood, because the public must rely on scientists to make informed judgments about those new technologies (Siegrist 2000, Siegrist et al 2007).
Divergent Public Expectations of Science
While the notion that society generally places trust in science makes sense, drawing specific ethical and policy implications from this idea can be problematic because society is a highly diverse body, not a single individual or organization. The public is actually composed of many different publics (National Institutes of Health 2010). Different people in society may have different expectations of science, and therefore place different kinds of trust in science.
Consider, for example, the different expectations placed on scientific research on new medications to treat terminal illnesses, such as HIV/AIDS and cancer. Patients dying from terminal illnesses have expected—and demanded—that researchers, sponsors, and regulatory agencies make new medications available as quickly as possible. During the 1980s and 1990s, HIV/AIDS patients successfully lobbied the Food and Drug Administration to change its regulations and policies to make experimental treatments more readily available and to accelerate the approval of new medicines (Dresser 2001). These changes included fast-track product approval and expanded access to investigational drugs (Schüklenk 2000). Recently, a cancer patient group, the Abigail Alliance, fought an unsuccessful legal battle to gain unrestricted access to medications early in the testing process, just after the conclusion of Phase I safety studies (Jacobson and Parmet 2007). Although some terminally ill patients have argued that science should provide rapid access to new, potentially life-saving treatments, scientists, government officials, companies, and public health advocates have taken a different view, arguing that new medications should be made available only after undergoing rigorous testing. Accelerating the process of making drugs available to the public may benefit some patients but could also cause considerable harm to others and compromise the quality of clinical research data (Schüklenk and Lowry 2009).
Another example in which different members of society have had different expectations of science involves the enrollment of women in clinical trials. For many years, women were routinely excluded from clinical trials because it was thought that aspects of female physiology, such as the menstrual cycle, could compromise the clarity and reliability of the data. Women’s health suffered as a result of this practice, because studies of new medications often did not include data pertaining to females (Merton 1993). Women’s health advocacy groups convinced the National Institutes of Health to develop policies for including women in clinical trials, which have greatly increased the participation of women in research (Dresser 2001). With regard to enrolling women in clinical trials, investigators and research sponsors, on the one hand, and women’s health groups, on the other, had different expectations of science. Investigators and sponsors were interested in obtaining clear, reliable data to support the approval of new treatments, whereas women’s health groups were interested in access to clinical trials. Though a compromise has been reached that includes women in clinical trials while maintaining scientific rigor, these demands were at odds for many years. Today, there remain controversies concerning the inclusion of pregnant women in clinical research, because pregnancy can have an impact on a woman’s response to medications and experimental treatments given during pregnancy can be harmful to the fetus (Dresser 2001, Bowen 2002).
A third example of divergent public expectations occurs when investigators study identifiable communities, such as Ashkenazi Jews, Amish people, Native Americans living on a particular reservation, or African tribes (Minkler 2004). Community members usually expect that the results of research will benefit them. For example, researchers who study a disease that is prevalent in a particular community may develop methods of diagnosing, preventing or treating the disease, which may benefit the community. However, investigators may make discoveries that could stigmatize or embarrass members of the community when research results are publicly disclosed. For example, investigators may discover that a gene linked to alcoholism is prevalent in a particular community, or that the community has above average rates of prostitution, drug abuse, or domestic violence. The question of how to handle these research results can lead to conflicting expectations of science. Community members may expect investigators not to publish embarrassing results, and may feel betrayed if these results are published. However, scientists, funding agencies, and other members of society may expect that all findings will be published, and they may be upset if research results are suppressed (Sharp and Foster 2002).
Animal research is yet another example of divergent expectations of science (Shamoo and Resnik 2009). Though most people in society acknowledge that animal experimentation makes an important contribution to human health and well-being, a significant percentage of the population objects to some types of animal research, such as risky experiments on primates, and a small, but vocal, minority believes that all animal experiments are immoral (Moore 2003). Since people have different expectations of scientists when it comes to research with animals, building trust among people who strongly support animal research may undermine the trust of those who strongly oppose it or have some objections to it.
There are many other areas in which the public has divergent expectations of scientific research, such as genetic engineering of plants and animals, human cloning, human embryonic stem cell research, and nanotechnology (Priest 2000, Nisbett 2004, National Science Foundation 2010). I will not discuss all of these here. I believe that the above examples show, however, that public trust in research is multifaceted, because different members of society have different expectations of research. It may not be possible for scientists to promote public trust consistently when the public has divergent expectations of science, because honoring one type of expectation, such as quick access to new medications to treat terminal illnesses, may interfere with a different expectation, such as protecting public health and safety. Therefore, arguments that ground ethical rules for the conduct of research on public trust are often self-contradictory, because the public may have divergent expectations of science.
A supporter of public trust in science as a policy goal could object that my critique goes too far, because there is a core set of expectations of science that the vast majority of people in society share. Virtually everyone expects science to be honest and truthful, and most people expect that scientists will not intentionally bias their results (Laine et al 2007). Most people also expect that scientists will not undertake experiments on people without their consent, or intentionally harm or exploit human research subjects. A vast majority of the members of society probably also expect that scientists will not abuse animals in research (Shamoo and Resnik 2009). Scientists who fail to honor these core expectations betray the public trust.
While there may be a core set of uncontroversial public expectations of science, it may not be easy to distinguish these expectations from peripheral and controversial ones. Most people would consider publishing the results of research to be a fundamental expectation of scientists, but as we saw earlier, even this expectation can generate ethical controversy when it conflicts with the goal of protecting communities from harm. Most people would also consider scientific rigor to be a fundamental expectation of research, but as we saw earlier, this expectation can generate controversy when it conflicts with the goal of making potentially life-saving medications quickly available to patients.
To distinguish between the core expectations of science and peripheral expectations, it would be useful to survey members of the public to determine what they consider to be the most important methodological and ethical qualities of scientific research. While there have been many different surveys about the public’s understanding of and attitudes toward science, few of these have focused specifically on expectations related to the conduct of research, such as honesty, objectivity, and rigor. However, the National Science Foundation’s Science and Engineering Indicators provide useful data related to this topic (National Science Foundation 2010). Since 1972, the National Science Foundation has surveyed Americans to obtain information on their understanding of and attitudes toward science and technology. These surveys have consistently found support for and confidence in science, but a lack of understanding of important scientific concepts, such as natural selection, genetics, and radiation (Miller 2004). The most recent survey lends credence to the idea that the public expects science to be honest, reliable, and objective. When asked about what makes something scientific, 80% said “conclusions based on solid evidence,” 73% said “carefully examining different interpretations of results,” and 67% said “replication of results by other scientists” (National Science Foundation 2010). While the National Science Foundation’s surveys provide some evidence for a set of core public expectations of science, they did not focus specifically on issues related to scientific integrity. Clearly, more research is needed in this area.
Strengthening Public Trust Arguments
If the above critiques of the concept of public trust in science have some merit, the role of appeals to public trust in ethical and policy debates involving the conduct of science needs to be reexamined. The idea that the public trusts science, which makes sense in the abstract, may not be sufficiently focused to support the various ethical rules and policies that authors have tried to derive from it, because the public is not a uniform body with a common set of interests. A well-focused argument that uses the need to promote public trust in science as a reason for adopting specific rules or policies should satisfy four requirements.
First, because there are many different publics, with different goals and expectations, the argument should clearly state which public is being referred to (e.g. the general public or a specific public, such as a particular community or group). Arguments that make opaque references to the public could support inconsistent rules or policies. Second, because even the same public may have many different expectations of science, the argument should clearly state what it is that this public expects from scientists (e.g. integrity in research, socially beneficial results, protection of human subjects, etc.). Third, because the purpose of the rules or policies is to promote public trust, the argument should clearly explain how the rules or policies will help scientists fulfill the public’s expectations. Ideally, the argument should also include some evidence that the public actually has these expectations, and that the rules or policies are likely to assure the public that these expectations will be met. Fourth, because not all of the public’s expectations of science are equally meritorious, the argument should explain why it is important to meet the public’s expectations in this instance.
The strongest public trust arguments will satisfy these criteria. For example, consider a public trust argument for rules and policies concerning conflict of interest (COI) in scientific research. One could argue that scientific granting agencies should adopt COI rules and policies to promote the public’s trust in science because 1) the general public expects science to be trustworthy and reliable; 2) COI rules or policies are likely to help promote trustworthiness and reliability in research; and 3) trustworthiness and reliability are important expectations that the general public has of science (Association of American Medical Colleges 2001).
Conclusion
Various authors and organizations have appealed to the importance of promoting public trust in science as a reason for adopting rules or policies pertaining to the conduct of research. One potential flaw in these arguments is that they are poorly focused, because the public is not a uniform body with a common set of interests. Well-focused arguments that use public trust to support rules or policies should specify a) which public is being referred to; b) what this public expects from scientists; c) how the rule or policy will ensure that these expectations are met; and d) why it is important to meet these expectations. Additional empirical research on the public’s expectations of science will be useful in developing rules and policies and strengthening these arguments.
It is also worth noting that trust-based arguments constitute an important, but not the only, strategy for justifying rules and policies pertaining to the conduct of research. One can argue that other moral considerations, such as social utility, respect for human rights, honesty, integrity, and justice, also support ethical standards for the conduct of research (Shamoo and Resnik 2009). Arguments that appeal to these other moral considerations might have merit even when trust-based arguments fall short.
Acknowledgments
This research was supported by the Intramural Program of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS, NIH, or U.S. government.
References
- Alberts B, Shine K. Scientists and the integrity of research. Science. 1994;266:1660. doi: 10.1126/science.7992048.
- American Public Health Association. Ethical Guidelines. 2010. Available at: http://www.apha.org/programs/education/progeduethicalguidelines.htm. Accessed: March 2, 2010.
- American Society for Biochemistry and Molecular Biology. Code of Ethics. 2010. Available at: http://www.asbmb.org/Page.aspx?id=70&terms=ethics. Accessed: March 2, 2010.
- Association of American Medical Colleges. Protecting Subjects, Preserving Trust, Promoting Progress: Policy and Guidelines for Individual Financial Interests in Human Subjects Research. 2001. Available at: http://www.aamc.org/research/coi/firstreport.pdf. Accessed: March 2, 2010.
- Baier A. Trust and anti-trust. Ethics. 1986;96:231–60.
- Becker L. Trust as non-cognitive security about motives. Ethics. 1996;107:43–61.
- Blomqvist K. The many faces of trust. Scandinavian Journal of Management. 1997;13:271–86.
- Bowen A. Research involving pregnant women. In: Amdur R, Bankert E, editors. Institutional Review Board Management and Function. Boston: Jones and Bartlett; 2002. pp. 380–82.
- Committee on Science, Engineering and Public Policy. On Being a Scientist. 3rd ed. Washington, DC: National Academy Press; 2009.
- DeAngelis C. Conflict of interest and the public trust. Journal of the American Medical Association. 2000;284:2237–8. doi: 10.1001/jama.284.17.2237.
- Dictionary.com. Public. Dictionary.com Unabridged. Random House, Inc; 2010. Available at: http://dictionary.reference.com/browse/public. Accessed: May 12, 2010.
- Dresser R. When Science Offers Salvation. New York: Oxford University Press; 2001.
- Fukuyama F. Trust: The Social Virtues and the Creation of Prosperity. New York: Free Press; 1995.
- Gambetta D, editor. Trust: Making and Breaking Cooperative Relationships. Oxford: Blackwell; 1988.
- Govier T. Social Trust and Human Communities. Montreal: McGill-Queen’s University Press; 1997.
- Grady C, Hampson L, Wallen G, Rivera-Goba M, Carrington K, Mittleman B. Exploring the ethics of clinical research in an urban community. American Journal of Public Health. 2006;96:1996–2001. doi: 10.2105/AJPH.2005.071233.
- Gurney S, Sass J. Public trust requires disclosure of potential conflicts of interest. Nature. 2001;413:565. doi: 10.1038/35098242.
- Haerlin B, Parr D. How to restore public trust in science. Nature. 1999;400:499. doi: 10.1038/22867.
- Hall M, Camacho F, Lawlor J, DePuy V, Sugarman J, Weinfurt K. Measuring trust in medical researchers. Medical Care. 2006;44:1048–53. doi: 10.1097/01.mlr.0000228023.37087.cb.
- Hardin R. Trust. In: Becker L, Becker C, editors. Encyclopedia of Ethics. 2nd ed. New York: Routledge; 2001. pp. 1728–31.
- Hardin R. Trust. New York: Polity; 2006.
- Institute of Medicine. Preserving Public Trust: Accreditation and Human Research Participant Protection Programs. Washington, DC: National Academy Press; 2001.
- Jacobson P, Parmet W. A new era of unapproved drugs: the case of Abigail Alliance v Von Eschenbach. Journal of the American Medical Association. 2007;297:205–8. doi: 10.1001/jama.297.2.205.
- Kass N, Sugarman J, Faden R, Schoch-Spana M. Trust, the fragile foundation of contemporary biomedical research. Hastings Center Report. 1996;26(5):25–9.
- Kennedy D. Clinical trials and public trust. Science. 2004;306:1649. doi: 10.1126/science.1107657.
- Lind U, Mose T, Knudsen L. Participation in environmental health research by placenta donation—a perception study. Environmental Health. 2007;6:36–43. doi: 10.1186/1476-069X-6-36.
- Mastroianni A. Sustaining public trust: falling short in the protection of human research participants. Hastings Center Report. 2008;38(3):8–9. doi: 10.1353/hcr.0.0012.
- McDonald M, Townsend A, Cox S, Paterson N, Lafrenière D. Trust in health research relationships: accounts of human subjects. Journal of Empirical Research on Human Research Ethics. 2008;3(4):35–47. doi: 10.1525/jer.2008.3.4.35.
- Merton V. The exclusion of pregnant, pregnable, and once-pregnable people (a.k.a. women) from biomedical research. American Journal of Law and Medicine. 1993;19:369–451.
- Miller D. Public understanding of, and attitudes toward, scientific research: what we know and what we need to know. Public Understanding of Science. 2004;13:273–294.
- Miller P, Weijer C. Trust based obligations of the state and physician-researchers to patient-subjects. Journal of Medical Ethics. 2006;32:542–47. doi: 10.1136/jme.2005.014670.
- Minkler M. Ethical challenges for the “outside” researcher in community-based participatory research. Health Education and Behavior. 2004;31:684–97. doi: 10.1177/1090198104269566.
- Moore D. Public lukewarm on animal rights. The Gallup Organization; May 21, 2003. Available at: http://www.gallup.com/poll/8461/public-lukewarm-animal-rights.aspx. Accessed: March 13, 2010.
- National Institutes of Health. NIH Public Trust. 2010. Available at: http://publictrust.nih.gov/index.cfm. Accessed: May 12, 2010.
- National Science Foundation. Science and Engineering Indicators 2010. 2010. Available at: http://www.nsf.gov/statistics/seind10/c/cs1.htm. Accessed: May 12, 2010.
- Neidich A, Joseph J, Ober C, Ross L. Empirical data about women’s attitudes toward a hypothetical pediatric biobank. American Journal of Medical Genetics Part A. 2008;146A:297–304. doi: 10.1002/ajmg.a.32145.
- Nisbett M. Public opinion about stem cell research and human cloning. Public Opinion Quarterly. 2004;68:131–54.
- Obama B. Memorandum for the Heads of Executive Departments and Agencies: Scientific Integrity. March 9, 2009. Available at: http://www.whitehouse.gov/the_press_office/Memorandum-for-the-Heads-of-Executive-Departments-and-Agencies-3-9-09/. Accessed: May 12, 2010.
- Priest S. US public opinion divided over biotechnology? Nature Biotechnology. 2000;18:939–42. doi: 10.1038/79412.
- Resnik D. Playing Politics with Science. New York: Oxford University Press; 2009.
- Rosenberg J. The Practice of Philosophy. Upper Saddle River, NJ: Prentice-Hall; 1995.
- Schroeder S, Zones J, Showstack J. Academic medicine as a public trust. Journal of the American Medical Association. 1989;262:803–12.
- Schüklenk U. Access to Experimental Drugs in Terminal Illness. London: Informa Healthcare; 2000.
- Schüklenk U, Lowry C. Terminal illness and access to Phase 1 experimental agents, surgeries and devices: reviewing the ethical arguments. British Medical Bulletin. 2009;89:7–22. doi: 10.1093/bmb/ldn048.
- Shamoo A, Resnik D. Responsible Conduct of Research. 2nd ed. New York: Oxford University Press; 2009.
- Sharp R, Foster W. Community involvement in ethical review of genetic research: lessons from American Indian and Alaska Native populations. Environmental Health Perspectives. 2002;110(Supplement 2):145–48. doi: 10.1289/ehp.02110s2145.
- Shrader-Frechette K. Ethics of Scientific Research. Lanham, MD: Rowman and Littlefield; 1994.
- Siegrist M. The influence of trust and perceptions of risks and benefits on the acceptance of gene technology. Risk Analysis. 2000;20:195–203. doi: 10.1111/0272-4332.202020.
- Siegrist M, Keller C, Kastenholz H, Frey S, Wiek A. Laypeople’s and experts’ perception of nanotechnology hazards. Risk Analysis. 2007;27:59–69. doi: 10.1111/j.1539-6924.2006.00859.x.
- Tullberg J. Trust—the importance of trustfulness versus trustworthiness. The Journal of Socio-Economics. 2008;37:2059–71.
- Weinfurt K, Hall M, Dinan M, DePuy V, Friedman J, Allsbrook J, Sugarman J. Effects of disclosing financial interests on attitudes toward clinical research. Journal of General Internal Medicine. 2008;23(6):860–6. doi: 10.1007/s11606-008-0590-4.
- Whitbeck C. Truth and trustworthiness in research. Science and Engineering Ethics. 1995;1:403–16. doi: 10.1007/BF02583258.