Abstract
“Ethical disasters” or egregious violations of professional ethics in medicine often receive substantial amounts of publicity, leading to mistrust of the medical system. Efforts to understand wrongdoing in medical practice and research are hampered by the absence of a clear taxonomy. This article describes the authors’ process of developing a taxonomy based on: (1) reviews of academic literature, ethics codes, government regulations, and cases of wrongdoing; (2) consultation with experts in health law and healthcare ethics; and (3) application of the taxonomy to published cases of wrongdoing in medical research and practice. The resulting taxonomy includes 14 categories of wrongdoing in medical practice, and 15 categories of wrongdoing in medical research. This taxonomy may be useful to oversight bodies, researchers who seek to understand and reduce the prevalence of wrongdoing in medicine, and librarians who index literature on wrongdoing.
Introduction
“Ethical disasters” or egregious violations of professional ethics in medicine often receive substantial amounts of publicity, which feeds mistrust of the medical system, which in turn discourages access to health care and participation in medical research.1–3 Well-known examples of ethical disasters in medical research include the Tuskegee syphilis study,4 the death of Jesse Gelsinger in a nontherapeutic genetic experiment,5 and U.S. government radiation experiments on patients.6 Highly publicized examples of alleged misbehavior in clinical practice include the performance of unnecessary cardiac surgeries to generate profits7 and trading prescription opiates for sex.8
In recent years, professional associations such as the Association of American Medical Colleges (AAMC),9–11 Senators such as Chuck Grassley of Iowa,12 and journalists13–16 have taken an interest in conflicts of interest, including kickbacks, industry funding of research, and physician-owned diagnostic facilities. Drawing on social science data,17 the AAMC has argued that such conflicts may compromise the care of patients and the integrity of medicine.9–11 In principle, they could contribute to misbehaviors such as conducting risky research with inadequate informed consent or performing unnecessary procedures.
While receiving markedly less attention, other factors have also been hypothetically linked to professional wrongdoing, such as being in a position of authority,18,19 oversight failures,20,21 and serving especially vulnerable populations.22,23 However, as with conflicts of interest, few data demonstrate actual connections between these environmental factors and wrongdoing in medicine. In the absence of reliable data on the factors that cause or facilitate wrongdoing in medicine, it is difficult to craft appropriate prevention plans; plans developed without such data risk burdening researchers and providers while remaining ineffective.
Efforts to understand wrongdoing in medicine are thwarted by the lack of a standard and appropriate framework for indexing publications (e.g., using MeSH headings), reporting instances of wrongdoing, and sampling cases of wrongdoing for inclusion in research projects. MEDLINE’s hierarchic MeSH structure contains no appropriate headings. The Federation of State Medical Boards (FSMB) labels wrongdoing as “not applicable” in 66% of cases and as “unprofessional conduct” in 34% of cases; neither label refers to a specific misbehavior. This compounds other problems in assessing the frequency of wrongdoing in medicine: legal data on cases that settle out of court are confidential; wrongdoing is frequently under-reported;24–28 and when it is reported, it is often counted only when individuals are actually disciplined.24,29,30
The UPWARD program (Understanding and Preventing Wrongdoing in American Healthcare Research and Delivery) seeks to understand the complex variables that causally contribute to wrongdoing in order to identify appropriate prevention strategies involving education, policy, and oversight. The program is currently supported by two grants, a BF Charitable Foundation seed grant and an NIH (NCRR/ORI) R21 award, which will enable the analysis of cases of wrongdoing in medicine and health research. Three hundred cases have been screened for inclusion in the project. However, developing a valid and reliable taxonomy of wrongdoing in medical practice and research has proven to be a necessary first step to guide literature reviews, sampling, and statistical analyses (e.g., comparing factors predictive of fraud versus improper prescribing).
For research, oversight, and library database indexing purposes, an adequate taxonomy would have the following properties. It would be:
Comprehensive, enabling the categorization of all forms of wrongdoing in both medical practice and research;
Focused on enforceable, noncontroversial norms of professional ethics that are valid nationwide; and
Sufficiently clear and concise (with nonredundant categories) to support adequate inter-rater reliability.
The authors’ extensive review of the literature identified several lists of wrongdoing in medical practice or research; however, none possessed all of these properties.30–33 For example, the FSMB, which collects reports of actions against physicians by state licensing boards, provides a list that is unclear, because “not applicable” is its most commonly used category (presumably indicating that no state law applied); redundant, because it draws on the laws of 50 states, which overlap substantially even as they differ in name and scope; and incomplete, because it ignores most violations of research ethics.
Accordingly, the authors decided to develop taxonomies of wrongdoing in medical practice and research that would meet the criteria listed above. This article describes the process used to develop the taxonomies, presents results, and explores possible uses for the taxonomies.
Literature Search Methods
Conducting a literature review in a new area of inquiry is particularly challenging. Within the MEDLINE database, MeSH headings index articles using a hierarchic tree structure, thus facilitating searches. However, in a young field that lacks a clear taxonomy of topics, some relevant MeSH headings may not exist, and it may be difficult to identify which existing headings are most appropriate. At the same time, it would be difficult, if not foolish, to develop a taxonomy without first consulting published papers on the subject matter. The literature review and content analysis are presented below in a linear fashion; in fact, however, the process was dialectic: a literature review was conducted using a wide variety of keywords with intuitive appeal, preliminary analyses were performed, the results of those analyses informed a return to the literature, and further analyses were then done.
Literature reviews were conducted using MEDLINE. Two kinds of searches were conducted: the first aimed at identifying existing taxonomies or lists of wrongdoing in medicine, and the second at identifying articles describing the specific kinds of wrongdoing that would be included in a taxonomy. The authors searched for existing taxonomies using two major sources of data: (1) reports by oversight boards, watchdog groups, and agencies; and (2) studies of wrongdoing. Using material from the resulting searches and the lists of wrongdoing contained in reports by oversight boards, numerous searches for specific misbehaviors were conducted.
Reports by Oversight Boards, Watchdog Groups, and Agencies
In the realm of medical practice, this involved examining reports from the FSMB,34 Public Citizen’s Health Research Group,35 and the National Practitioner Data Bank.36–38 In the realm of research ethics, this involved reviewing guidance, reports, and data from the Office for Human Research Protections (OHRP),39,40 the Office of Research Integrity (ORI),41 the Office of the Inspector General (OIG),42 and the Office of Laboratory Animal Welfare (OLAW).
Studies of Wrongdoing
The authors searched for review articles and studies of the kinds and frequencies of wrongdoing in medical research and practice, relying on MEDLINE and initially using combinations of very general terms. In the area of medical practice, the following general MeSH headings were used: “punishment,” “employee discipline,” “professional misconduct,” “clinical ethics,” and “professional ethics”; and the following general keywords: “misbehavior,” “professional misbehavior,” “ethics violations,” “medical misbehavior,” “physician misconduct,” “wrongdoing,” “unethical behavior,” and “discipline.” When results were too unfocused, search terms were combined with terms such as “frequency” and/or “physician.” In the area of medical research, the following general MeSH headings were used: “research integrity,” “research ethics,” and “scientific misconduct,” as well as general ethics terms such as “professional ethics” and “ethics” in combination with the terms “research,” “research subjects,” “human experimentation,” and “biomedical research,” supplemented by the keyword “research misconduct.”
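As an illustration only, searches of this kind can also be scripted against MEDLINE through NCBI’s E-utilities. The sketch below uses Biopython’s Entrez module with a hypothetical combined query and a placeholder contact address; it is not the exact procedure used for this study.

```python
# Illustrative sketch only: a scripted MEDLINE search combining a general
# MeSH heading with narrowing keywords, via Biopython's Entrez module.
# The query string and email address are placeholders, not the study's own.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # NCBI requires a contact address

query = ('"Professional Misconduct"[MeSH] '
         'AND (frequency[Title/Abstract] OR physician[Title/Abstract])')

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records matched; first IDs: {record['IdList'][:5]}")
```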
Taxonomic Content Analysis of Literature and Cases
Using results from these searches, a taxonomic content analysis was performed by the three authors using criteria the authors developed for an ideal taxonomy.43,44 The criteria stipulate that an ideal list will be:
Comprehensive, enabling the categorization of all forms of wrongdoing in both medical practice and research;
Focused on noncontroversial norms of professional ethics that are valid nationwide as evidenced by laws or codes of ethics and actual cases of enforcement or disciplinary actions; and
Sufficiently clear and concise (with nonredundant categories) to support adequate inter-rater reliability.
To ensure comprehensiveness and validate that medical professionals unambiguously viewed the listed behaviors as wrongdoing, the authors:
Re-reviewed previous lists and review articles to ensure all items within their lists could be subsumed within a category of the current taxonomy;
Examined medical ethics and research ethics codes, considering whether violations of codes were covered in the taxonomy;
Interviewed experts (two ethicists and three health lawyers) who reviewed the taxonomy; and
Ensured that all forms of wrongdoing identified in the 300 cases of wrongdoing in medicine screened by the UPWARD Project team thus far could be subsumed under a category in the taxonomy and, conversely, that all taxonomic categories were represented in at least one case (a minimal coverage check of this kind is sketched after this list).
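A minimal sketch of that coverage check, assuming the screened cases are stored as a mapping from case identifiers to assigned taxonomy codes (all identifiers and code assignments below are hypothetical):

```python
# Hypothetical coverage check: every assigned code belongs to the taxonomy,
# and every taxonomy category is supported by at least one screened case.
TAXONOMY = {f"P{i}" for i in range(1, 15)} | {f"R{i}" for i in range(1, 16)}

case_codes = {
    "case_001": {"P13"},          # e.g., improper prescribing
    "case_002": {"R14", "R13"},   # e.g., fabrication plus duplicate publication
    # ... remaining screened cases would be listed here
}

assigned = set().union(*case_codes.values())
outside_taxonomy = assigned - TAXONOMY     # codes that fit no category
unrepresented = TAXONOMY - assigned        # categories with no supporting case

print("Codes outside the taxonomy:", sorted(outside_taxonomy))
print("Categories not yet represented:", sorted(unrepresented))
```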
Testing Inter-rater Reliability
To ensure that the taxonomy categories were sufficiently clear to enable reliable use, the authors developed a slide presentation containing 50 excerpts from the 300 cases of wrongdoing screened for inclusion in the UPWARD study. Each slide aimed to present one discrete form of wrongdoing, with one to three slides representing each of the 29 forms of wrongdoing in the taxonomy. The slides were presented to three members of the UPWARD research team who had not been involved in the literature reviews, the content analysis, or the development of the taxonomy; they received training on the meaning of the taxonomy’s terms.
Raters were not told whether or how often each item appeared in the slides. The raters were given a score sheet and the taxonomy and were asked to identify the category of wrongdoing in the taxonomy represented in each slide. Their score sheets were then used to calculate (1) a percentage correct, using the responses intended by the authors as the “right” answers, and (2) inter-rater reliability scores among the three raters.
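A minimal sketch of this scoring step, assuming each rater’s responses and the authors’ intended answers are stored as parallel lists of category codes (the data shown are placeholders, not the study’s ratings):

```python
# Illustrative scoring sketch; the slides, answers, and ratings are placeholders.
intended = ["P13", "R14", "P7", "R2"]             # authors' intended category per slide
ratings = {
    "rater_1": ["P13", "R14", "P7", "R2"],
    "rater_2": ["P13", "R13", "P7", "R2"],
    "rater_3": ["P13", "R14", "P8", "R2"],
}

total = len(intended) * len(ratings)
correct = sum(
    response == answer
    for responses in ratings.values()
    for response, answer in zip(responses, intended)
)
print(f"Percentage correct: {correct}/{total} = {100 * correct / total:.1f}%")
```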
Literature Search Results
As the purpose of the literature review was not to generate a systematic review article but rather to generate a taxonomy using the dialectic approach described above, only general findings regarding MeSH headings are presented here; illustrative results are presented in the next section to justify the resulting taxonomy.
There is no overarching MeSH heading such as “wrongdoing” that captures the general domain of wrongdoing in medical practice and research. MeSH headings for related general keywords such as “ethics violations” or “physician misbehavior” do not exist, and when these are used as keywords the results are sparse (12 articles and one article returned, respectively). From an intuitive perspective, “professional misconduct” seems the most appropriate current MeSH heading. However, its use is more prominent in the UK and the Commonwealth of Nations (eight of the first ten MEDLINE entries are from UK or Commonwealth journals). Further, this MeSH heading does not provide specific forms of physician wrongdoing (e.g., violations of informed consent or privacy) as subheadings that a researcher could select.
Taxonomic Content Analysis
Table 1 presents a summary of the resulting taxonomies of wrongdoing in medical practice and research with examples of specific misbehaviors that fall within a general category. The authors found that all of the fundamental forms of wrongdoing described in their data sources (codes of ethics, oversight board reports, review articles, and the 300 cases of wrongdoing in the UPWARD study) could be subsumed under one of the resulting taxonomic categories. Conversely, they found that all taxonomic categories were represented in at least one published case of wrongdoing screened for the UPWARD study. The comprehensiveness and validity of the categories were further confirmed through consultation with the external experts.
Table 1. Taxonomy of wrongdoing

| I. Taxonomy of misbehavior in medical research | Taxonomy of clinical medical ethics violations by physicians |
|---|---|
| R1. Procedural violations | P1. Procedural violations |
| R2. Violation of privacy or confidentiality | P2. Violation of privacy or confidentiality |
| R3. Failure of informed consent | P3. Failure of informed consent |
| R4. Failure to provide oversight as required | P4. Failure to provide oversight as required |
| R5. Financial fraud or theft | P5. Financial fraud or theft |
| R6. Conflict of interest violation | P6. Conflict of interest violation |
| R7. Assault of participants | P7. Assault of patients |
| R8. Professional boundary violations | P8. Professional boundary violations |
| R9. Inappropriate interprofessional relationships | P9. Inappropriate interprofessional relationships |
| R10. Unjust or inappropriate participant recruitment, exclusion, or inclusion | P10. Unjust treatment of patients |
| R11. Substandard research or peer review | P11. Substandard or inappropriate patient care |
| R12. Inappropriate exposure to or management of the risks | P12. Practice of medicine while impaired or incompetent |
| R13. Inappropriate publication practice | P13. Abuse of prescribing privileges/violation of drug statutes |
| R14. Research misconduct (FFP) | P14. Illegal conduct indicating unfit character (not covered in the categories above) |
| R15. Inappropriate use, care, and maintenance of animals | |
Wrongdoing in Medical Practice
The authors identified 14 fundamental kinds of wrongdoing in medical practice. Table 2 shows a comparison of the taxonomy to existing taxonomies or lists. It is difficult to determine the degree of overlap among the taxonomies given that many categories from competing taxonomies are ambiguous or represent subcategories within the current system. Appendix A (available online at www.ajpmonline.org) presents various sources of data (the AMA code of ethics, MEDLINE search-term headings, and illustrative articles) that provide evidence that the categories include only noncontroversial norms of professional ethics that are valid nationwide.
Table 2. Taxonomy of wrongdoing in medical practice compared to alternative lists

| Taxonomy of clinical medical ethics violations by physicians^a | Federation of State Medical Boards (percentage of sanctions 1994–2002)^b | Public Citizen Health Research Group (percentage of sanctions 1990–1999)^b | Categories of the bases for physician discipline in Missouri^c |
|---|---|---|---|
| P1. Procedural violations | | | |
| P2. Violation of privacy or confidentiality | | | |
| P3. Failure of informed consent | | | |
| P4. Failure to provide oversight as required | | | |
| P5. Financial fraud or theft | | | |
| P6. Conflict of interest violation | | | |
| P7. Assault of patients | | | |
| P8. Professional boundary violations | | | |
| P9. Inappropriate interprofessional relationships | | | |
| P10. Unjust treatment of patients | | | |
| P11. Substandard or inappropriate patient care | | | |
| P12. Practicing while impaired or incompetent | | | |
| P13. Improper prescribing/violation of drug statutes | | | |
| P14. Illegal behavior indicating unfit character | | | |
Wrongdoing in Medical Research
Fifteen fundamental kinds of wrongdoing in medical research were identified. Table 3 contrasts the taxonomy to lists employed in two survey studies that assess the frequency of various misbehaviors. The lists of wrongdoing addressed in the two surveys included 56% and 69% of the categories in the authors’ taxonomy. Appendix B (available online at www.ajpmonline.org) presents sources of data supporting the validity of the taxonomy of wrongdoing in medical research.
Table 3. Taxonomy of Wrongdoing in Research Compared to Alternative Lists

| I. Taxonomy of Wrongdoing in Medical Research^a | II. Ethical Problems in Academic Research^b | III. Questionable Behaviors in Research^c |
|---|---|---|
| R1. Procedural violations | | |
| R2. Violation of privacy or confidentiality | | |
| R3. Failure of informed consent | | |
| R4. Failure to provide oversight as required | | |
| R5. Financial fraud or theft | | |
| R6. Conflict-of-interest violation | | |
| R7. Assault of participants | | |
| R8. Professional boundary violations | | |
| R9. Inappropriate interprofessional relationships | | |
| R10. Unjust or inappropriate participant recruitment, exclusion, or inclusion | | |
| R11. Substandard research or peer review | | |
| R12. Inappropriate exposure to or management of the risks | | |
| R13. Inappropriate publication practice | | |
| R14. Research misconduct | | |
| R15. Inappropriate use, care and maintenance of animals | | |

^a Examples of subcategories are meant to be illustrative, not exhaustive.
^b From a survey by Swayze et al.32 The survey clustered behaviors into three categories used by the National Academy of Sciences to delineate problematic behaviors in academic research. Their behaviors also included “Other: cheating in coursework,” which was excluded here because it appears to fall outside the scope of research; to the extent that it occurs in classroom research, it would likely be redundant with other categories (e.g., plagiarism or fabrication).
^c From Martinson et al.’s survey of scientists’ behaviors in research.33
Inter-Rater Reliability
Using the authors’ intended categorizations as normative, the three independent raters had a 92.6% (139/150) correct rate, with individual scores ranging from 88% to 96%. Raters categorized behaviors differently from the authors for several reasons: some kinds of wrongdoing typically involve other kinds of wrongdoing (e.g., procedural violations often involve some other substantive violation, and fraud often involves unnecessary procedures or substandard care); oversight failures may involve “guilt by association” with a subordinate’s wrongdoing; and some categories are distinct but similar and require careful determination and interpretation of facts (as often occurs in court cases), such as “boundary violations” versus “assault.” In many cases, the raters noted that they were torn between their chosen response and the “correct” response, suggesting that allowing raters to use more than one category to describe complex instances of wrongdoing would legitimately solve some of the encountered problems.
To determine inter-rater reliability, a free-marginal kappa score was calculated, which is appropriate for rating tasks that involve more than two raters freely choosing among a series of categoric responses. The kappa score for the three raters was 0.85, which is considered excellent, particularly for a task involving a large number of categories.
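For reference, the free-marginal (Brennan–Prediger/Randolph) formulation assumes chance agreement of 1/k for k categories. A minimal computational sketch follows, using placeholder ratings rather than the study’s data.

```python
from collections import Counter

def free_marginal_kappa(item_ratings, n_categories):
    """Free-marginal multirater kappa (Brennan-Prediger / Randolph form).

    item_ratings: one list of category labels per rated item, one label per rater.
    """
    n_raters = len(item_ratings[0])
    # Observed agreement: proportion of agreeing rater pairs, averaged over items.
    p_o = sum(
        sum(c * (c - 1) for c in Counter(labels).values()) / (n_raters * (n_raters - 1))
        for labels in item_ratings
    ) / len(item_ratings)
    p_e = 1.0 / n_categories  # chance agreement when marginals are unconstrained
    return (p_o - p_e) / (1.0 - p_e)

# Placeholder ratings: 3 raters, 4 slides, 29 possible taxonomy categories.
example = [["P13", "P13", "P13"], ["R14", "R14", "R13"],
           ["P7", "P7", "P7"], ["R2", "R2", "R2"]]
print(round(free_marginal_kappa(example, n_categories=29), 2))
```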
Discussion
A taxonomy was developed that is comprehensive, focused on well-recognized national professional standards, and sufficiently clear and nonredundant to support excellent inter-rater reliability. As noted above, raters disagreed in few instances, none of which indicated a need for revision of the taxonomy. Allowing raters to use more than one appropriate category to describe complex scenarios would resolve most issues.
In addition to supporting research on wrongdoing, the resulting taxonomy could be used to generate a hierarchic indexing system in MEDLINE and to improve reporting systems. As Table 2 indicates, within the sphere of medical practice, competing lists overlapped very little with the authors’ taxonomy because the taxonomy excludes ambiguous headings (e.g., “not applicable” and “discipline by another state”) and subsumes several misbehaviors under “procedural violations” (e.g., violations of previous agreements with the board and failure to meet continuing medical education [CME] requirements). Eliminating vague categories from frequency data reports would yield substantially more useful information (e.g., more precise frequency statistics and correlated data). The same is true in the realm of medical research: while current lists are less ambiguous, they omit important categories of wrongdoing, including failures to provide oversight as required and inappropriate use or care of animals. Reporting of wrongdoing in research, whether to bodies such as the NIH or within the scope of individual research projects, could also be improved with the taxonomy.
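One possible arrangement of such a hierarchic index is sketched below as a simple nested structure with the taxonomy categories as leaves; the grouping shown is illustrative only, not a proposed MeSH revision.

```python
# Illustrative sketch of how the taxonomy could seed a hierarchic index,
# analogous to a MeSH tree; the grouping is hypothetical, not an official proposal.
WRONGDOING_TREE = {
    "Wrongdoing in Medicine": {
        "Wrongdoing in Medical Practice": [
            "P1. Procedural violations",
            "P2. Violation of privacy or confidentiality",
            "P3. Failure of informed consent",
            # ... P4-P14 from Table 1
        ],
        "Wrongdoing in Medical Research": [
            "R1. Procedural violations",
            "R14. Research misconduct (FFP)",
            "R15. Inappropriate use, care, and maintenance of animals",
            # ... remaining R categories from Table 1
        ],
    }
}

def walk(tree, depth=0):
    """Print the tree with indentation, as a browsable index outline."""
    for heading, children in tree.items():
        print("  " * depth + heading)
        if isinstance(children, dict):
            walk(children, depth + 1)
        else:
            for leaf in children:
                print("  " * (depth + 1) + leaf)

walk(WRONGDOING_TREE)
```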
One limitation of the taxonomy is that it pertains primarily to the contexts of healthcare delivery and medical research. Physicians engage in other activities, such as preventive medicine, medical education, and administration. While the taxonomy does not speak to these directly, and each of these activities will have its own unique guidelines, the current taxonomies can provide a useful starting point in developing taxonomies in these other arenas. Just as the taxonomies of wrongdoing in healthcare practice and research overlap substantially (11 categories in each are highly analogous), the taxonomies can subsume by analogy most forms of wrongdoing in other arenas of medicine.
To take the example of the American College of Preventive Medicine’s Code of Ethics,45 most violations of its 11 “Principles for the Ethical Conduct of Physicians Engaged in Preventive Medicine” fit nicely into one of the 29 categories used in the current study. Some are obvious such as “avoiding conflicts of interest,” “nondiscrimination,” or “privacy and confidentiality”; violations of others such as “acting based on evidence” and “complementing personal limitations” might fit within “substandard research or care.”
The only principles that are not readily subsumed are those that might be described as aspirational: sincere but unenforced guides to action, such as exhibiting “positive health behaviors so as to be health role models for their communities and colleagues” (principle L). However, this limitation applies to the contexts of healthcare delivery and research as well: the current study focuses only on those violations of principles that are noncontroversially viewed by professional colleagues as wrongdoing and that are enforced. And while habitually eating nachos in front of a ballgame during spare time may fail to meet legitimate professional aspirations, it is not clear that it rises to the level of wrongdoing that is both prohibited and enforced by the profession. However, as professional standards evolve, aspirations may become strict and enforced obligations; when that happens, taxonomies of wrongdoing will need to be updated.
Acknowledgments
This project was made possible with support from grants UL1RR024992 and 1R21RR026313 from the NIH National Center for Research Resources (NCRR) and a seed grant from the BF Charitable Foundation.
The authors wish to thank Sandra Johnson and Shane Levesque for their assistance in the conceptualization of the taxonomy. Thanks also to Pamela Amsler, whose skillful database management has been invaluable, and the following members of the UPWARD Project, whose usage and feedback contributed substantially to the validation of the taxonomies: Emily Anderson, Kelly Carroll, Tyler Gibb, and Timothy Rubbelke. Finally, special thanks to Joan Killgore and Sreenivasa Rao Dandamudi, who offered valuable practical insights during the development of this project, and the AJPM peer reviewers who suggested substantial improvements to the paper.
Footnotes
No financial disclosures were reported by the authors of this paper.
References
1. Boulware LE, Ratner LE, Cooper LA, Sosa JA, LaVeist TA, Powe NR. Understanding disparities in donor behavior: race and gender differences in willingness to donate blood and cadaveric organs. Medical Care. 2002;40(2):85–95. doi: 10.1097/00005650-200202000-00003.
2. Shavers V, Lynch C, Burmeister L. Knowledge of the Tuskegee study and its impact on the willingness to participate in medical research studies. J Natl Med Assoc. 2000;92:563–572.
3. Gamble VN. A legacy of distrust: African Americans and medical research. Am J Prev Med. 1993;9:35–39.
4. Jones JH. Bad blood: the Tuskegee syphilis experiment. 2nd revised ed. New York, NY: Free Press; 1993.
5. Gelsinger P. Jesse’s intent. In: Bankert E, Amdur R, editors. Institutional review board: management and function. Sudbury, MA: Jones and Bartlett; 2006. pp. xi–xix.
6. Moreno JD. Undue risk: secret state experiments on humans. New York, NY: W.H. Freeman and Company; 2000.
7. Klaidman S. Coronary: a true story of medicine gone awry. New York: Scribner; 2007.
8. Hoffman E. Doctor pleads guilty in drugs-for-sex case. Post-Gazette. 2002 Jul 2.
9. Association of American Medical Colleges. The scientific basis of influence and reciprocity: a symposium. Washington, DC: Association of American Medical Colleges; 2007 Jun 12.
10. Association of American Medical Colleges. Protecting patients, preserving integrity, advancing health: accelerating the implementation of COI policies in human subjects research. 2008 Feb 26:1–87.
11. Association of American Medical Colleges. In the interest of patients: recommendations for physician financial relationships and clinical decision making. 2010 Jun 28:1–46.
12. Caputo I. Probing doctors’ ties to industry. Washington Post. 2009 Aug 18.
13. State legislatures, medical schools target gifts, financial conflicts [news in brief]. American Medical News. 2009 Apr 20.
14. Drug firms’ cash skews doctor classes. Milwaukee Journal Sentinel. 2009 Mar 29.
15. Goldstein J, Winslow R. Doctors urge limits on drug firm money. The Wall Street Journal. 2009 Apr 1:B4.
16. Rabin RC. Doctors urge end to corporate ties. The New York Times. 2009 Apr 2.
17. Dana J, Loewenstein G. A social science perspective on gifts to physicians from industry. JAMA. 2003 Jul 9;290(2):252–255. doi: 10.1001/jama.290.2.252.
18. Milgram S. Some conditions of obedience and disobedience to authority. Human Relations. 1965;18(1):57–76.
19. Trevino L, Butterfield K, McCabe D. The ethical context in organizations: influences on employee attitudes and behaviors. Business Ethics Quarterly. 1998;8(3):447–476.
20. Bramstedt KA, Kassimatis K. A study of warning letters to institutional review boards by the U.S. Food and Drug Administration. Clinical and Investigative Medicine. 2004;27(6):316–323.
21. Marshall E. Clinical research: shutdown of research at Duke sends a message. Science. 1999;284(5418). doi: 10.1126/science.284.5418.1246a.
22. Zimbardo P. The Lucifer effect. New York: Random House; 2007.
23. Bandura A, Underwood B, Fromson M. Disinhibition of aggression through diffusion of responsibility and dehumanization of victims. Journal of Research in Personality. 1975;9:253–269.
24. Yeon HB, Lovett DA, Zurakowski D, Herndon JH. Physician discipline. J Bone Joint Surg Am. 2006 Sep 1;88(9):2091–2096. doi: 10.2106/JBJS.F.00524.
25. Titus SL, Wells JA, Rhoades LJ. Repairing research integrity. Nature. 2008 Jun 19;453(7198):980–982. doi: 10.1038/453980a.
26. Marshall E. Scientific misconduct. How prevalent is fraud? That’s a million-dollar question. Science. 2000 Dec 1;290(5497):1662–1663. doi: 10.1126/science.290.5497.1662.
27. Suchetka D. Despite law, many U.S. hospitals aren’t reporting disciplinary action to a national database. 2010. www.cleveland.com/healthfit/index.ssf/2010/04/many_us_hospitals_arent_report.html. Accessed September 8, 2010.
28. DesRoches CM, Rao SR, Fromson JA, et al. Physicians’ perceptions, preparedness for reporting, and experiences related to impaired and incompetent colleagues. JAMA. 2010 Jul 14;304(2):187–193. doi: 10.1001/jama.2010.921.
29. Cardarelli R, Licciardone JC. Factors associated with high-severity disciplinary action by a state medical board: a Texas study of medical license revocation. J Am Osteopath Assoc. 2006 Mar 1;106(3):153–156.
30. Grant D, Alfred K. Sanctions and recidivism: an evaluation of physician discipline by state medical boards. Journal of Health Politics, Policy and Law. 2007;32(5):867–888. doi: 10.1215/03616878-2007-033.
31. Crites E. The regulation and discipline of physicians in Missouri. St. Louis Metropolitan Medicine. 2009;31(4):18–20.
32. Swayze J, Anderson MS, Lewis KS. Ethical problems in academic research. American Scientist. 1993;81:542–553.
33. Martinson B, Anderson M, DeVries R. Scientists behaving badly. Nature. 2005;435:737–738. doi: 10.1038/435737a.
34. Federation of State Medical Boards. Summary of 2008 board actions. 2009.
35. Wolfe SM, Bame A, Adler B. 20,125 questionable doctors disciplined by state and federal governments. Washington, DC: Public Citizen’s Health Research Group; 2000.
36. Ryzen E. The National Practitioner Data Bank: problems and proposed reforms. J Legal Med. 1992;13:409. doi: 10.1080/01947649209510892.
37. Walters T, Parsons J, Warnecke R, Almagor O, Budetti PP. How useful is the information provided by the National Practitioner Data Bank? Joint Commission J Quality & Patient Safety. 2003;29(8):416–424. doi: 10.1016/s1549-3741(03)29050-x.
38. Chandra A, Nundy S, Seabury SA. The growth of physician medical malpractice payments: evidence from the National Practitioner Data Bank. Health Affairs. 2005. doi: 10.1377/hlthaff.w5.240.
39. Borror K, Carome M, McNeilly P, Weil C. A review of OHRP compliance oversight letters. IRB: Ethics & Human Research. 2003;25(5):1–4.
40. Weil CRL, McNeilly P, Cooper K, Borror K, Andreason P. OHRP compliance oversight letters: an update. IRB: Ethics & Human Research. 2010;32(2):1–6.
41. Office of Research Integrity. Annual report. 2008.
42. Office of the Inspector General. Semiannual report to Congress. 2009.
43. Ryan GW, Bernard RH. Data management and analysis methods. In: Denzin NK, Lincoln YS, editors. Handbook of qualitative research. 2nd ed. Thousand Oaks, CA: Sage Publications; 2000. pp. 769–802.
44. Baxter LA. Content analysis. In: Montgomery BM, Duck S, editors. Studying interpersonal interaction. New York: The Guilford Press; 1991. pp. 235–254.
45. American College of Preventive Medicine. Code of ethics. Washington, DC: ACPM; 2009.