Author manuscript; available in PMC: 2006 Jun 29.
Published in final edited form as: J Empir Res Hum Res Ethics. 2006 Mar;1(1):43–50. doi: 10.1525/jer.2006.1.1.43

Normal Misbehavior: Scientists Talk About the Ethics of Research

Raymond De Vries, Melissa S. Anderson, Brian C. Martinson
PMCID: PMC1483899  NIHMSID: NIHMS10418  PMID: 16810336

Abstract

Those concerned with protecting the integrity of science generally focus on the serious but rare infractions of falsification, fabrication, and plagiarism (FFP). While violations of FFP are clear threats to the quality of scientific work and to public trust in science, are they the behaviors that researchers themselves find most troubling? Noticing that scientists are seldom asked to report their perceptions of the behaviors that pose problems for the enterprise of science, we conducted six focus groups with researchers from major research universities. A total of 51 scientists participated in our focus-group discussions, which lasted from 1.5 to 2 hours each. We found that while researchers were aware of the problems of FFP, in their eyes misconduct is generally associated with more mundane, everyday problems in the work environment. These more common problems fall into four categories: the meaning of data, the rules of science, life with colleagues, and the pressures of production in science. Focusing on the “normal misbehaviors” that are part of the ordinary life of researchers allows us to see how the organization of science generates both compliance with and deviance from ethical norms.

Keywords: scientific misbehavior, integrity, scientists’ perceptions, deviance


Research misconduct makes news. In late 2005 and early 2006, the scientific community and the public were in an uproar over the fabrication of data related to the creation of patient-specific stem cells by Korean researchers (Chong & Normile, 2006; Couzin, 2006; Kolata, 2005). The stem cell case is not unique: over the past few years the national media have reported other examples of data fabrication (Chang, 2002; Kintisch, 2005) along with cases of insufficient protection of human subjects (Evans, Smith & Willen, 2005; Argetsinger, 2001), fraudulent use of government research grants (Wysocki, 2005), conflicts of interest (Meier, 2005; Stolberg, 2000), and purposeful misinterpretation of research findings (Wade, 2002).

Not only do these cases make spectacular headlines, but, ironically, they give the appearance of confirming the integrity of science: wrongdoers are caught and disciplined, assuring the public that the bad apples of science cannot long survive. But there is another, more important story to tell about the behavior of scientists. This story begins not with the egregious violations of a few, but with behaviors that are more common and more worrisome to researchers, if less visible to the public. More than 20 years ago, Chubin (1985) pointed out that a number of behaviors—ranging from the serious (plagiarism and fabrication) to the not-so-serious (improper acknowledgment of collaborators)—slow scientific progress, undermine trust in the research process, waste public funds, and increase external regulation of science. He also observed that these behaviors are an expected outgrowth of the organization of professional life. They are as much an issue for science as they are for other professions such as law, medicine, and business. Chubin’s description of “research malpractice” is provocative, but it has not generated empirical studies of the behavior of scientists, nor has it redirected the focus of policymakers from serious violations of the norms of science to mundane misbehaviors. Interestingly, those who know science best—scientists themselves—are almost never asked to describe the behaviors they regard as most threatening to the integrity of their work.1

In this paper, we report what researchers told us about misconduct. Using data from a series of focus groups, we describe the kinds of behaviors working scientists believe to be most threatening to the integrity of the research enterprise.2 The observations of these scientists about misbehavior expand the discussion of research misconduct in at least two important ways.

First, in terms of policy, these observations move the conversation from the organizational level to the lab. At present, policy concerning research misconduct is debated among professional associations and government agencies. In 2002, the Office of Research Integrity (ORI) proposed a wide-ranging survey intended to gather empirical evidence of a variety of problems that can undermine research integrity (Holden, 2002). The Federation of American Societies for Experimental Biology (FASEB) and the Association of American Medical Colleges (AAMC) formally objected to the ORI proposal.3 Fearing increased oversight and regulation, the FASEB and the AAMC wanted the ORI to limit its research to the current and terse definition of misconduct used by the federal government: “falsification, fabrication, or plagiarism” (FFP) that represents “a significant departure from accepted practices” and that is “committed knowingly, willingly or recklessly” (Research Misconduct, 2002). Interestingly, in September 2005 the ORI announced that it had contracted with The Gallup Organization to conduct a survey limited to measuring the extent of FFP.4 While it is true that the definition of misconduct as FFP describes “actions that are unambiguous, easily documented, and deserving of stern sanctions” (Cohen, 2005) and offers the added benefit of being similar to definitions used in other countries,5 we believe that policies intended to reduce misconduct must be informed by what researchers see as the behaviors that hamper the production of trustworthy science.

Second, scientists’ reports on the types and effects of misbehavior in research serve to highlight a blind spot in the field of research ethics. It is a curious fact of the organization of intellectual life that research ethicists focus largely on the protection of human and animal subjects of research; the behavior of researchers (apart from their direct treatment of subjects) has yet to capture their imagination. This lack of interest may be the result of the ordinariness of misbehavior; we were told by one research ethicist that study of the poor behavior of researchers is far less intellectually stimulating than the conundrums of consent and conflicts of interest. It is also likely that exclusive focus on FFP limits interest in misconduct because, on average, the ORI sanctions only 13 individuals each year for these transgressions (Cohen, 2005). Listening to scientists allows us to learn whether research subjects are placed in jeopardy by behaviors other than FFP and if so, the nature and extent of those dangers.

Method

We conducted six focus groups, two at each of three major research universities. The universities represented public and private sectors and were geographically dispersed. At each institution, one focus group included only associate professors, and the other was made up of both assistant professors and postdoctoral fellows. A total of 51 scientists participated in the focus-group discussions, which lasted between 1.5 and 2 hours each. Participants were recruited from a wide range of academic disciplines in the biomedical, clinical, biological, and behavioral sciences; we recruited using departmental websites to generate personal email invitations. More scientists than could be accommodated volunteered to participate. We restricted the groups to no more than 10 participants, and, in order to minimize participants’ reluctance to discuss context-related issues, we constructed the groups in such a way that participants were from different academic departments. The groups represented considerable diversity in race/ethnicity, gender, and disciplinary affiliation. Our study was IRB approved.

Results

Nearly all of our subjects were aware of, and mentioned, the problems of FFP, but the majority considered violations of this sort relatively uncommon. One respondent noted:

“I think that [FFP is] a really small part … I think those kind of ethical issues we actually don’t deal with very often. But there are a lot of daily things that go on …”

Another respondent described a successful colleague who is not “terribly ethical,” pointing out that this person’s misconduct included only a “little bit of FFP;” she believed that the more troublesome behavior involved:

“… abusing … post docs, claiming things that—taking like credit, you know, like credit for lots of things that aren’t yours.”

Often we heard, “In my area, FFP is not the issue, it is …” followed by a description of a more mundane, everyday problem in the lab or with the research team. These more common, everyday problems fall into four categories:6 (1) the meaning of data, (2) the rules of science, (3) life with colleagues, and (4) the pressures of production in science.

As we analyzed our focus group data, we came to realize that the everyday problems of scientists are often associated not just with ordinary human frailties,7 but with the difficulty of working on the frontier of knowledge. The use of new research techniques and the generation of new knowledge create difficult questions about the interpretation of data, the application of rules, and proper relationships with colleagues. Like other frontiersmen and -women, scientists are forced to improvise and negotiate standards of conduct. Nowhere is this more visible than in the difficulties scientists face in the handling of data.

The Meaning of Data

Our respondents were clearly worried about the quality of their own and their colleagues’ data, but they were not overly concerned with data that are purposely manipulated. Rather, they were troubled by problems with data that lie in what they see as a “gray area,” problems that arise from being too busy or from the difficulty of finding the line between “cleaning” data and “cooking” data. One scientist said:

“It’s a question of over-commitment. These famous people are so busy, I think they are mostly ethical, they don’t … violate FFP, but they don’t sit for an hour and talk to their students about keeping a lab notebook. In fact they probably, you know, don’t even look at the raw data, they just see the final figures and paper.”

And then there are the problems that arise when researchers are asked to confirm their data:

“Okay, you got the expected results three times on week one on the same preparation, and then you say, oh, great. And you go to publish it and the reviewer comes back and says, ‘I want a clearer picture,’ and you go and you redo it—guess what, you can’t replicate your own results… . Do you go ahead and try to take that to a different journal … or do you stop the publication altogether because you can’t duplicate your own results? … Was it false? Well, no, it wasn’t false one week, but maybe I can’t replicate it myself… there are a lot of choices that are gray choices… They’re not really falsification.”

It is not always easy for researchers to decide when uncorrected errors in the data become outright falsification. How do scientists actually clean their data? They often rely on their experience, cleaning out unanticipated findings and preserving what they “know” they would find:

“One gray area that I am fascinated by … is culling data based on your ‘experience.’ … there was one real famous episode in our field … [where] it was clear that some of the results had just been thrown out … [When] queried [the researchers] … said, ‘Well we have been doing this for 20 years, we know when we’ve got a spurious result …’ [When that happens] … Do you go back and double check it or do you just throw it out … [and] do you tell everybody you threw it out? I wonder how much of that goes on?”

One young scientist described the advice she was given by a more senior colleague:

“I was defending my master’s thesis and I was doing a poster presentation, and the external examiner came and had a look at some of my graphs. And he said, ‘You know, well I’d be much more convinced by your data if you’d chopped off the last two data points …’ I was like, well, I wasn’t sure that you could do that kind of thing (laughter) … for me it’s being honest about what you found and … my work may be more convincing had I lopped off the last two data points, but those two data points may be more interesting than something that has happened before.”

If the purpose of science is to generate new knowledge, the meaning of the new data generated in that quest will necessarily be difficult to discern, requiring interpretation, inference, the sorting of “good” and “bad” data, and decisions about the use of controls and statistical tests. Scientists are aware of the opportunity to slant the data that these decisions afford, and they remain unclear about the best ways to make and report these decisions.

The Rules of Science

The work of scientists is increasingly governed by layers of rules intended, among other things, to protect animal and human subjects, prevent the misuse of grant funds, and control the use of harmful materials. Our respondents noted that this plethora of rules—many of which they find to be unnecessary and intrusive—can actually generate misconduct:

“If you ask why are the rules being bent, it’s, in some cases, because too many rules have been implemented that obstruct you getting the necessary things done …. there get to be so many rules and you’re doing anything you can to dodge around those rules without totally stepping over the line … they implement more rules and then there’s more individuals that go, like, ‘This is a ridiculous rule, how do I get around that?’”

A case in point involves rules about mingling grant funds:

“For instance, you have the two grants. I have to buy two bottles of the same chemical because something bought by this NIH grant can’t be used for the project sponsored by other than NIH. So as many as you have grants, you have to have the same, yes, the same bottle of the same chemicals. And of course, you have to sign that ‘Yes, this came from the funds used for this project. That’s why I’m buying this.’ But of course I use it for something else.”

Scientists complained about the many requirements imposed by Institutional Review Boards (IRBs). A specialist in emergency medicine described a situation where the rules of informed consent seemed to work against good research:

“… half the physicians in our department will give Prochlorperazine for headache, half of them give Droperidol for headache … [We] have no reason to choose one over the other. So we want to do a study where we just compare the two. And … we have to get consent [even though] … the patient is going to get one of the medications anyway … And just by the fact that … you’re handing them a consent form and you’re saying they’re going to be in this study, they think, implicit in that is … some danger … otherwise, why would they be asking me? But clearly there’s no danger. They’re going to get one or the other drug anyway, and yet now they’re having to read three pages and sign a form, and so I’m sure there are people who try to get around that because it’s a ridiculous request.”

Life with Colleagues

Science is a social endeavor. Scientists must deal with their own and their colleagues’ frailties, and they must find ways to negotiate relationships and to sort out their responsibilities to each other. Several respondents commented on problems associated with accurate record keeping, not exactly misconduct in itself, but often implicated in cases of FFP:

“I think it’s really unfortunate that there’s so many rules about how we use radioactivity, how we use these animals, but there really aren’t many guidelines that train scientists on how to design experiments, how to keep notebooks. And it’s almost as if young scientists think that, ‘Oh, this is a pain, you know, let’s just do it and not think about it, and you’re just pestering me and you’re expecting too much.’ And it’s extremely frustrating as someone that’s running a lab.”

We also learned of problems with letters of recommendation. Most interesting was this report:

“There was one person that is a very famous scientist that I won’t name, who, when I was working in my post doc, had a ter…—he still has the reputation. If he liked you, if you were really good, he wrote you a lousy letter of recommendation so you would stay in his lab forever. If you got a good recommendation from this guy, you don’t want to hire this person, because he really wanted to get rid of them.”

Our respondents told us that good science requires tending to relationships:

“[N]ewcomers [can] … get on the wrong side of somebody like the chairperson, or head of the department with a lot of power and then there is trouble … we have to … navigate very carefully in order not to burn bridges, and derail long-term research projects … It is very complex.”

Following up on this comment, a researcher described how competition may corrupt relationships:

“… along those lines I think [we must be] aware … not to cut people out. It is like, go out of your way to include people that might have made any kind of contribution whatsoever … in my field in particular [there are] innumerable instances where people are cooperating well until something really spectacular is found. And then all of a sudden people are just lopped-off at the knees … literally on the day something was found, it just [starts] to crumble and … people just don’t speak to each other anymore, or [are] trying to block publications, just sort of a mess.”

Pressures of Production in Science

Like other occupations, science requires its practitioners to come up with tangible products. The pressure to produce—coupled with uncertainties about ownership of ideas, the proper way to assess scientific output (quantity or quality?), the management of competing interests, and the division of labor in research—is associated with a number of behaviors that do not quite reach the threshold of FFP but nevertheless are regarded by scientists as misconduct. The problems mentioned by members of our focus groups included: manipulation of the review system, (improper) control of research by funders, difficulties in assigning authorship, exploitation of junior colleagues, unreported conflicts of interest, the theft of ideas from conference papers and grant proposals, publishing the same thing twice (or more), withholding of data, and ignoring teaching responsibilities.

Several respondents had stories of junior colleagues who had been exploited. We heard many accounts of professors using the work of graduate students and post-docs without permission or attribution, of professors pitting post-docs against each other, and of post-docs being forced to sign agreements promising that they would never work in the area of their sponsoring professor.

This fear of competition from one’s students and post-docs highlights a structural dilemma in the training of scientists: to succeed in science it is important to attract the most talented graduate students and new PhDs, but these bright young researchers, once trained, become one’s competition:

“When I left my post doc, I was told, ‘don’t compete with me, you won’t win.’ And you know it was a given that you wouldn’t, you wouldn’t win … for that particular person I knew up front that he was not an easy person. I knew it … There was no question, this is mine, and it is like signing a paper, ‘this is mine. I’ll teach you what I know, but any particular intellectual property, don’t mess with me.’”

Ideas are the currency of science; our respondents expressed grave concern over having their ideas stolen:

“I’m always wary of submitting grants to study sections, because those people who sit on the study sections, it’s not unknown for them to take your ideas, kill your grant, and then take and do it. And I think all of us have either had that happen to them or know somebody who had that happen to them.”

The need to produce is often translated into the need to get funding. Our respondents worried about the kind of compromises they and their colleagues are forced to make simply to keep the funds flowing:

“For example, a particular study that I’m involved in is about drugs to … offset the effect of radiation … [The] company that makes [the] drug … does not want a certain control group in the study and will not fund the study if that control group is there … . there’s nothing illegal about [this], and I know for a fact it happens all the time and that’s the way it goes. It’s because government can’t pony up enough money to do all the clinical research that needs to get done. In this … study … the individual who’s going to be principal investigator is an untenured assistant professor … And you know, screwing around with this drug company, negotiating the study, has cost her a lot of time, and she, it’s going to make it harder for her to get tenure. And the pressure is clearly on her to knuckle under. I mean, she could have started that study months ago if she’d just said, sure, I’ll do whatever you want, give me the money.”

From Focus Groups to a National Sample

How can we be certain that the stories we heard in our focus groups accurately represent what is going on in laboratories and research centers in the United States? We had the rare opportunity to test our findings against data from a national sample of scientists: using what we learned in the focus groups, together with data from earlier studies, we developed a survey which we distributed to a sample of scientists funded by the NIH.8 We presented our respondents with a list of 33 misbehaviors ranging from the fairly innocuous (have you signed a form, letter, or report without reading it completely?) to the more serious (have you falsified or “cooked” research data?) and asked two questions:

  1. In your work, have you observed or had other direct evidence of any of the following behaviors among your professional colleagues, including postdoctoral associates, within the last three years?

  2. Please tell us if you yourself have engaged in any of these behaviors within the last three years?

Because reports of what others are doing are not a reliable measure of the incidence of a behavior—several respondents may report the same incident—we use self-reports to describe the prevalence of misbehavior. In a few places we do use respondents’ accounts of the behavior of their colleagues, but only to allow a glimpse of scientists’ perception of a behavior’s prevalence.
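To make this reasoning concrete, here is a minimal, hypothetical sketch (in Python, with invented numbers; it is not the study’s analysis code) of how observer reports can count a single incident several times, while self-reports yield at most one report per actor:

```python
# Hypothetical illustration: 10 researchers, one incident of a given misbehavior.
# Researcher 3 committed it; researchers 1, 4, and 7 observed it.
respondents = list(range(10))
committed_by = {3}
witnessed_by = {1, 4, 7}

# Self-report item: "have you yourself engaged in this behavior?"
self_reports = [r in committed_by for r in respondents]
# Observer item: "have you observed this behavior among your colleagues?"
observer_reports = [r in witnessed_by for r in respondents]

self_rate = 100 * sum(self_reports) / len(respondents)          # 10.0: one actor in ten
observer_rate = 100 * sum(observer_reports) / len(respondents)  # 30.0: one incident counted three times

print(f"Self-reported prevalence: {self_rate:.1f}%")
print(f"Observer-reported rate:   {observer_rate:.1f}%")
```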

Our focus group data predicted well the responses from the national sample. For example, as our focus group participants suggested, we found FFP to be a minor problem (as indicated by self-reports of these behaviors): just 0.3% of our respondents admitted to falsifying data, and 1.4% admitted to plagiarism. However, as shown in Table 1, struggles with the meaning of data and the rules of science were common. The difficulties of life with colleagues were also associated with misbehavior. In this category we present both self-reports and reports of colleagues’ behavior: notice that while our respondents report that they themselves are not prone to exploit others, they perceive exploitation to be a common behavior among their colleagues. Finally, we found that the pressures of production influence the behavior of scientists. Here, too, we present both self-reports and reports of colleagues’ behavior, to document a widespread perception of the extent of a behavior, a perception that influences scientists’ willingness to share ideas with others.

TABLE 1.

Percentage of a national sample of scientists reporting forms of misconduct* (N = 3,247).

Meaning of Data
  Dropping observations or data points from analyses based on a gut feeling that they were inaccurate: 15.3
  Inadequate record keeping related to research projects: 27.5
  Cutting corners in a hurry to complete a project: 23.0
Rules of Science
  Ignoring minor details of materials-handling policies (biosafety, radioactive materials, etc.): 36.1
  Using funds from one project to get work done on another project: 51.7
Life with Colleagues
  Providing an overly positive or overly negative letter of recommendation: 20.8
  Using one’s position to exploit others: Self 1.6; Colleagues 46.3
Pressures of Production in Science
  Changing the design, methodology or results of a study in response to pressure from a funding source: 15.5
  Withholding details of methodology or results in papers or proposals: 10.8
  Using another’s ideas without obtaining permission or giving due credit: Self 1.4; Colleagues 45.7

* Unless otherwise specified, percentages reflect “yes” answers to the question: “Please tell us if you yourself have engaged in any of these behaviors within the last three years?” “Colleagues” indicates agreement with this statement: “I have observed or had other direct evidence of this behavior among my professional colleagues, including postdoctoral associates, within the last three years.”

Conclusion: Normal Misbehavior

In his classic work, The Rules of Sociological Method, Durkheim made the controversial argument that crime was “normal” (as opposed to “pathological”). He noted that although crime, like pain, “has nothing desirable about it,” nevertheless “it is a normal physiological function … it plays a useful and irreplaceable role in life” (Durkheim, 1982, p. 107). Our conversations with scientists lead us to conclude that a certain amount of “normal misbehavior” is common in the dynamic field of science. This is not to suggest that these behaviors should be condoned, but, following Durkheim, we see these behaviors as playing “a useful and irreplaceable role.”9

First, they allow scientists to deal with uncertainties about proper conduct that characterize work on the frontiers of knowledge, uncertainties that arise naturally when interpreting data, responding to rules, relating to colleagues, and establishing oneself as a professional scientist. As we have seen, scientists do find ways to live with the uncertainties of their work, but they express discomfort about the strategies they and their colleagues adopt.

Second, and equally significant, normal misbehaviors show us the “pinch points” in the organization of science. It is particularly important to notice that when scientists talk about behaviors that compromise the integrity of their work, they do not focus on FFP; rather, they mention more mundane (and more common) transgressions, and they link these problems to the ambiguities and everyday demands of scientific research. When policymakers limit their concern to the prevention of infrequently occurring cases of FFP, they overlook the many ways scientists compromise their work in an effort to accommodate to the way science is funded and scientists are trained. Durkheim (1997a; 1997b) linked deviant behaviors to anomie, a condition of normlessness in which the rules governing behavior are unclear or uncertain; Merton (1938) extended Durkheim’s work by calling attention to the way social structures create strain for certain individuals, causing them to find novel, often deviant, ways to succeed. When we listen carefully to scientists’ discussions of wrongdoing, it becomes clear that the structures within which they labor place some researchers in situations of strain. Our focus group data demonstrate that any effort to reduce misbehavior and misconduct must pay attention to the nature of scientific work and to the internal processes of science (see also Martinson, Anderson & De Vries, 2006).

Finally, scientists’ conversations about normal misbehavior point to the need for policymakers and research ethicists to take seriously the extraordinary and ordinary conduct of researchers. Concern with the protection of the subjects of research can no longer be limited to the creation of better systems of surveillance and reporting. We are aware that mandated training in the “responsible conduct of research” (RCR) focuses on FFP and the normal misbehavior identified by our focus group participants, but the very ordinariness of the latter shields it from the attention of national policymakers and institutional officials. As Chubin noted in 1985, normal misbehavior can be a source of harm, not just to research subjects, but to the institutions that sponsor and conduct research and to the overall enterprise of science. When we look beyond FFP we discover that the way to better and more ethical research lies in understanding and addressing the causes of normal misbehavior. This is not a call for increased surveillance of the mundane work of researchers, a response that would create undue and problematic interference in the research process (Chubin, 1985, p. 86). Rather, the presence of normal misbehavior in science should direct attention to the social conditions that lead to both acceptable and unacceptable innovations on the frontiers of knowledge.

Acknowledgments

This research was supported by the Research on Research Integrity Program, an ORI/NIH collaboration, grant # R01-NR08090. De Vries’ work on this project was also supported by grant # K01-AT000054-01 (NIH, National Center for Complementary & Alternative Medicine).

Footnotes

1. Although work in this area is beginning: see Al-Marzouki, Roberts, Marshall & Evans, 2005.

2. Brief reference to the findings presented in this manuscript was published in a “Science News” item written by Jim Giles, “Researchers break the rules in frustration at review boards,” Nature, 438(7065): 136–137.

3. The letter from the FASEB protesting the survey can be found at: http://www.faseb.org/opar/news/docs/ltr_11x12x2.pdf

4. In its September 2005 newsletter, the ORI noted that the self-administered questionnaire, which will be sent to 5,200 principal investigators funded by the U.S. Public Health Service, “incorporates extensive comments received from the AAMC and the FASEB” (“Research Misconduct Study to be Conducted by Gallup,” 2005). Subjects will be asked to report on research misconduct they have seen in their departments over the past three years, a data-gathering strategy that has been sharply criticized as a poor measure of the incidence of misconduct (Anderson, 1993).

5. See, for example, the most recent definition used in the UK, promulgated by the Wellcome Trust: “The fabrication, falsification, plagiarism or deception in proposing, carrying out or reporting results of research or deliberate, dangerous or negligent deviations from accepted practices in carrying out research” (The Wellcome Trust, 2002). See also Klarreich, 2001.

6. These categories correspond quite closely with those in a taxonomy created by Helton-Fauth et al., 2003; see page 209.

7. Chubin (1985, p. 81) offers a list of these, including “careerism,” “sloppiness,” and “psychopathy.”

8. For a more complete description of this survey, see Martinson, Anderson & De Vries, 2005.

9. Our assertion here also builds on Merton’s (1976) analysis of the social value of “ambivalence.” He pointed out that in many social organizations, including science, a “structure of social roles consist[ing] of arrangements of norms and counter norms [has] evolved to provide the flexibility of normatively acceptable behavior required to deal with changing states of a social relation” (p. 31).

Contributor Information

Raymond De Vries, University of Michigan.

Melissa S. Anderson, University of Minnesota.

Brian C. Martinson, HealthPartners Research Foundation.

References

1. Al-Marzouki S, Roberts I, Marshall T, Evans S. The effect of scientific misconduct on the results of clinical trials: A Delphi study. Contemporary Clinical Trials. 2005;26:331–337. doi: 10.1016/j.cct.2005.01.011.
2. Argetsinger A. Panel blames Hopkins in research death. The Washington Post. 2001 August 30:B03.
3. Chang K. Panel says Bell scientists faked discoveries in physics. New York Times. 2002 September 26:A1.
4. Chong S, Normile D. How young Korean researchers helped unearth a scandal. Science. 2006;311:22–23. doi: 10.1126/science.311.5757.22.
5. Chubin DE. Research malpractice. BioScience. 1985;35:80–89.
6. Cohen J. A word from the president: “Research integrity is job one.” AAMC Reporter. 2005 September. http://www.aamc.org/newsroom/reporter/sept05/word.htm (Accessed 6 January 2006).
7. Couzin J. And how the problems eluded peer reviewers and editors. Science. 2006;311:23–24. doi: 10.1126/science.311.5757.23.
8. Durkheim E. The Rules of Sociological Method [1895]. New York: Free Press; 1982.
9. Durkheim E. The Division of Labor in Society [1893]. New York: Free Press; 1997a.
10. Durkheim E. Suicide [1897]. New York: Free Press; 1997b.
11. Evans D, Smith M, Willen L. Drug industry human testing masks death, injury, compliant FDA. Bloomberg Report. 2005 November 2. http://quote.bloomberg.com/apps/news?pid=10000006&sid=aspHJ_sFen1s&refer=home# (Accessed 6 January 2006).
12. Helton-Fauth W, Gaddis B, Scott G, Mumford M, Devenport L, Connelley S, et al. A new approach to assessing ethical conduct in scientific work. Accountability in Research. 2003;10:205–228. doi: 10.1080/714906104.
13. Holden C. Planned misconduct surveys meet stiff resistance. Science. 2002;298:1549. doi: 10.1126/science.298.5598.1549.
14. Kintisch E. Researcher faces prison for fraud in NIH grant applications and papers. Science. 2005;307:1851. doi: 10.1126/science.307.5717.1851a.
15. Klarreich E. Wellcome Trust sets out fresh misconduct standards. Nature. 2001;412:667. doi: 10.1038/35089196.
16. Kolata G. Amid confusion, journal retracts Korean’s stem cell paper. New York Times. 2005 December 31:A8.
17. Martinson BC, Anderson MS, De Vries RG. Scientists behaving badly. Nature. 2005;435:737–738. doi: 10.1038/435737a.
18. Martinson BC, Anderson MS, De Vries RG. Scientists’ perceptions of organizational justice and self-reported misbehaviors. Journal of Empirical Research on Human Research Ethics. 2006;1(1):51–66. doi: 10.1525/jer.2006.1.1.51.
19. Meier B. Implant program for heart device was a sales spur. New York Times. 2005 September 27:A1.
20. Merton RK. Social structure and anomie. American Sociological Review. 1938;3:672–682.
21. Merton RK. Sociological Ambivalence and Other Essays. New York: The Free Press; 1976.
22. Research misconduct study to be conducted by Gallup. Office of Research Integrity Newsletter. 2005 September;13(2):4.
23. Stolberg S. Biomedicine is receiving new scrutiny as scientists become entrepreneurs. New York Times. 2000 February 20:A26.
24. Wade N. A new look at old data may discredit a theory on race. New York Times. 2002 October 8:F3.
25. The Wellcome Trust. Statement on the handling of allegations of research misconduct. London, UK; 2002. Available at: http://www.wellcome.ac.uk/doc_WTD002756.html (Accessed 5 March 2006).
26. Wysocki B. Cash injection: As universities get billions in grants, some see abuses. Wall Street Journal. 2005 August 16:A1.
