Journal of the Royal Society of Medicine
Editorial. 2004 Jun;97(6):259–261. doi: 10.1258/jrsm.97.6.259

The policing of science

Neville W Goodman
PMCID: PMC1079486  PMID: 15173324

‘Confucius told his disciple Tsze-kung that three things are needed for government: weapons, food and trust. If a ruler can't hold on to all three, he should give up the weapons first and the food next. Trust should be guarded to the end: “without trust we cannot stand”. Confucius' thought still convinces.’

A colleague asked my advice. Where could he find written evidence of our Trust's published standards of good research practice, and of our procedure for the investigation of alleged research misconduct? He needed it to fill in a research grant application. No group of humans is completely honest: dishonest window-cleaners steal DVD-players; dishonest scientists invent data. Both—if not caught—profit from their dishonesty. There is no question that we need mechanisms for making dishonest research less likely and for investigating cases that arise, but how much, at what stage, and who should do it?

Though not the first time the motives of researchers had been impugned, an important stimulus to thinking about research fraud was the publication of Betrayers of the Truth in the early 1980s.1 Stephen Lock, former editor of the BMJ, raised the matter on many occasions,2,3 and was especially critical of the UK's reluctance to do anything but talk.4 He favoured the Scandinavian (Nordic) approach5—formal teaching of research method to tackle the ‘jerks’, and proper mechanisms to tackle the ‘crooks’. But despite lots more talking, the formation of COPE (the Committee on Publication Ethics) by a group of editors, and the setting up of ‘research governance’,6 official mechanisms for preventing and detecting research fraud are not evident in the UK. Research governance puts the direct onus for ensuring scientific integrity on the sponsor of the research, who ultimately is the person signing the box on the application form that goes to the research ethics committee; indirect onus falls on the institution to provide research training and a proper research environment. Looking around at trainees doing research in my institution, I am not convinced that many have yet had much formal research training, although we do have an active R&D department that advertises frequent seminars on all sorts of research issues. Research governance requires institutions to have ‘in place systems to detect and address fraud, and other scientific or professional misconduct by their staff’, but what systems?

The cardiologist Peter Wilmshurst, who offers horrifying accounts of research malpractice,7 asks for random external checks of data, analogous to dope testing in sport; and he takes editors to task8 for worrying more about libel than about harm to patients from their published papers. However, as John Garrow points out,9 random sampling of data is not analogous to dope testing, which is simply a matter of taking a blood or urine sample—and even then can be disputed. On editors, I think Wilmshurst tilts at the wrong target: it is not the editors but the English libel laws that are at fault. Without a change in these, no amount of policing of research will prevent unpleasant and expensive libel actions by suitably cynical drug companies.10

Wilmshurst wants external checks, but who will do them? And what will their effect, and their unintended consequences, be? Most of those who criticized Betrayers of the Truth complained that it exaggerated the prevalence of research fraud, a prevalence we still do not know. The Nordic experience is that serious fraud, requiring a hearing by a formal committee, is rare (or, at least, it was in the mid-1990s5). Wilmshurst's experiences would undoubtedly have needed a hearing, but a low prevalence of serious fraud would give random external checks a low signal-to-noise ratio. In anything other than a straightforward clinical study with easily measured outcomes, effective checks might also be difficult. How, unless they suspected it, would the inspectors know that a western blot was fraudulent, or a digital photograph had been manipulated?11

Underlying all this is the issue of trust. At heart, are we to trust medical researchers or to mistrust them? Should we assume that most researchers are truthful but accept that some are knaves and go after them; or believe that most researchers are on the make and lay suspicion on everyone? You could argue that the requirement for ethical review of research already answers this question, but some ethical issues are genuinely difficult. Most issues of research misbehaviour—certainly the grosser ones—are not difficult. No one could argue that it is acceptable to invent patients or steal data.

Wilmshurst writes from his experience, in which institutions refused to act and editors avoided retraction.12 Nonetheless, my preferred solution is that institutions have to act, and editors have to retract. Whistleblowers have to be listened to, and data must be readily available when asked for, whether on whistleblowers', editors' or referees' suspicions. External random checking has too many flaws. Wilmshurst feels that institutions can no longer be trusted to deal with misconduct. If he is saying that we can no longer trust our hospitals and universities then who are we to trust? In that world, research is an activity no longer worth pursuing, and honest editors might as well give up.

The quotation at the head of this editorial is the opening of Onora O'Neill's 2002 Reith Lectures.13 Her introduction continues,

‘It isn't only rulers and governments who prize and need trust. Each of us and every profession and every institution needs trust. We need it because we have to be able to rely on others acting as they say that they will, and because we need others to accept that we will act as we say we will. The sociologist Niklas Luhmann was right that “A complete absence of trust would prevent [one] even getting up in the morning”.’

It is hard on the whistleblowers14 but I agree with Garrow9 that they are the key. When research governance asks for ‘systems in place’, they must be systems to support and investigate suspicion rather than systems that go out looking with suspicion. It will always be difficult. In the criminal courts, witnesses are threatened. Criminals rarely put up their hands and say, ‘You got me bang to rights there, guv’. Some of the more public cases of research misconduct have taken years to sort out, and have left almost everybody with mud on them somewhere, including the investigating agencies. Here are some titles, in chronological order, of articles written about the ‘Baltimore affair’: 1992, ‘A final frenzy for landmark cases?’; 1994, ‘ORI finds Imanishi-Kari guilty of misconduct, proposes 10-year ban’; 1996, ‘Clearing of researcher in “Baltimore affair” boosts demands for reforms’. A review of a book about the case, which Lock reckoned would make a marvellous soap opera,15 began, ‘Few of us, I dare say, had the stamina to follow the Baltimore affair properly’. It ended, ‘If any good has emerged, it is the wider knowledge that science is “full of ambiguous results, unexplained anomalies, imprecise assays... ”.’16

It is not just medicine that has problems with dishonest research. Jan Hendrik Schön was a prolific physicist who published papers in respected journals including Nature.17 His remarkable output was partly his undoing, and Nature retracted all his papers. Yung Park was a materials scientist. Some of his fraudulent activities occurred while he was a visitor at Cambridge University. True to the British model, the academics were slow to respond, Nature accusing the University of ‘acting as if it didn't happen at all’.18 Even the world of fossils has its fakers.19 But it is somehow a double blow to know that medical research is fraudulent, because such fraud more demonstrably harms people. Thus, the media take more interest, which makes medical research fraud seem more common—though it has a larger denominator. In Denmark, which has three subcommittees for investigating scientific dishonesty, most of the cases come from medicine, but in that country medical research is as big a field as all the other disciplines put together.20

There are all sorts of other issues, which many others have written about—why science fraud occurs in the first place, the endless discussions of how to define fraud—but I shall not go into these here. The position I take has inconsistencies, but as with all complex problems, and many simple ones too, there is no perfect solution. Partly my position is one of naivety. Like Richard Feynman, I look for a super-honesty whereby researchers freely admit the flaws in their own work: research should be the ultimate intellectual challenge; cheating at research is like cheating at solitaire. If research is not honest then it is nothing; an athlete who runs faster on steroids has at least run faster. A decade ago, Petr Skrabanek was pessimistic that any remedies could have any effect in a system in which ‘ “positive” results are rewarded, in which wishful thinking has displaced critical inquiry, and in which lip-service is paid to “research” by the authorities who lack the intellectual and moral discipline of rigorous scientific standards.’21 Maybe research is just capitalism in microcosm,22 and we must learn to live by it as the least bad system, knowing that truth generally and eventually will out—though the cost is in patients' wellbeing, if not their lives.

So how should I answer my colleague? For our Trust to sponsor and support research, we must sign up to research governance: there are forms to be completed. If the Department of Health is satisfied that we are a fit organization, that should be good enough for any grant-awarding body. As for our procedure for investigating misconduct, where is the evidence that we need local procedures specifically for research misconduct when we already have procedures for general professional misconduct? An editorial asked, ‘Please, men and women in gowns, do something’.23 Until they do and there is some central support mechanism for alleged research misconduct, it should not be for each and every research establishment to have its fraud investigators: one known case of fraud in 20 years24 seems a poor return, and gives little chance of refining the skills of investigation that will need knowledge, patience, and tact.
