EMBO Reports. 2010 Jun 11;11(7):493–499. doi: 10.1038/embor.2010.84

Conspiracy theories in science

Ted Goertzel 1
PMCID: PMC2897118  PMID: 20539311

Although few conspiracy theories target the natural sciences, they can have severe effects on public health or environmental policies. Ted Goertzel explains how scientists could help to prevent and mitigate potentially harmful conspiracy theories.


Conspiracy theories that target specific research can have serious consequences for public health and environmental policies


Conspiracy theories are easy to propagate and difficult to refute. Fortunately, until a decade or so ago, few serious conspiracy theories haunted the natural sciences. More recently, however, conspiracy theories have begun to gain ground and, in some cases, have struck a chord with a public already mistrustful of science and government. Conspiracy theorists—some of them scientifically trained—have claimed that HIV is not the cause of AIDS, that global warming is a manipulative hoax and that vaccines and genetically modified foods are unsafe. These claims have already had serious consequences: misguided public health policies, resistance to energy conservation and alternative energy, and dropping vaccination rates.

Responding to conspiracy theories and ‘sceptics' draws scientists into arenas where objective information matters less than emotional appeals…

Responding to conspiracy theories and ‘sceptics' draws scientists into arenas where objective information matters less than emotional appeals, unsupported allegations and unverified speculations. Scientists are understandably reluctant to get bogged down in such debates, but they are sometimes unavoidable when scientists need to voice their concerns in the public arena. It is thus both helpful and important to understand the logic of conspiracy arguments and the best ways to respond to them.

‘Conspiracy' is an essentially contested rhetorical concept that people apply to different events depending on their point of view (Gallie, 1964). It is almost always pejorative. The Oxford English Dictionary defines conspiracy quite loosely as “an agreement between two or more persons to do something criminal, illegal or reprehensible”. While the law can precisely define the criminal act in any conspiracy, ‘reprehensible' is in the eye of the beholder. When Hillary Clinton protested that her husband, US President Bill Clinton, was the victim of a “vast right-wing conspiracy”, and US President Lyndon B. Johnson accused the media and liberal activists of a “conspiracy” to oppose his Vietnam War policies, they were intentionally vague as to whether they referred to illegal or merely reprehensible behaviour (Kramer & Gavrieli, 2005). Calling something a conspiracy makes it sound much worse than just saying, “people are ganging up on me.”

Invoking the word conspiracy also implies that something is secret and hidden. Pigden (2006) defines a conspiracy as “a secret plan on the part of a group to influence events in part by covert action”. Conspiracies so defined certainly do take place; they are not necessarily a figment of anyone's imagination. These include the failed conspiracy to assassinate Adolf Hitler, the September 11 attacks and the Watergate conspiracy. However, in history and social science, the term ‘conspiracy theory' usually refers to claims that important events were caused by conspiracies that have heretofore remained undiscovered (Coady, 2006). The claim that the World Trade Center was destroyed by al-Qaeda would not be a conspiracy theory in this sense, but the claim that it was bombed by Israeli agents, or that the American authorities knew about it in advance, would be.

A conspiracy theory gives believers someone tangible to blame for their perceived predicament, instead of blaming it on impersonal or abstract social forces

In the realm of science, the ‘climategate' scandal that has dogged the University of East Anglia's Climatic Research Unit (CRU; Norwich, UK) has seen the word conspiracy thrown about on both sides of the argument. Climate change ‘sceptics' have accused Professor Phil Jones of conspiring with his collaborators to manipulate climate data and the scientific literature, while supporters of the CRU have pointed out that the hacking of the e-mails and the selective, pejorative quoting of their content was a conspiracy to discredit the scientific evidence for climate disruption.

Historians and social scientists are generally sceptical of conspiracy theories because they believe that most conspiracies fail and that historical events can be better understood without recourse to unverifiable speculation (Keeley, 2006). Nevertheless, conspiracy theories can get a firm hold among the public at large and their influence seems to be spreading. To understand this success, it is useful to think of conspiracy theorizing as a ‘meme', a cultural invention that passes from one mind to another and survives, or dies out, through natural selection (Dawkins, 1976). As rhetorical devices, conspiracy theories compete with memes such as ‘fair debate', ‘scientific expertise' and ‘resistance to orthodoxy'.

Conspiracy theories appeal to people who are discontented with the established institutions of their society and especially with elites in that society. They are likely to believe that conditions are worsening for people like themselves and that the authorities do not care about them. A conspiracy theory gives believers someone tangible to blame for their perceived predicament, instead of blaming it on impersonal or abstract social forces. The meme becomes a habit of thought: the more people believe in one conspiracy, the more likely they are to believe in others (Goertzel, 1994; Kramer, 1998).

The logic of the conspiracy meme is to question everything the ‘establishment'—be it government or scientists—says or does, even on the most hypothetical and speculative grounds, and to demand immediate, comprehensive and definitive answers to all questions. A failure to give convincing answers is then used as proof of conspiratorial deception. Meanwhile, conspiracy theorists offer their own alternative theories with the flimsiest of evidence, challenging the authorities to prove them wrong.

Of the 92 conspiracy theories described in a recent handbook (McConnachie & Tudge, 2008), most targeted political, religious, military, diplomatic or economic elites. These ranged from Tutankhamun and the curse of the Pharaoh to the Protocols of the Elders of Zion, from satanic ritual abuse to the alleged scheming of the Council on Foreign Relations, the Trilateral Commission and the British Royal family. Others involved religious cults, alien abductions or terrorist plots. Some are merely amusing, but others have fuelled wars, inquisitions and genocides in which millions of people died.

How can we distinguish between the amusing eccentrics, the honestly misguided, the avaricious litigants and the serious sceptics questioning a premature consensus?

The scientific and technological conspiracies listed in the handbook mostly allege the misuse of science by government, the military and large corporations. These include bizarre claims that the military suppressed technology that could make warships invisible; that automobile and oil companies have hidden technology that would turn water into gasoline; that the military is secretly in cahoots with space aliens; that HIV was created deliberately as part of a plot to kill black or gay people; and that dentists seek to poison Americans by putting fluoride in public water supplies. Others claim that corporate officers and public health officials suppressed evidence that preservatives in vaccines cause autism, and that silicone breast implants cause connective tissue disease (Specter, 2009; Wallace, 2009).

Other conspiracy theories include claims that a major drug company hid reports that its leading anti-inflammatory drug caused heart attacks and strokes (Specter, 2009); environmental scientists have conspired to keep refereed journals from publishing papers by researchers sceptical that global warming is a crisis (Hayward, 2009; Revkin, 2009); physicians or drug companies have conspired to suppress non-mainstream medical treatments, vitamins and health foods; and that big business and the medical establishment have conspired to obstruct the search for a cure for AIDS so they can continue to sell their ineffective drugs and treatments (Nussbaum, 1990).

Many of these theories are clearly absurd, but some have a veneer of possibility. How can we distinguish between the amusing eccentrics, the honestly misguided, the avaricious litigants and the serious sceptics questioning a premature consensus? No private individual has the time or the expertise to examine the original research literature on each topic, so it is important to have some guidelines for deciding which theories are plausible enough to merit serious examination.

One valuable guideline is to look for cascade logic in conspiracy arguments (Sunstein & Vermeule, 2008). This occurs when defenders of one conspiracy theory find it necessary to implicate more and more people whose failure to discover or reveal the conspiracy can only be explained by their alleged complicity. Another guideline is to look for exaggerated claims about the power of the conspirators: claims that are needed to explain how they were able to intimidate so many people and cover their tracks so well. The vaster and more powerful the alleged conspiracy, the less likely it is that it could have remained undiscovered.

For example, the claim that the moon landing in 1969 was a hoax implies the complicity of thousands of American scientists and technicians, as well as of the Soviet astronomers and others around the world who tracked the event. It is highly implausible that such a conspiracy could have held together. By contrast, the theory that a few individuals in Richard Nixon's campaign conspired to break into their opponents' offices in the Watergate building was plausible, proved worth investigating and was, indeed, true.
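A toy calculation makes the arithmetic behind this guideline explicit. The sketch below is purely illustrative and is not drawn from the article: it assumes a round per-person leak rate of 1% per year and shows how quickly the probability of keeping a secret collapses as the number of conspirators grows.

```python
# Toy model: the probability that a conspiracy of N people stays secret,
# assuming each participant independently leaks with probability p in any
# given year. The value p = 0.01 is an illustrative assumption, not data.

def prob_secret(n_people: int, years: int, p_leak: float = 0.01) -> float:
    """Probability that nobody leaks over the whole period."""
    return (1.0 - p_leak) ** (n_people * years)

for n in (5, 50, 5000):  # Watergate-scale group vs. moon-hoax-scale group
    print(f"{n:>5} people, 10 years: P(still secret) = {prob_secret(n, 10):.3g}")

# Approximate output:
#     5 people -> 0.61    a handful of plotters can plausibly stay quiet
#    50 people -> 0.0066  already unlikely
#  5000 people -> 6e-219  effectively impossible; someone talks
```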

Even if a conspiracy theory is implausible, it can be used as a rhetorical device to appeal to the emotions of a significant public

Even if a conspiracy theory is implausible, it can be used as a rhetorical device to appeal to the emotions of a significant public. The conspiracy meme flourishes best in politics, religion and journalism, in which practitioners can succeed by attracting followers from the general public. These practitioners might actually believe the conspiracy theory, or they might simply use it to win public support.

As long as scientists keep away from politics and controversial social issues, they are largely immune to conspiracy theories because success in scientific careers comes from winning grant applications and publishing significant findings in peer-reviewed journals. Attacking other scientists as conspirators would not be helpful for the careers of most scientists, no matter how frustrated they might be with referees, editors, colleagues or administrators who turn down their manuscripts or grant proposals, or deny them tenure. But conspiracy theories can be useful for scientists who are so far out of the mainstream in their field that they seek to appeal to alternative funding sources or publication outlets. They also might occasionally surface when a scientist's mental health deteriorates to the point that he or she loses touch with reality.

Conspiracy theories are dangerous when the meme is used to discredit scientific evidence in a public forum or in a legal proceeding. The conspiracy meme is part of the standard repertoire of memes used by lawyers to discredit evidence offered by ‘experts' of all kinds. Lawyers focus on the motivations of the experts, on who hired them, what they are being paid for their testimony and so on. They also seek out an ‘expert' who will testify on their side, implying that expertise is for sale to the highest bidder and that opinion is divided on the issue in question.

Conspiracy theories about vaccines were given a tremendous boost, especially in the UK, when The Lancet published a study reporting a hypothesized link between the measles–mumps–rubella vaccine and autism (Burgess et al, 2006). The media highlighted the story, despite the study's very small sample size and speculative causal inferences, and the public reaction was much larger than the medical and public health authorities anticipated. The reasons for the public reaction included resentment of pressure on parents, distrust of medical authorities and the potentially catastrophic nature of the possible risk to a vulnerable population. The result was a decline in the proportion of parents having their children vaccinated and a subsequent increase in disease. While the authorities responded by citing findings from large epidemiological studies, much of the press coverage highlighted anecdotal accounts and human-interest stories. The recovery of public confidence in vaccination might have been due more to revelations of a conflict of interest on the part of the physician who published the original article—which was eventually retracted by the journal—than to the overwhelming evidence for the lack of a relationship between vaccination and autism rates.

Conspiracy theorists typically overlook lapses by their supporters but are quick to pounce on any flaw on the part of their opponents. When a leading Danish vaccine researcher was accused of stealing funds from his university, the vaccine conspiracy theorists pounced. Robert F. Kennedy, Jr, son of a former US Attorney General, used the occasion to denounce the “vaccine cover-up” on the influential blog The Huffington Post (Kennedy, 2010). He explained away the research findings on vaccination and autism on the grounds that there had been a change in Danish law and the opening of a new autism clinic. He criticized vaccine researchers for receiving money from the US Centers for Disease Control and Prevention (CDC) for their studies, and for “being in cahoots with CDC officials intent on fraudulently cherry-picking facts to prove vaccine safety”. Of course, if the CDC had not funded this research, largely in response to popular concerns, vaccine opponents would have denounced them for not doing so.

Public alarm about genetically modified (GM) foods was heightened when a scientist, Árpád Pusztai, claimed in a television interview that rats had suffered intestinal damage from GM potatoes. His finding was preliminary—there were six rats in each group, fed for only 10 days, and the effects reported were minor—but the study received tremendous publicity because it fuelled fears about the safety of GM crops, fears that environmentalist and anti-capitalist social movements had long cultivated. As the controversy progressed, questions were raised about the integrity of the study, leading to Pusztai's departure from his research institute. Nevertheless, anti-GM activists denounced criticisms of the research as a conspiracy and circulated among scientists a petition supporting Pusztai's rights. The Lancet eventually published the study, which until then had not appeared in any refereed journal. The editors sent it to six reviewers, only one of whom opposed publication. But one of the reviewers who favoured publication said that he “deemed the study flawed but favoured publication to avoid suspicions of a conspiracy against Pusztai and to give colleagues a chance to see the data for themselves” (Enserink, 1999).
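The statistical weakness of so small an experiment is easy to quantify. The following sketch is an illustration rather than a re-analysis of Pusztai's data: the effect size and significance threshold are assumptions of the example. It shows that a two-group comparison with six animals per group would miss even a large effect most of the time.

```python
# Illustrative power calculation for a two-group comparison with six
# subjects per group. The effect size (Cohen's d = 1.0, conventionally
# "large") and alpha = 0.05 are assumptions of this example.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

power = analysis.power(effect_size=1.0, nobs1=6, alpha=0.05, ratio=1.0)
print(f"Power with n = 6 per group: {power:.2f}")  # roughly 0.3

# Sample size per group needed to reach the conventional 80% power:
n_needed = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n_needed:.1f}")  # roughly 17
```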

By releasing his findings on television, Pusztai received extraordinary attention for a study that might otherwise never have been accepted by a scientific journal. At least, that was the opinion of the editor of a competing journal who commented, “When was the last time [The Lancet] published a rat study that was uninterpretable? This is really lowering the bar” (Enserink, 1999). Releasing controversial findings on the internet or through press releases is justified as a way of making important discoveries available quickly, but it also serves to circumvent the normal scientific review process. Sometimes these ‘findings', such as the claim that the decline in crime in the USA in the 1990s was due to the legalization of abortion in the 1970s, become part of the conventional wisdom before other scientists have a chance to debunk them (Zimring, 2006).

Dissenters from mainstream science often invoke a meme that there are two sides to every question and each side is entitled to equal time to present its case. George W. Bush famously suggested that students be taught both evolution and “intelligent design” theories so that they could judge which had the most convincing argument (Baker & Slevin, 2005). Similarly, climate change ‘sceptics' demand equal air time for their side of the argument and, at least in the beginning, the media were more than willing to grant it in the interest of ‘balance'. If these dissenters or ‘revisionists' succeed in getting an opportunity to present their case, they hammer away at any gaps or contradictions in the evidence presented by mainstream researchers, using rhetoric that questions their motivations, while avoiding any hint of weakness or bias in their own case.

Conspiracy theories are dangerous when the meme is used to discredit scientific evidence in a public forum or in a legal proceeding

This advocacy meme is used widely in law courts and political debates and it can work well when the question at hand is one of taste or morality. It does not work well for scientists because there are objective right and wrong answers to most scientific questions. US Admiral William Leahy might have won a classroom debate in 1945 with his famous statement that “the [atomic] bomb will never go off, and I speak as an expert on explosives”, but scientists would find it hard to win a debate with the claim, “GM crops are safe to eat, and I speak as an expert on genes.” Nevertheless, in deciding to pursue the atomic bomb project, US President Harry Truman relied on scientific evidence, another powerful meme in Western societies. Decision-makers and the general public are most likely to be persuaded by this meme when scientists are in agreement and when their advice and policy prescriptions have a good track record.

Social scientists have forfeited much of their potential influence because they are too often perceived as advocates for a cause rather than as objective researchers. Their ability to predict policy outcomes is very limited, yet social scientists sometimes fall into the trap of claiming to know more than they really do. Econometricians have published conflicting analyses of the relationship between capital punishment and homicide rates for decades without making any real progress, yet they continue to advocate for or against the death penalty (Goertzel & Goertzel, 2008). When President Clinton proposed welfare reform in the USA, social scientists specializing in the topic almost universally predicted that a disastrous increase in poverty and hunger would result. In some cases they defended their predictions with elaborate statistical models, despite the fact that these models had no demonstrated track record for predicting trends in poverty (Goertzel, 1998). President Clinton deferred to politicians and conservative activists who predicted that poverty and dependency would decline, as, in fact, they did.

The conflict between the debating meme and the scientific expertise meme was pronounced in the dispute between Nature editor John Maddox and biologist Peter Duesberg, who opposes the theory that HIV causes AIDS. Relying on the norms of fairness in debate, Duesberg (1995) sought the right to reply to scientific papers defending mainstream views. At a certain point in the debate Maddox refused to continue to give him the right of reply, arguing that Duesberg had “forfeited the right to expect answers by his rhetorical technique. Questions left unanswered for more than about ten minutes he takes as further proof that HIV is not the cause of AIDS. Evidence that contradicts his alternative drug hypothesis is on the other hand brushed aside.” Maddox argued that Duesberg was not asking legitimate scientific questions, but making demands and implying or saying: “Unless you can answer this, and right now, your belief that HIV causes AIDS is wrong” (Maddox, 1993).

Conspiracy theorists typically overlook lapses by their supporters but are quick to pounce on any flaw on the part of their opponents

Maddox observed that “Duesberg will not be alone in protesting that this is merely a recipe for suppressing challenges to received wisdom. So it can be. But Nature will not so use it. Instead, what Duesberg continues to say about the causation of AIDS will be reported in the general interest. When he offers a text for publication that can be authenticated, it will if possible be published.” As an editor of a scientific journal, Maddox was justified in saying that he would publish papers that offered new findings, not ones that just picked at unanswered questions in other people's work. But he was realistic in realizing that his refusal to publish additional comments by Duesberg would be portrayed as censorship by believers in the AIDS conspiracy theory.

Duesberg and other dissenters also rely on another well-established rhetorical meme to advance their cause, that of the courageous independent scientist resisting orthodoxy. This meme is frequently introduced with the example of Galileo's defence of the heliocentric model of the solar system against the orthodoxy of the Catholic Church. And there are other cases of dissenting scientists who have been proven right. Thomas Gold (1989) reports confronting the “herd mentality” of science in advancing his theories of the mechanisms of the inner ear and of the nature of pulsars as rotating neutron stars, both of which later came to be accepted. But being a dissenter from orthodoxy is not difficult; the hard part is actually having a better theory. Publishing dissenting theories is important when they are backed by plausible evidence, but this does not mean giving critics ‘equal time' to dissent from every finding by a mainstream scientist.

In his response to Duesberg, Maddox refers to the philosophical argument, associated with Karl Popper (1902–1994), that science progresses through falsification of hypotheses. He says, “True, good theories (pace Popper) are falsifiable theories, and a single falsification will bring a good theory crashing down.” But he goes on in the next sentence to rely implicitly on a different philosophy of science, often associated with the work of Imre Lakatos (1922–1974), which says science normally progresses by correcting and adding to ongoing research programmes, not by abandoning them every time a hypothesis fails. Maddox says, “unanswered questions are not falsifications; rather, they should be the stimulants of further research.”

Scientists do change their ideas in response to new evidence, perhaps more often than people in most walks of life. Linus Pauling abandoned his triple-helix model of DNA as soon as he saw the evidence for the double-helix model. But he never abandoned his advocacy for vitamin C as a treatment for the common cold and cancer, no matter how many studies failed to show a significant difference between experimental and control groups. He found flaws in each study's research design and insisted that the results would be different if only the study were done differently. He never did any empirical research on vitamin C, research that would have risked failing to confirm his hypotheses, but limited himself to debunking published scientific studies. Unfortunately, he is probably better known by the general public for this work than for his undisputed and fundamental contributions to chemistry. Pauling's scientific prestige lent credibility to those who sought to discredit scientific medicine as a conspiracy of doctors and drug companies (Goertzel & Goertzel, 1995). Scientific expertise is usually quite specialized, and scientists who advocate for political causes only tangentially related to their area of specialization have no special claim on the truth.

…allowing the conspiracy theorists to dominate the public debate can have tragic consequences

Conspiracists often seem to believe that they can prove a scientific theory wrong by finding a flaw or gap in its evidence. Then they claim conspiracy when scientists endeavour to fix the flaw or fill the gap, or even persist in their work on the assumption that a solution will be found. In fact, the occasions when an entire scientific theory is overthrown by a negative finding are few and far between. This is especially true in fields that depend on statistical modelling of complex phenomena, in which there are often several models that are roughly equally good (or bad), and where the choice of a data set and decisions about data set filtering are often critical. The more important test of a research programme is whether progress is being made over a period of time, and whether better progress could be made with an alternative approach. Progress can be measured by the accumulation of a solid, verifiable body of knowledge with a very high probability of being correct (Franklin, 2009).
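The point about roughly equivalent models can be illustrated with a small simulation. The sketch below is built on synthetic data rather than any real data set: it fits two structurally different models to the same noisy series, and both describe the observations about equally well, so goodness-of-fit alone cannot decide between them.

```python
# Sketch: two structurally different models can fit the same noisy data
# almost equally well in-sample. Synthetic data with a fixed seed; the
# curve, noise level and candidate models are assumptions of the example.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 40)
y = np.log(x) + rng.normal(scale=0.25, size=x.size)  # "truth" is logarithmic

def rmse(predicted: np.ndarray) -> float:
    """Root-mean-square error of a fit against the observed values."""
    return float(np.sqrt(np.mean((y - predicted) ** 2)))

linear_fit = np.polyval(np.polyfit(x, y, 1), x)  # straight-line model
quad_fit = np.polyval(np.polyfit(x, y, 2), x)    # quadratic model

print(f"linear model RMSE:    {rmse(linear_fit):.3f}")
print(f"quadratic model RMSE: {rmse(quad_fit):.3f}")
# The in-sample errors are close, yet the two models extrapolate very
# differently beyond x = 10: the choice between them cannot be settled
# by goodness-of-fit alone.
```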

The conspiracy meme has been especially prominent in the debate about global warming. When the Intergovernmental Panel on Climate Change published its report in 1996, an eminent retired physicist, Frederick Seitz (1996), accused them of a “major deception on global warming” on the op-ed pages of The Wall Street Journal. Seitz did not try to make a scientific argument that the report's conclusions were wrong. Instead, he attacked the committee's procedure in editing its document, accusing the editors of violating their own rules by rewording and rearranging parts of the text to obscure the views of sceptical scientists. This seemingly obscure point about the editing of a United Nations technical document proved remarkably effective in providing a rallying point for opponents of the report's conclusions.

A careful review of the incident concluded that the editors did not violate any of their own rules and that the editorial changes were reasonable (Lahsen, 1999). Editors, after all, do edit texts. The sceptical arguments were not deleted from the report; they were repositioned and rephrased, perhaps given less emphasis than Seitz thought they deserved. But the conspiracy meme was successful in shifting much of the public debate from the substance of the issue to criticism of personalities, procedures and motivations. The climate scientists felt attacked and apparently began to think of themselves more as activists under siege than as neutral scientists. In 2009, computer hackers released private e-mails seemingly showing that some climate scientists had pressured editors not to publish papers by sceptics and that they had looked for ways to present their data in such a way as to reinforce their advocacy views (Revkin, 2009; Hayward, 2009; Broder, 2010).

Climate science is heavily dependent on complex statistical models based on limited data, so it is not surprising that models based on different assumptions give differing results (Schmidt & Ammann, 2005). In presenting their data, some scientists were apparently too quick to smooth trends into a ‘hockey stick' model that fitted with their advocacy concerns. Several different groups of well-qualified specialists have now gone over the data carefully, and the result is a less linear ‘hockey stick', with a rise in temperature during a ‘medieval warm period' and a drop during a ‘little ice age'. But the sharp increase in warming in the twentieth century, which is the main point of the analysis, is still there.
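How much the apparent shape of such a series depends on analytical choices can be shown with a short simulation. The sketch below uses an entirely synthetic series, not real climate records; the bump, dip and late rise are assumptions of the example. Varying only the width of a moving-average window changes how prominent the bump and the dip appear, while a large recent rise survives any reasonable choice.

```python
# Sketch of how the choice of smoothing window changes the apparent shape
# of a noisy series. Entirely synthetic "anomaly" data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1000, 2001)
signal = (0.3 * np.exp(-((years - 1100) / 80.0) ** 2)     # early bump
          - 0.3 * np.exp(-((years - 1650) / 100.0) ** 2)  # later dip
          + 0.8 / (1.0 + np.exp(-(years - 1950) / 20.0))) # recent sharp rise
series = signal + rng.normal(scale=0.4, size=years.size)

def smooth(y: np.ndarray, window: int) -> np.ndarray:
    """Centred moving average; trims (window - 1) points at the edges."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

for window in (11, 51, 151):
    s = smooth(series, window)
    print(f"window {window:>3} yr: range of smoothed series = "
          f"{s.max() - s.min():.2f}")
# Heavier smoothing erases the bump and the dip long before it erases the
# large recent rise; how 'wiggly' the curve looks is partly a choice.
```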

Opposition rooted in religious or ideological concerns is acceptable as part of the democratic political process, but it need not prevent scientists from reaching a consensus…

This is not the place to review the substance of the issue, although there seems to be more consensus than the political rhetoric would lead one to assume. One of the more responsible critics concedes that “climate change is a genuine phenomenon, and there is a non-trivial risk of major consequences in the future” (Hayward, 2009). But there is no consensus on how high the risk is, or how soon it is likely to materialize. The less responsible critics simply dismiss the issue as a hoax and focus exclusively on the weaknesses and peccadilloes of the other side. When the climate scientists gave the conspiracy theorists an opening by letting their advocacy colour their science, the legitimacy of their enterprise was compromised and, ironically, the political movement itself was weakened. This is especially unfortunate when the underlying science is fundamentally correct.

Faced with assaults on their professional credibility, scientists might be tempted to retreat from the world of public policy. But allowing the conspiracy theorists to dominate the public debate can have tragic consequences. Fear of science and belief in conspiracies have led British parents to expose their children to life-threatening diseases, the South African health department to reject antiretroviral treatment for AIDS, and the Zambian government to refuse GM food from the USA in the midst of a famine.

Fear of science is not new. Benjamin Franklin was afraid to inoculate his family against smallpox and regretted it deeply when a son died of the disease in 1736. Advocacy groups sometimes find it easier to arouse fears of science than to advocate for other goals that might actually be more fundamental to their concerns. For example, the anti-GM movement in Europe was mobilized largely by anti-capitalist, anti-corporate and anti-American activists who found it more effective than attacking corporate capitalism directly (Purdue, 2000; Schurman, 2004). These ideologies have much less support in North America, and efforts there to organize against GM food were much weaker. North Americans have suffered no significant ill effects from the integration of these foods into their diet, a fact that Greenpeace and other advocacy groups studiously ignore. One suspects that if GM seeds had been invented by a socialist government, these activists would have applauded them.

…scientists need to be careful about releasing findings on controversial issues, making sure they have been thoroughly reviewed and that the data sets are available for others to analyse

Decision-makers and the general public are best served when scientists specializing in an issue can reach a reasonable degree of consensus, making clear the limits to their knowledge. If scientists cannot do this, surely it is too much to expect politicians or journalists to do it. But efforts to define a consensus are vulnerable to attacks by conspiracy theorists that portray them as mechanisms for suppressing dissent and debate. There are always dissenters, and arguing with them can be time-wasting and frustrating. In 1870, Alfred Russel Wallace allowed himself to be drawn into an extended conflict with flat earth theorist John Hampden, editor of the Truth-Seeker's Oracle and Scriptural Science Review. Their dispute involved measuring the curvature of the water on the Old Bedford Canal in England. There was a public wager, which Wallace won, followed by a lawsuit when Hampden refused to pay, a threat against Wallace's life and a prison term for Hampden. Hampden and his followers were never convinced, and the belief that the round earth is a conspiracy persists to this day (Garwood, 2008; O'Neill, 2008).

Scientists will never reach a consensus with the ‘flat-earthers' or with those who believe the earth was created in 4004 BC. Nor do they need to; all that is required is a clearly specified degree of consensus among scientists who base their conclusions on empirical data. Efforts to reach consensus on important questions have been discouraged by the influence of philosophers of science who emphasize conflicting research programmes, paradigm shifts and scientific revolutions (Franklin, 2009; Stove, 1982). While these events do occur in the history of science, they are exceptional. Most sciences, most of the time, progress with an orderly, gradual accumulation of knowledge that is recognized and accepted by specialists in the field. Opposition rooted in religious or ideological concerns is acceptable as part of the democratic political process, but it need not prevent scientists from reaching a consensus when one is justified.

The peer review process in scientific journals plays a central role in determining which research findings deserve to be incorporated into the scientific consensus on an issue. As such, it is a target for conspiracy theorists. Peer reviewers are usually anonymous, which, to the conspiracy-minded, suggests that they have something to hide. Reviewers are not in a good position to detect actual fraud; they cannot redo the experiments or the data analysis. And they may reject papers that go against the conventional wisdom or political consensus in their field (Franklin, 2009). No adequate alternative to peer review has been proposed, but initiatives to make the review process more transparent might help, including making reviewers' comments and the original data sets available on the internet.

The credibility of peer review has been undermined in the recent dispute over global climate change because the reviewers are drawn from a fairly small pool of specialists, who are thought to have a political agenda. The appointment of review panels of distinguished scientists to review the body of research in the field is an excellent step for rebuilding credibility (Broder, 2010). The review panels must have full access to all the data sets and the time and expertise to conduct their own analyses if necessary, something which cannot normally be expected of volunteer reviewers for a journal. It is important that they recognize the limitations of extant scientific knowledge and give qualified specialists an opportunity to present alternative views, so long as these are based on the scientific analysis of appropriate data and not just polemical criticism. No matter how well they do their work, however, these panels are likely to be attacked by conspiracy theorists.

Scientists are not trained in public relations or issue advocacy, and there is no reason to expect them to be especially good at it. While a few scientists are gifted writers of popular books, science journalists are often better at communicating scientific findings to the public than are the researchers themselves. It may be tempting to seek exposure for new findings in the mass media, but the public is quickly disillusioned when today's newest finding is refuted by tomorrow's press release. In today's political climate, scientists need to be careful about releasing findings on controversial issues, making sure they have been thoroughly reviewed and that the data sets are available for others to analyse. Political decisions will inevitably reflect economic interests and emotional concerns that conflict with what scientists believe is best. But scientists can be more effective if they avoid falling into the trap of debating science with polemicists and clearly separate their scientific work from their political advocacy as citizens.


Footnotes

The author declares that he has no conflict of interest.

References

Baker P, Slevin P (2005) Bush remarks on ‘intelligent design' theory fuel debate. The Washington Post 3 Aug
Broder J (2010) Scientists taking steps to defend work on climate. The New York Times 2 Mar
Burgess D, Burgess M, Leask J (2006) The MMR vaccination and autism controversy in United Kingdom 1998–2005: inevitable community outrage or a failure of risk communication? Vaccine 24: 3921–3928
Coady D (2006) Conspiracy Theories: the Philosophical Debate. Farnham, UK: Ashgate
Dawkins R (1976) The Selfish Gene. Oxford, UK: Oxford University Press
Duesberg P (1995) Infectious AIDS: Have We Been Misled? Berkeley, CA, USA: North Atlantic
Enserink M (1999) The Lancet scolded over Pusztai paper. Science 286: 656
Franklin J (2009) What Science Knows and How it Knows it. New York, NY, USA: Encounter
Gallie WB (1964) Essentially contested concepts. In Philosophy and the Historical Understanding, Gallie WB (ed), pp 157–191. London, UK: Chatto & Windus
Garwood C (2008) Flat Earth: History of an Infamous Idea. New York, NY, USA: Thomas Dunne
Goertzel T (1994) Belief in conspiracy theories. Polit Psychol 15: 731–742
Goertzel T (1998) Why welfare research fails. http://crab.rutgers.edu/%7Egoertzel/fail2.html
Goertzel T, Goertzel B (1995) Linus Pauling: a Life in Science and Medicine. New York, NY, USA: Basic Books
Goertzel T, Goertzel B (2008) Capital punishment and homicide rates: sociological realities and econometric distortions. Crit Sociol 34: 239–254
Gold T (1989) The inertia of scientific thought. Speculations Sci Technol 12: 245–253
Hayward S (2009) Scientists behaving badly. The Weekly Standard 14 Dec
Keeley B (2006) Nobody expects the Spanish Inquisition! More thoughts on conspiracy theory. In Conspiracy Theories: the Philosophical Debate, Coady D (ed), pp 116–122. Farnham, UK: Ashgate
Kennedy RF (2010) Central figure in CDC vaccine cover-up absconds with $2m. The Huffington Post [online] 11 Mar
Kramer R (1998) Paranoid cognition in social systems. Pers Soc Psychol Rev 2: 251–275
Kramer R, Gavrieli D (2005) The perception of conspiracy: leader paranoia as adaptive cognition. In The Psychology of Leadership, Messick D, Kramer R (eds), pp 241–274. Mahwah, NJ, USA: Lawrence Erlbaum Associates
Lahsen M (1999) The detection and attribution of conspiracies: the controversy over chapter 8. In Paranoia Within Reason: a Casebook on Conspiracy as Explanation, Marcus G (ed), pp 111–136. Chicago, IL, USA: University of Chicago Press
Maddox J (1993) Has Duesberg a right of reply? Nature 363: 109
McConnachie J, Tudge R (2008) The Rough Guide to Conspiracy Theories. London, UK: Penguin
Nussbaum B (1990) Good Intentions: How Big Business and the Medical Establishment are Corrupting the Fight Against AIDS. New York, NY, USA: Atlantic Monthly
O'Neill B (2008) Do they really think the earth is flat? BBC News Magazine [online] 4 Aug
Pigden C (2006) Popper revisited, or what is wrong with conspiracy theories? In Conspiracy Theories: the Philosophical Debate, Coady D (ed), pp 17–46. Farnham, UK: Ashgate
Purdue D (2000) Anti-GenetiX: the Emergence of the Anti-GM Movement. Farnham, UK: Ashgate
Revkin A (2009) Hacked e-mail data prompts calls for changes in climate research. The New York Times 27 Nov
Schmidt G, Ammann C (2005) Dummies guide to the latest ‘hockey stick' controversy. RealClimate [online] 18 Feb
Schurman R (2004) Fighting ‘Frankenfoods': industry opportunity structures and the efficacy of the anti-biotech movement in Western Europe. Soc Probl 51: 243–268
Seitz F (1996) Major deception on global warming. The Wall Street Journal 12 Jun
Specter M (2009) Denialism. New York, NY, USA: Penguin
Stove D (1982) Popper and After: Four Modern Irrationalists. Oxford, UK: Oxford University Press
Sunstein C, Vermeule A (2008) Conspiracy theories. Social Science Research Network [online] 15 Jan
Wallace A (2009) An epidemic of fear. Wired Nov: 128–136
Zimring F (2006) The Great American Crime Decline. Oxford, UK: Oxford University Press
