It used to be the case that national security concerns about, and limitations on, research and publications affected mainly nuclear physicists and chemists. Biology, it seemed, was a relatively benign research field that aimed to understand life rather than end it. Not any more. The same people who worry about the spread of nuclear and chemical weapons now realize that biologists can detect and manipulate dangerous organisms, and understand how chemicals affect nerve cells. This realization, coupled with a sufficiently pessimistic imagination, could lead to the conclusion that biological researchers are a potentially dangerous group who need to be controlled.
This may be an exaggeration, but there are indications of such concerns: the USA has placed stricter controls on scientists from particular countries, and there is an increasing requirement to obtain clearance before publishing research with potential security implications. More drastic actions may follow: US Republican Senator Richard Burr has already proposed creating an agency to control scientific information. At the heart of the debate—from which the scientific community is almost absent—is the question of whether scientists can be trusted to control their own activities or whether governments should intervene. Many in positions not only of power, but also of responsibility, feel that they might be delinquent if they do not take action.
Intuitively, it would seem reasonable that some areas of virology, microbiology and neurobiology might fall under strict control, but everything else, we may think, should continue unchallenged. Yet, during the 2005 EMBO/EMBL joint conference on Science and Security, I was shocked to hear that proteomics could become a potential danger to society—the reason given was that it could help to identify targets for nerve gases. The sequencing of the 1918 Spanish flu virus also caused concern, even though the decision to publish the full genome sequence was cleared by the US National Science Advisory Board for Biosecurity. Some still feel that the publication was highly irresponsible, thus creating justification for more restrictions on scientific research. Even the self-imposed moratorium on recombinant DNA technology in the mid-1970s may be in danger of being revisited. What seemed a great example of scientists acting responsibly could be reinterpreted as a clever tactic to prevent draconian measures. Some believe that scientists initially noted the potential dangers of the technology and then—favouring commercial interests and their own competitive urges over societal concerns—gave it clearance. In the light of the ongoing debate on genetically modified organisms, it is unlikely that such a self-declared verdict of safety would be accepted today.
Scientists should never take academic freedom for granted; we should all join the debate on science and security. If we remain absent from the committees that have influence or control over such matters, unexpected concerns about topics we would view as innocent—proteomics, for example—could soon lead to restrictions that would hinder or even halt our research. These could stop the next foreign postdoc in your lab from obtaining a visa, block publication of your next paper, or stop you from purchasing equipment deemed to be classified technology. All of this might be avoided if scientists engage in the discussions on what most will agree is a threat only in the minds of the very imaginative. We need to find the right balance between moving forwards—towards understanding life and providing new therapeutics—and holding back for the sake of security.
It is interesting that, in French, sécurité has two meanings: protection against attack, and safety. It is time we took both concepts more seriously. In the laboratory, safety committees concern themselves primarily with the welfare of workers; some outsiders now complain that, when it comes to reviewing and assessing the potential risks of an experiment or project, these committees' performance generally does not give the impression that self-monitoring is working. This contrasts with ethical committees in clinical research, where all parties accept that anything at the edge of ethical, safety or even administrative standards must be examined by an independent panel before a decision is made on whether to proceed. I suspect that the day will come when we scientists will have to put a similar system in place for our work, with impartial groups balancing the benefits and risks of proposed research. If we can convince society that such a system of self-control is sufficient, then those who want to impose legislative control on research will find it more difficult to push for stricter oversight.
In the meantime, we should all do more to sensitize everyone in our laboratories to the growing concerns about where our research might lead. This can be as simple as an internal audit—a discussion at a laboratory meeting would be a start—to identify which experiments could be interpreted as a risk to public safety. I am sure that few such topics would emerge. But if we first put our own houses in order, we may avoid cruder external controls, which would ultimately hinder biological research.
