To the Editor –
Dual-Use Research of Concern (DURC) can be defined as research, mainly in the life sciences, that has the potential to be misapplied for harmful purposes1. Key examples are the synthesis of mousepox2, the synthesis of poliovirus3, the generation of the 1918 influenza virus4, gain-of-function studies with H5N1 in ferrets5, and the synthesis of horsepox, the viral cousin of smallpox6. These seminal events involved the physical synthesis of a biological agent. However, the time may have come to also consider the dual-use risk of developing toxic agents in silico. We recently reported the alarming results of a computational experiment performed for a biennial arms control conference, in which we used a generative artificial intelligence (AI) approach previously developed for drug discovery and found that it could easily design a range of nerve agents, including VX7,8. The experiment demonstrated the speed and ease with which software, based on open-source tools and datasets from the public domain, could be put to harmful use. Our experiment was subsequently covered widely in the media, reaching a network of scientists, experts, and laypeople alike8, and its implications were recognized at the highest levels of governments within a matter of days. The level of interest was likely amplified by the unfolding war in Ukraine, with Russia’s invasion and the threat of biochemical weapons use.
While we are a small team, the perspectives we provide are nationally diverse, span the private, academic, and government sectors, and draw on expertise from the natural and social sciences as well as more technical fields such as computing and drug discovery. We believe our experience holds several important lessons that we wish to communicate to the scientific, ethics, and security communities. First, the experiment is a clear and powerful example of a concrete dual-use risk arising from converging technologies, and it should be harnessed to raise awareness of the security dimension of life science research. Second, our experience as a whole, from reviewer and editorial feedback on the original publication through to the many groups we have since presented to or been interviewed by, has taught us the importance of raising awareness in a responsible, non-alarmist way. Third, we need to consider what these dual-use findings mean for responsible science in drug discovery, and what action the community should be taking.
Responses to our article7 have run the gamut. Some academics and government employees requested the compound structures (this was denied); some suggested we should only use the technology for good (yes). Others asked whether the software could help identify treatments for diseases of interest to them (yes, potentially). Some felt our thought experiment was obvious, while several experts on chemical weapons acknowledged that they had not considered it and saw novelty in it. Many were concerned about the security of the data generated. There have also been questions about why we published and whether the details of the experiment should have been published at all, in line with responses to prior biological dual-use examples. On this point, we believe the new example highlights an important message: dual-use risk in the life sciences goes beyond the synthesis of biological agents. For governments, our thought experiment highlights the challenge of how and when to limit access to generative and machine learning software, including through export controls. For the drug design community, it will now be necessary to agree on ways to share data and models securely.
Our thought experiment has already become a ‘teachable moment’ for dual use, a positive unintended consequence of our study. It can be drawn on as a test case for considering the risks of research involving converging technologies, in contrast to prior dual-use examples that focus on physical biological agents. It can also be used to provide dual-use risk training, framed around nerve agents and chemical weapons, for those applying AI in drug discovery. Dual-use concerns in AI are already an urgent topic on the agenda for policymakers, but our results point to the need for further action in developing regulation. Our preemptive publication may lead to increased diligence around AI technologies, datasets, models, and related software for designing new molecules, and to subsequent consideration of their ethical and societal consequences9. The dual-use potential of AI is of concern to all scientists, not just those in the field of drug discovery. We hope our thought experiment puts dual-use risk on the radar of a wider audience without raising undue alarm, and that it stimulates the search for potential solutions8.
Acknowledgments
Cédric Invernizzi contributed to this article in his personal capacity. The views expressed in this article are those of the authors only and do not necessarily represent the position or opinion of Spiez Laboratory or the Swiss Government.
Funding
We kindly acknowledge NIH funding from R44GM122196-02A1 from NIGMS and 1R43ES031038-01 and 1R43ES033855-01 from NIEHS for our machine learning software development and applications. Research reported in this publication was supported by the National Institute of Environmental Health Sciences of the National Institutes of Health under award numbers R43ES031038 and 1R43ES033855-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Competing interests
F.U. and S.E. work for Collaborations Pharmaceuticals, Inc. F.L. and C.I. have no conflicts of interest.
References
1. Rath J, Ischi M & Perkins D. Science and Engineering Ethics 20, 769–790 (2014).
2. Jackson RJ et al. J Virol 75, 1205–1210 (2001). doi:10.1128/JVI.75.3.1205-1210.2001
3. Cello J, Paul AV & Wimmer E. Science 297, 1016–1018 (2002). doi:10.1126/science.1072266
4. Tumpey TM et al. Science 310, 77–80 (2005). doi:10.1126/science.1119392
5. Herfst S et al. Science 336, 1534–1541 (2012). doi:10.1126/science.1213362
6. Noyce RS, Lederman S & Evans DH. PLoS One 13, e0188453 (2018). doi:10.1371/journal.pone.0188453
7. Urbina F, Lentzos F, Invernizzi C & Ekins S. Nature Machine Intelligence 4, 189–191 (2022). doi:10.1038/s42256-022-00465-9
8. Nature Machine Intelligence 4, 313 (2022). doi:10.1038/s42256-022-00484-6
9. Shankar S & Zare RN. Nature Machine Intelligence 4, 314–315 (2022). doi:10.1038/s42256-022-00481-9
