Author manuscript; available in PMC: 2025 Aug 29.
Published in final edited form as: Nat Mach Intell. 2024 Nov 29;6(12):1409–1410. doi: 10.1038/s42256-024-00947-y

A plea for caution and guidance about using AI in genomics

Mohammad Hosseini 1, Christopher R Donohue 2
PMCID: PMC12393819  NIHMSID: NIHMS2081287  PMID: 40894118

The incorporation of artificial intelligence (AI) into genetics and genomics research can enable research that would otherwise have been impossible. However, these benefits must be weighed against the potential risks to humans, other sentient beings and the environment. Genetic and genomic advances require much trial and error to succeed; this is ethically fraught when the consequences are unknown and the moral status of created or modified tissues or organisms is unclear, or possibly comparable to that of conscious beings. We argue that it is urgent to expand the ethical discourse on the use of AI in genomics research and to develop appropriate guidance.

Genetics and genomics research has enabled us to answer questions about historical events and continues to add considerable depth to what we know about the world and ourselves. For example, it has determined when and how certain animals and plants were domesticated, how diseases spread and how viruses mutate, and, thanks to CRISPR, has led to remarkable advances in the understanding and treatment of rare genetic conditions. However, these advances also intensify long-standing tensions between biomedical innovation and research ethics norms, especially in light of new AI developments.

A dramatic increase in the pace and scalability of genetic techniques, combined with the potential of AI, may catapult researchers' abilities to new heights. For example, AI advances research by facilitating genome characterization, supporting gene expression and other multi-omics measurements [1], enabling genome interpretation [2], and processing large and complex genomic datasets [3].

But should genetics and genomics research capitalize on AI technology, which can make modifications and interventions more accessible, for any use? Let us take research on Neanderthals, the closest relatives of modern Homo sapiens, as an example. Neanderthal DNA and gene-editing methods could potentially be used to recreate a Neanderthal, or aspects of Neanderthal genetics and physiology [4]. Researchers have already used CRISPR editing techniques to investigate Neanderthal genes of interest and have created brain-like tissues [5]. Future work may go further, enhancing or developing complex behavioural traits (such as intelligence) in ways that are prohibited when using human participants or some animal models. Such efforts may not succeed soon (and the required trial and error could harm participants), but everything we know about genomics so far underscores that if something is possible, it can probably be done more quickly, and with less money and effort, with AI.

Consequently, many researchers are increasingly optimistic about the part that genetic modifications could play in addressing environmental challenges and ensuring human survival [6]. AI could be very useful in such efforts owing to the scale and computational requirements of data analysis in gene editing. Especially when using CRISPR, AI can model genetic modifications, create the synthetic data required for trials, predict the outcomes of gene edits and simulate the expression of edited genes. However, given the risks and unforeseen consequences, our social responsibility towards the communities that will be affected by such research calls for a more cautious and participatory approach.

Thus far, debates about the ethics of using AI in research have mostly focused on sources of training data, attribution of credit and responsibilities, bias, accuracy, research misconduct, and issues of reproducibility and verifiability [7]. We argue that the scope of these discussions should be expanded to enable holistic reflection on the specific considerations that should accompany rapidly advancing fields (such as genetics and genomics) in their use of AI. Our rationale is that AI facilitates new types of research and enhances researchers' abilities, which can increase risks and potential harms if left unguided.

Considering our earlier mention of Neanderthals, let us assume that AI provides the final push to resurrect the species (much as AlphaFold did for the protein-folding problem [7]). We would then have to deliberate complicated moral questions, including: could and should we grant resurrected hominids the special moral and legal rights (such as dignity or freedom) that only humans enjoy? If yes, they should be allowed to roam as they did in their natural habitat and be granted the affordances to flourish. Furthermore, they should be protected under the same norms and regulations that apply to research with human participants, such as the Declaration of Helsinki. If no, we should consider the moral implications and the precedent being set: that resurrected sentient beings with (some) similar dispositions and human-like abilities to suffer were reintroduced merely as research deliverables, research participants or returns on investment.

The same applies to recreating other species, including mammoths, which has raised a range of different ethical issues [8]. AI's ability to increase the scalability and extremity of modifications is also relevant to organoid modification using CRISPR. Ethicists have debated whether organoids might be 'persons' (owing to their potential cerebral and neurological functions) worthy of ethical consideration or legal protection [9]. These are collections of cells and tissues that may or may not be persons, and beings that might never have existed without genomics, CRISPR and AI, pushing us to deliberate our duties to future sentient beings.

Just as reproductive cloning was regulated (for example, through restrictions on federal funds for cloning research enacted through the Human Cloning Prohibition Act of 1997) [10], and calls were made for a moratorium on germline gene editing, AI-assisted genetic modification could become subject to additional guidance. Reasons for developing such guidance include the potential for abusing or harming possibly sentient beings (Neanderthals or organoids), ecological catastrophe (for example, resurrecting invasive plants that could distort the balance of ecosystems) and biosecurity risks (such as the accelerated development of dangerous pathogens and bioweapons).

The moral quandaries discussed here may seem like science fiction, but they call attention to the pressing need for further guidance, as well as for proactive and anticipatory approaches to the ethics of exploratory research and AI use in genomics. On our present course, we will quickly find ourselves in the same situation as in the early 1990s, when numerous ethicists underscored the profound risks around eugenics, genetic privacy and consent, genome modification and genetic determinism in the context of the Human Genome Project, only to be largely ignored. If the history of genetic ethics tells us anything, it is that ethical 'worst-case scenarios' do not disappear but may become reality as technology advances; the long-standing debates around genetic engineering are the illustrative case.

Footnotes

Competing interests

The authors declare no competing interests.

References
