Editorial
Journal of Bioethical Inquiry. 2022 Jun 24;19(3):363–370. doi: 10.1007/s11673-022-10197-5

Ethical, Legal and Social Implications of Emerging Technology (ELSIET) Symposium

Evie Kendal
PMCID: PMC9243845  PMID: 35749026

Establishing ethical guidelines for the development and release of emerging technologies involves many practical challenges. Traditional methods of evaluating relevant ethical dimensions, such as Beauchamp and Childress's (2001) principlist framework, are often not fit for purpose: after all, how can one give autonomous, informed consent to the use of novel technologies whose effects are unknown? How can cost-benefit analyses be conducted in cases where there is a high degree of scientific uncertainty about the severity and likelihood of different risks, and potential benefits have not yet been demonstrated? Nevertheless, it is necessary to promote consideration of the ethical, legal, and social implications of emerging technologies to prevent their release into what some commentators label a moral, policy, and/or legal vacuum (Moor 2005; Edwards 1991). Consequently, various methods for approaching the ethics of emerging technologies have arisen over the last few decades, some of the more common of which are summarized below.

Precautionary Approaches and the Precautionary Principle

Moor (2005) claims that the rapid emergence of new technologies "should give us a sense of urgency in thinking about the ethical (including social) implications" of these technologies (111), noting that it is when technological developments have significant social impact that "technological revolution occurs" (112). He notes, however, that these technological revolutions "do not arrive fully mature," and their unpredictability yields many ethical concerns (112). For this reason, Wolff (2014) advocates a precautionary approach to regulating emerging technologies with unknown risks, pointing to historical cases in which excitement over the benefits of new technologies was far outweighed by harms that materialized later: his main examples are asbestos, used to fireproof buildings, which later cost lives and vast sums to remove, and chlorofluorocarbons, used in refrigerants, which caused significant damage to the ozone layer (S27). He claims a precautionary approach to any new technology would always ask four questions: 1) is the technology known to have "intolerable risks"; 2) does it yield substantial benefits; 3) do these benefits "solve important problems"; and 4) could these problems be "solved in some other, less risky way" (S28)? According to this approach, unless the answers to questions 1 and 4 are no, and the answers to 2 and 3 are yes, technological development should not be permitted to proceed.
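Wolff's four questions amount to a simple decision rule. The sketch below renders it as a boolean screen; this is a minimal illustration assuming clear yes/no answers to all four questions, and the function and parameter names are ours, not Wolff's.

```python
# A minimal sketch of Wolff's (2014) four-question precautionary screen.
# Function and parameter names are illustrative, not drawn from the source.

def may_proceed(
    intolerable_risks: bool,          # Q1: known to have "intolerable risks"?
    substantial_benefits: bool,       # Q2: does it yield substantial benefits?
    solves_important_problems: bool,  # Q3: do the benefits "solve important problems"?
    less_risky_alternative: bool,     # Q4: could those problems be solved in a less risky way?
) -> bool:
    """Development may proceed only if Q1 and Q4 are 'no' and Q2 and Q3 are 'yes'."""
    return (
        not intolerable_risks
        and substantial_benefits
        and solves_important_problems
        and not less_risky_alternative
    )

# Example: substantial benefits solving important problems, but a safer
# alternative exists, so under Wolff's rule development should not proceed.
assert may_proceed(False, True, True, True) is False
```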

The Precautionary Principle (PP) formalizes a precautionary approach to emerging technologies, particularly those involving potential harms to human health and the environment (Hansson 2020). Bouchaut and Asveld (2021) note the PP originated in German domestic law in the 1970s before being adopted as the dominant approach to regulating new biotechnologies throughout Europe from the 1990s onward. One of the main regulatory areas where the PP is discussed is European legislation surrounding genetically modified foods, with many arguing this precedent will likely impact the treatment of more novel gene-editing techniques, such as CRISPR-Cas9 (Bouchaut and Asveld 2021). At its core, the PP translates any potential for harm in the face of scientific uncertainty into a positive duty for stakeholders to act to prevent or mitigate this harm (Guida 2021). Incorporating risk assessment, management, and communication, the PP for Hansson (2020) represents a "pattern of thought, namely that protective measures against a potential danger can be justified even if it is not known for sure that the danger exists" (250). It is for this reason that use of the PP is often criticized for unreasonably blocking technological developments, including those that could yield significant health benefits for the population (Hester et al. 2015). However, in a study of nine jurisdictions with different levels of regulatory restrictions on biotechnological developments, Gouvea et al. (2012) found research productivity was not enhanced through the "absence of structural or ethical impediments," but rather by the presence of transparent guidelines (562):

While one might argue that an environment that lacks all constraints (including ethical barriers) may allow for rapid product development through the provision of an environment where anything goes, the opposite is found to be the case. It is likely that the presence of clearly defined rules, higher levels of disclosure, greater levels of trust, and reduced costs (associated to lower levels of corruption) results in the appropriate set of ethical rules and guidelines providing the best outcomes for development of commercial products (562–563).

This study drew comparisons between the European model applying the PP to advances in nanotechnology and the U.S. wait-and-see approach, which treats these new products the same as their more traditional counterparts (554). Applying Hester et al.’s (2015) logic to this situation, the European approach imposes restrictions and a requirement to avoid potential harms, while the United States takes a more conventional legal approach that would merely award damages if a harm is sustained.

Hansson (2020) claims "no other safety principle has been so vehemently contested" as the PP, with many arguing it "stifles innovation by imposing unreasonable demands on the safety of new technologies" (245). However, applying precautionary measures to novel situations with uncertain risks proved essential in the global response to the COVID-19 crisis, where preventive actions had to be put in place while scientific data were still being collected (Guida 2021). For Hansson (2020), and many others, what the PP lacks is a method of adjusting to new knowledge as it becomes available. Wareham and Nardini (2015) similarly note that in its strongest formulation, the PP might prevent entire research projects from going ahead due to any risk, however small, of a significant harm. Mittelstadt, Stahl, and Fairweather (2015) also note that if the purpose of the PP is to avoid harm, and preventing scientific progress and the development of new technologies can itself be considered a harm, the result is a precautionary paradox in which the principle "would instruct us to refrain from implementing itself" (1034). In all the above cases, the authors advocate instead for an iterative process that allows progressive steps of experimentation, proportional to their associated risk, with regulations adapting as scientific uncertainty "gives way to new scientific knowledge" (Hansson 2020, 253). This relates to the next approach to be covered here: design ethics.

Design Ethics Versus the Collingridge Dilemma

Design ethics takes into account the social and ethical dimensions of the context in which a product is designed and will be used. One of the more common methods is "value-sensitive design," which tries to identify relevant human values during technology research and development phases to ensure they are "promoted and respected by the design" (Umbrello and van de Poel 2021, 283). These might include respect for privacy, environmental sustainability, accountability, and many other values at stake in the interaction between people, technology, and the environment (Friedman et al. 2021). When it comes to new technologies with unknown risks, this model would support adopting preventive measures to mitigate harm but, as Bouchaut and Asveld (2021) claim, could also allow for "controlled learning" experiments, in which potential risks can be explored step by step as the technology develops. These authors refer to this process as responsible learning, noting it would also require a degree of regulatory flexibility that the PP does not currently support. In this way, it is a method of proceeding in the face of uncertainty, reflecting on the ethical and safety concerns of each new stage in development before a technology is fully realized. One such model is the "Safe-by-Design" approach, which these authors note is "associated with learning processes that aim for designing specifically for the notion of safety by iteratively integrating knowledge about the adverse effects of materials" (Bouchaut and Asveld 2021). Other models include "participative design," where the views and values of end-users are sought during the design phase so that designers can incorporate knowledge of the consequences of new technologies on those impacted by them (Mumford 1993). This approach was originally formulated under the acronym ETHICS, for Effective Technical and Human Implementation of Computer-based Systems, to consider the impact of new computing technologies on workers' experiences (Mumford 1993). At its core, the purpose of participative design is to make new technologies fit-for-purpose and people-friendly.

While intuitively a system that progressively learns about risks as they manifest and adapts accordingly may seem superior to one that might ban an emerging technology from the outset due to unknown risks, one problem with this iterative approach is what has been dubbed the Collingridge dilemma. Mittelstadt, Stahl, and Fairweather (2015) explain this dilemma as follows:

it is impossible to know with certainty the consequences of an emerging technology at an early stage when it would be comparatively simple to change the technology’s trajectory. Once the technology is more established and it becomes clear what its social and ethical consequences are going to be, it becomes increasingly difficult to affect its outcomes and social context. (1028)

So, while a "Safe-by-Design" approach might pivot easily if issues are discovered early in the process, once the technology has progressed beyond a certain stage it is too late to intervene. Preventing the creation and release of potentially dangerous technologies therefore requires a more speculative approach, as seen in the next three models to be discussed.

Technology Assessment (TA) to Ethical Technology Assessment (eTA)

Alongside the PP, technology assessment (TA) is one of the best-known methods of dealing with uncertainty (Mittelstadt, Stahl, and Fairweather 2015). Grunwald (2020) notes that because TA is not focused on technologies that already exist, it is a method that "creates and assesses prospective knowledge about the future consequences of technology" by evaluating and scrutinizing "ideas, designs, plans, or visions for future technology" (Grunwald 2020, 97; Grunwald 2019). He further notes that participatory versions of this speculative evaluative process are less about imagining technologies per se and more about envisaging future technologies as situated in a specific "societal environment" (Grunwald 2019). The inputs for analysis are thus "models, narratives, roadmaps, visions, scenarios, prototypes," and so on (Grunwald 2020, 99). Mittelstadt, Stahl, and Fairweather (2015) note traditional TA arose in response to "undesirable or unintentional side effects of emerging technologies," with a primary focus on the impact of technology on "the environment, industry and society" (1035). Its goal is to foster responsible regulation that maximizes benefit and prevents harm caused by advances in technology.

While TA has been highly influential, particularly for establishing environmental impact assessments and other forms of risk analysis, Palm and Hansson (2006) claim the ethical and social dimensions of emerging technologies have often been neglected (546). They propose an ethical technology assessment (eTA) approach that adjusts development in line with ethical concerns and guides decision-making (551). Their ethical "checklist" contains nine items that, if implicated in an emerging technology, indicate an eTA should be conducted; examples include cases where the proposed technology might be expected to affect "privacy" or "gender, minorities and justice" (551). Brey (2012) states the purpose of eTA is to "provide indicators of negative ethical implications at an early stage of technological development … by confronting projected features of the technology or projected social consequences with ethical concepts and principles" (3–4). Palm and Hansson (2006) note that current obstacles to this process include fear of the unknown, assumptions about the self-regulation of technological development, and citizens feeling ill-equipped to engage in discussions of increasingly complex technologies (547). They describe the result in terms of W.F. Ogburn's concept of "cultural lag," whereby technology, as an instance of material culture, is released into society before "non-material culture has stabilized its response to it" (547). In other words, social, ethical, legal, religious, and cultural systems have not yet grappled with the implications of technologies before they are unleashed on society. Some of these challenges are met by scenario-based approaches to emerging technologies, as demonstrated below.
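Operationally, the checklist functions as a screening trigger: any implicated item indicates that a full eTA should be conducted. Below is a minimal sketch of such a trigger; only the five checklist items quoted in this editorial are listed, and the function name and encoding are ours, not Palm and Hansson's.

```python
# A minimal sketch of Palm and Hansson's (2006) eTA checklist as a screening
# trigger. Only the items quoted in this editorial are listed (the full
# checklist has nine); the function name and encoding are illustrative.

ETA_CHECKLIST = frozenset({
    "dissemination and use of information",
    "privacy",
    "human reproduction",
    "gender, minorities and justice",
    "impact on human values",
    # ... four further checklist items omitted here
})

def eta_indicated(implicated_items: set[str]) -> bool:
    """An eTA is indicated if the technology implicates any checklist item."""
    return bool(ETA_CHECKLIST & implicated_items)

# Example: artificial womb technology implicates "human reproduction",
# so an eTA should be conducted.
assert eta_indicated({"human reproduction"})
```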

Scenario Approaches

While many scenario-based approaches to emerging technology ethics overlap methodologically with eTA, some of their features are worth discussing separately. Brey's (2012) account of the techno-ethical scenario approach describes it as ethical assessment that helps policymakers "anticipate ethical controversies regarding emerging technologies" through analyzing hypothetical scenarios (4). He notes a unique feature of the method is that it tries to predict not only what moral issues will arise with the advent of new technologies but also how those very technologies will impact morality and "the way we interpret moral values" (4). Boenink, Swierstra, and Stemerding's (2010) framework breaks this process into three distinct steps: 1) "Sketching the moral landscape," which provides a baseline narrative against which the introduction of the new technology can be compared; 2) "Generating potential moral controversies" using "New and Emerging Science and Technology" (NEST) ethics, with the aim of predicting realistic ethical arguments and issues regarding emerging technologies; and 3) "Constructing closure by judging plausibility of resolutions," where arguments and counterarguments are considered in the light of the most likely resolution to the issues raised in step 2 (11–13). The process can draw analogies to existing or historical examples of technological change, and the ethical consequences involved, or construct specific controversies and "alternative futures" (14). The most important step to consider here is the second, of which Brey (2012) states:

The NEST-ethics approach performs three tasks. First, it identifies promises and expectations concerning a new technology. Second, it identifies critical objections that may be raised against these promises, for example regarding efficiency and effectiveness, as well as many conventionally ethical objections, regarding rights, harms and obligations, just distribution, the good life, and others. Third, it identifies chains of arguments and counter-arguments regarding the positive and negative aspects of the technology, which can be used to anticipate how the moral debate on the new technology may develop. (4–5)

In this way, scenario analysis can consider how technology and ethics change in tandem when new technologies emerge.

Socio-technical scenario approaches are similar to the techno-ethical approach outlined above; however, according to Schick (2019), they owe their origins to utopian studies and traditional philosophical thought experimentation (261). Claiming they are now used “as a form of moral foresight; an attempt to keep the ethical discourse ahead of the technological curve,” Schick (2019) suggests the goal of socio-technical speculation is to “guide society toward morally sound decisions regarding emerging technologies” (261). Thus, the scenarios being discussed are deeply embedded in hypothetical future societies. The Collingridge dilemma is also noted as a potential pitfall for this method, which the final technique covered here tries to avoid through engaging anticipatory models of ethics and governance.

Anticipatory Technology Ethics/Governance

It is well recognized that governing emerging technologies is difficult due to uncertainty regarding their impact on human health, the environment, and society. Hester et al. (2015) suggest one method of addressing this is to develop regulatory systems that rely on "anticipatory ethics and governance, future-oriented responsibility, upstream public engagement and theories of justice" (124). These would be forward-looking and flexible, allowing cautious development of technology instead of enforcing bans or merely being used to impute responsibility for harm after the fact, as is often seen in current legal systems. Noting that existing ethico-legal approaches "tend to be reactive and static," these authors promote a "future-care oriented responsible innovation" that protects public trust in science and technology (125, 131). Brey (2012) notes that most anticipatory ethics frameworks apply one of two approaches: restricting discussion to "generic qualities" of technology and their likely ethical ramifications, or speculating on possible future devices and their impact on society (2–3). The latter relies on future studies and forecasting techniques to allow ethical reflection on technologies that are yet to materialize. When discussing the European Commission's Ethical Issues of Emerging ICT Applications (ETICA) approach, Brey (2012) claims multiple such techniques were used in the aggregate in an attempt to circumvent any individual weaknesses in methodology (5). However, his own theory of "anticipatory technology ethics" (ATE) tries to overcome the limited capability of forecasting by separating ethical evaluation into three levels: "the technology, artifact and application level" (7). Technologies are considered collections of techniques with a common function, so the technology level of ATE focuses on what the technology is and the general ethical concerns arising from this. At the artifact level, the "functional artifacts, systems and procedures" developed from the technology of interest are ethically evaluated (8). Brey (2012) provides the example of nuclear technology yielding such artifacts as nuclear reactors, x-ray imaging, and bombs. The artifact level of ATE thus considers what a technology is likely to bring into being and the relevant consequences of this. The application level then focuses on the use and purpose of artifacts in practice. The latter two levels of ATE are included in Brey's "responsibility assignment stage," where moral actors are assigned responsibility for the impact of emerging technologies (12).
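Brey's three levels can be read as a simple hierarchy running from general to specific. The sketch below models that structure as a data type; the class and field names are ours, and the example ethical concerns are placeholders (only the nuclear reactor example follows Brey's own illustration).

```python
# A minimal sketch of Brey's (2012) three-level ATE structure. Class and
# field names are illustrative; the listed concerns are placeholders.

from dataclasses import dataclass, field
from enum import Enum

class ATELevel(Enum):
    TECHNOLOGY = "technology"    # what the technology is, in general
    ARTIFACT = "artifact"        # functional artifacts, systems, procedures
    APPLICATION = "application"  # use and purpose of artifacts in practice

@dataclass
class ATEEvaluation:
    """Ethical evaluation of one subject at one of Brey's three levels."""
    level: ATELevel
    subject: str
    ethical_concerns: list[str] = field(default_factory=list)

# Example chain for nuclear technology (concerns are placeholders):
evaluations = [
    ATEEvaluation(ATELevel.TECHNOLOGY, "nuclear technology"),
    ATEEvaluation(ATELevel.ARTIFACT, "nuclear reactor",
                  ["reactor safety", "waste disposal"]),
    ATEEvaluation(ATELevel.APPLICATION, "civilian power generation",
                  ["siting decisions", "accident responsibility"]),
]
```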

Other variations on ATE can be found in Nestor and Wilson's (2020) anticipatory practical ethics methodology, which incorporates stakeholder analysis and intuitionism, allowing ethical consideration not just of future technologies but also of future stakeholders, for example, children produced using CRISPR technology (134). These authors distinguish between anticipatory ethics, where ethical theories are applied to novel situations impacting various stakeholders with the goal of providing policy recommendations, and anticipatory governance, which develops policies in line with predictions regarding human behaviour. They claim the two can be combined to produce "future-oriented legal analysis based on theories of justice for rapidly emerging technologies" (134). They suggest such an analysis should include 1) specific ethical principles, including common-sense intuitions; 2) "intermediate" principles, such as harm minimization, utility, and justice; 3) normative ethical theories, such as consequentialism, deontology, and social contract theory; 4) relevant professional ethics codes, e.g. medical ethics; and 5) "the possibility of emergent ethical principles arising due to the uniqueness and rapid pace of development of new technologies" (137). For Nestor and Wilson (2020), these are all considered legitimate sources for ethical decision-making and can be used in conjunction with stakeholder analysis to produce ethical guidance and policy recommendations (139).

Anticipatory ethical systems are also subject to criticism, including the charge that, because they speculate on future technologies, they may waste time analyzing things that never come to pass. Schick (2019) also claims it is often unclear what constitutes success in anticipatory ethics, as the goal of settling all ethical concerns and establishing appropriate regulatory systems before a technology is released may be unrealistic (265). Further, Schick (2019) claims that in their attempt to pre-empt future applications of new technologies, speculative ethical models may miss crucial stages in the process, as demonstrated by the example of genetic engineering:

the mainstream bioethics discourse on human genetic engineering (i.e. primarily in the US and the UK) was not indexed to the current state of science or slightly ahead of it, but instead took up questions entangled with more distant anticipated future developments. Keeping the discourse well ahead of the curve of emerging biomedical technologies probably generated interesting discussions, but it may also have contributed to the weakness of the consensus-based norms that were thought to be keeping human germline genetic engineering in check. In effect, the forward-looking discourse subjected them to what might be called “anticipatory obsolescence” by asking whether to maintain a distinction between somatic and germline therapies—long before there was a technique up to the task of altering the genome of a human embryo with sufficient efficacy to begin considering preclinical human embryonic interventions. (264)

Once human embryonic gene editing became possible, Schick (2019) claims, "the newly urgent question of whether germline interventions were ethically permissible was no longer where the discussion was centered," as speculations regarding human enhancement had begun to dominate bioethical debate on the subject (264). Schick (2019) continues: "[i]n retrospect, it seems almost inevitable that once germline engineering was accomplished, the 'old' question of whether it should be undertaken at all would suddenly become obsolete" (264). Thus, there is a risk that by focusing too much on future applications, ethicists will miss the opportunity to intervene in foundational stages of technological revolution.

While anticipatory ethics and governance systems are becoming a popular way of dealing with the uncertain risks of emerging technologies, Mittelstadt, Stahl, and Fairweather (2015) claim such prophetic decision-making aids "cannot be given the same epistemic status as facts and norms concerning existing phenomena" (1044). They note some technologies are so novel that even the most basic risk data are unavailable when decisions need to be made about their development. This applies to several of the emerging technologies under discussion in this symposium issue.

The Ethical, Legal and Social Implications of Emerging Technologies (ELSIET) Symposium

The Ethical, Legal and Social Implications of Emerging Technologies (ELSIET) research group was established with support from Deakin University’s Science and Society Network in 2018. Over the next two years the group recruited forty members from eighteen academic institutions in six different countries and hosted three seminars focused on the ethics of emerging technologies. This special issue highlights some of the work arising from these meetings. The purpose of the group is to foster collaborations among specialists working in emerging technologies, including ethicists, scientists, lawyers, and artists. The group went on hiatus at the beginning of the COVID-19 pandemic but has resumed regular activities in 2022 under the auspices of the Iverson Health Innovation Research Institute, Swinburne University of Technology. In 2019, ELSIET was awarded a Brocher Foundation symposium grant in conjunction with members of the University of Melbourne’s School of Population and Global Health, Western Australia’s Department of Health, and the Gen(e)quality Network. Originally planned for 2020, the symposium was rescheduled to May 2022, with an online version occurring in May 2021.1

The papers included in this symposium issue address emerging technologies and situations that would trigger Palm and Hansson’s (2006) ethical checklist, as they pertain to “dissemination and use of information” and “privacy,” particularly for genetic information, “human reproduction” in the form of artificial womb technology, and “impact on human values,” with particular focus on the potential commodification of human DNA. Each paper also engages with one or more of the practices outlined above for ethically evaluating emerging technologies.

The collection begins with Wise and Borry's (2022) discussion of the ethical issues surrounding the use of CRISPR-based technologies for eliminating Anopheles gambiae mosquitoes, the dominant vector for malaria throughout sub-Saharan Africa. These authors consider ethical debates regarding whether the species possesses any intrinsic worth, moral status, or instrumental value in terms of increasing biodiversity. The significance of the CRISPR-based technologies under debate relates to the new-found ability to modify the genes of, and eventually eradicate, this entire mosquito species, rather than merely eliminating some of its members. The competing demands of minimizing human suffering and avoiding unintended side effects to natural ecosystems are recognized throughout. This paper considers the utility of the PP in addressing these ethical issues, as well as the environmental and risk assessment elements intrinsic to TA.

The second paper, by Ferreira (2022), considers the ethical implications of artificial womb technologies through the lens of utopian fiction, namely Helen Sedgwick’s The Growing Season (2017) and Rebecca Ann Smith’s Baby X (2016). Viewed as feminist rewritings of Aldous Huxley’s dystopian classic Brave New World (1932), these texts consider the emancipatory potential of ectogenesis for women. For Palm and Hansson (2006), advances in reproductive technologies represent the site of some of “the most blatant clashes” between “social norms and moral values” in society, influencing perceptions of family and human reproduction (553). The use of utopian fiction to guide ethical evaluation aligns with various elements of the socio-technical scenario approach to emerging technologies.

The third paper, by Koplin, Skeggs, and Gyngell (2022), similarly falls under one of Palm and Hansson's (2006) key criteria for eTA, as these authors propose allowing a commercial market for the sale and purchase of human DNA. For Palm and Hansson (2006), such a proposal would require ethical evaluation to prevent the "negative consequences of commodification" leading to "reduced respect for human personhood" (554–555). Koplin, Skeggs, and Gyngell (2022) anticipate these objections when outlining how an ethical market in human DNA might be created, considering related concerns regarding exploitation and undue inducement. This analysis includes various stages of the techno-ethical scenario approach, particularly the sketching of the current moral landscape of gene banking and the exploration of arguments and counterarguments to the hypotheticals presented.

The fourth paper, by Delgado et al. (2022), provides a scoping review of academic literature focused on biases in artificial intelligence algorithms for predicting COVID-19 risk, triaging, and contact tracing. These authors identify issues with data collection, management, and privacy, as well as a lack of regulation governing the use of these programmes, as key practical and ethical concerns. With their focus on the impacts of these biases and the social determinants of health on various reported health disparities, these authors highlight a role for Brey's (2012) ATE framework, which considers the social application of emerging technologies, and Hester et al.'s (2015) anticipatory ethics and governance.

The final paper in the collection is Benston's (2022) protocol for developing policy recommendations regarding heritable gene editing. In this protocol, potential benefits and harms are identified and evaluated in a way that guides the proposed study design. The focus on anticipatory ethics and governance incorporates several elements present in Nestor and Wilson's (2020) anticipatory practical ethics methodology, particularly Benston's focus on detailed stakeholder analysis.

Technological developments involve uncertainty and carry with them the potential for both significant benefit and harm. While we cannot know the future, various methods for ethically evaluating and regulating emerging technologies have arisen that aim to promote discovery while protecting safety. The more revolutionary a new technology is, the greater its potential impact on society and thus the ethical issues it might generate. The articles in this symposium issue all take a proactive, rather than reactive, approach to discussing such issues in advance of these technologies being fully realized in society.

Funding source

Deakin University Science and Society Network seed grant

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Beauchamp, T.L., and J.F. Childress. 2001. Principles of biomedical ethics, 5th ed. Oxford: Oxford University Press.
  2. Benston, S. 2022. Walking a fine germline: Synthesizing public opinion and legal precedent to develop policy recommendations for heritable gene-editing. Journal of Bioethical Inquiry 19(3). doi:10.1007/s11673-022-10186-8
  3. Boenink, M., T. Swierstra, and D. Stemerding. 2010. Anticipating the interaction between technology and morality: A scenario study of experimenting with humans in bionanotechnology. Studies in Ethics, Law, and Technology 4(2): 1–41. doi:10.2202/1941-6008.1098
  4. Bouchaut, B., and L. Asveld. 2021. Responsible learning about risks arising from emerging biotechnologies. Science and Engineering Ethics 27(2): 22. doi:10.1007/s11948-021-00300-1
  5. Brey, P.A.E. 2012. Anticipatory ethics for emerging technologies. NanoEthics 6(1): 1–13. doi:10.1007/s11569-012-0141-7
  6. Delgado et al. 2022. Bias in algorithms of AI systems developed for COVID-19: A scoping review. Journal of Bioethical Inquiry 19(3).
  7. Edwards, J. 1991. New conceptions: Biosocial innovations and the family. Journal of Marriage and Family 53(2): 349–360. doi:10.2307/352904
  8. Ferreira, A. 2022. The (un)ethical womb: The promises and perils of artificial gestation. Journal of Bioethical Inquiry 19(3). doi:10.1007/s11673-022-10184-w
  9. Friedman, B., M. Harbers, D. Hendry, J. van den Hoven, C. Jonker, and N. Logler. 2021. Eight grand challenges for value sensitive design from the 2016 Lorentz workshop. Ethics and Information Technology 23(1): 5–16. doi:10.1007/s10676-021-09586-y
  10. Gouvea, R., J. Linton, M. Montoya, and S. Walsh. 2012. Emerging technologies and ethics: A race-to-the-bottom or the top? Journal of Business Ethics 109(4): 553–567. doi:10.1007/s10551-012-1430-3
  11. Grunwald, A. 2020. The objects of technology assessment. Hermeneutic extension of consequentialist reasoning. Journal of Responsible Innovation 7(1): 96–112. doi:10.1080/23299460.2019.1647086
  12. Grunwald, A. 2019. Technology assessment in practice and theory. Abingdon, Oxon: Routledge.
  13. Guida, A. 2021. The precautionary principle and genetically modified organisms: A bone of contention between European institutions and member states. Journal of Law and the Biosciences 8(1): lsab012. doi:10.1093/jlb/lsab012
  14. Hansson, S.O. 2020. How extreme is the precautionary principle? NanoEthics 14(3): 245–257. doi:10.1007/s11569-020-00373-5
  15. Hester, K., M. Mullins, F. Murphy, and S. Tofail. 2015. Anticipatory ethics and governance (AEG): Towards a future care orientation around nanotechnology. NanoEthics 9(2): 123–136. doi:10.1007/s11569-015-0229-y
  16. Koplin, J., J. Skeggs, and C. Gyngell. 2022. Ethics of buying DNA. Journal of Bioethical Inquiry 19(3).
  17. Mittelstadt, B.D., B.C. Stahl, and N.B. Fairweather. 2015. How to shape a better future? Epistemic difficulties for ethical assessment and anticipatory governance of emerging technologies. Ethical Theory and Moral Practice 18: 1027–1047. doi:10.1007/s10677-015-9582-8
  18. Moor, J.H. 2005. Why we need better ethics for emerging technologies. Ethics and Information Technology 7(3): 111–119. doi:10.1007/s10676-006-0008-0
  19. Mumford, E. 1993. The participation of users in systems design: An account of the origin, evolution, and use of the ETHICS method. Florida: CRC Press.
  20. Nestor, M.W., and R.L. Wilson. 2020. Beyond Mendelian genetics: Anticipatory biomedical ethics and policy implications for the use of CRISPR together with gene drive in humans. Journal of Bioethical Inquiry 17(1): 133–144. doi:10.1007/s11673-019-09957-7
  21. Palm, E., and S.O. Hansson. 2006. The case for ethical technology assessment (eTA). Technological Forecasting and Social Change 73(5): 543–558. doi:10.1016/j.techfore.2005.06.002
  22. Schick, A. 2019. What counts as "success" in speculative and anticipatory ethics? Lessons from the advent of germline gene editing. NanoEthics 13(3): 261–267. doi:10.1007/s11569-019-00350-7
  23. Umbrello, S., and I. van de Poel. 2021. Mapping value sensitive design onto AI for social good principles. AI and Ethics 1(3): 283–296. doi:10.1007/s43681-021-00038-3
  24. Wareham, C., and C. Nardini. 2015. Policy on synthetic biology: Deliberation, probability, and the precautionary paradox. Bioethics 29(2): 118–125. doi:10.1111/bioe.12068
  25. Wise, I.J., and P. Borry. 2022. An ethical overview of the CRISPR-based elimination of Anopheles gambiae to combat malaria. Journal of Bioethical Inquiry 19(3). doi:10.1007/s11673-022-10172-0
  26. Wolff, J. 2014. The precautionary attitude: Asking preliminary questions. Hastings Center Report 44(S5): S27–S28. doi:10.1002/hast.393
