Table 3.
Key ethical approaches raised in the present scoping review and their descriptions in terms of AI research ethics.
| Key ethical approaches | Description in terms of AI research ethics |
| --- | --- |
| Kantian-inspired model | The Kantian approach demands that researchers act responsibly during research (Jacobson et al., 2020). The same obligation extends to ensuring responsible AI. Ex: AI developers must ensure that their systems are adequate and will not cause harm to society. Researchers must use AI systems responsibly during their projects. |
| Utilitarianism | The utilitarian approach focuses on consequences and the best outcome for the most people. It is invoked in the dilemma of using machine learning algorithms to advance science while maintaining participants' privacy (Jacobson et al., 2020). Ex: AI systems should prioritize the wellbeing of participants and other individuals over their use for scientific progress. |
| Principlism | Principlism is an approach that emphasizes principles such as autonomy, beneficence, non-maleficence, and justice, which are invoked in issues raised while developing and using machine learning (Jacobson et al., 2020). |
| Autonomy | Participants' autonomy implies that they can consent of their own free will when participating in a research project using AI (Grote, 2021). Ex: Many concerns are raised about the eventuality that AI becomes fully autonomous, taking away our control over it (Aicardi et al., 2020). Some even argue that it should be granted moral autonomy (Farisco et al., 2020). For now, however, AI mainly relies on humans, whether users, employers, or programmers, which raises the question of responsibility (Chassang et al., 2021). While it may not be autonomous, its purpose is to assist humans, which could negatively impact our own autonomy (McCradden et al., 2020c). |
| Beneficence | AI is more efficient than humans at specific tasks, bringing better results for those involved (Grote, 2021). Ex: One of AI's benefits is that it can generate more precise and accurate results (Ienca and Ignatiadis, 2020). AI can also search data more efficiently and make predictions (Andreotta et al., 2021; Grote, 2021). Furthermore, robots can assist humans by relieving them of specific tasks (Battistuzzi et al., 2021). |
| Justice | AI should be used in a way that does not put people at a disadvantage (Nebeker et al., 2019). Ex: Data bias can result from the under-representation of minority groups, which may lead to algorithmic discrimination that disadvantages the groups in question in receiving proper care (Ienca and Ignatiadis, 2020; Jacobson et al., 2020; Grote, 2021; Li et al., 2021). |
| Non-maleficence | AI must distinguish right from wrong to ensure non-maleficence (Farisco et al., 2020). Ex: Robots should not cause harm (Stahl and Coeckelbergh, 2016). |
| Precautionary principle | The precautionary principle may serve as a guiding framework to encourage responsible AI research and development, prioritizing the protection of individuals, society, and the environment from potential negative impacts of AI systems (Chassang et al., 2021). Ex: AI developers should consider societal needs and ensure that potential risks are addressed from the beginning of product conception. Governments should put regulations in place to prevent future harm from AI. |