Abstract
AI/ML increasingly impacts the ability of humans to have a good life. Various sets of indicators exist to measure well-being/the ability to have a good life, and students play an important role in AI/ML discussions. The purpose of our study, an online survey, was to learn about the perspectives of undergraduate STEM students on the impact of AI/ML on well-being/the ability to have a good life. Our study revealed that many of the abilities participants perceived to be needed for a good life were part of the well-being/ability to have a good life indicator lists we gave to participants. Participants perceived AI/ML to have, now and in the future, the most positive impact on the ability to have a good life for disabled people, elderly people, and individuals with a high income, and the least positive impact for people of low income and countries of the global South. Regarding the indicators of well-being and the ability to have a good life given to participants, we found a significant techno-positive sentiment. For 28 of the indicators, at least 30% of respondents selected the purely positive box, while no indicator drew that level of support for the purely negative box. For 52 indicators, the purely negative percentage was below 10% (not counting those at 0%), and for 10 indicators no respondent selected purely negative. Our findings suggest that our questions might be valuable tools to develop an inventory of STEM and other students’ perspectives on the implications of AI/ML on the ability to have a good life.
Keywords: Good life, Ability, STEM, Undergraduate students, Artificial intelligence, Machine learning
Introduction
The ability to have a good life depends on many social parameters such as employment (Crow and Payne 1992), social status (Gehl and Ross 2013), geographical location (suburbs) (Greenbie 1969), food security (Neuwelt-Kearns et al. 2021), social norms (Hansen 2015), physical health and socioeconomic status (Xu et al. 2020), caring (Colombo 2014), sustainable living (Hansen 2015), respecting (Malti et al. 2020), being respected (Steckermeier and Delhey 2019), and power over one’s own life and experience of discrimination (Holmström et al. 2017). The UN Convention on the Rights of Persons with Disabilities serves as a checklist to indicate actions needed to enable more opportunities for a good life for disabled people (Johnson 2013; Kakoullis and Johnson 2020), and the same is said for children and the UN Convention on the Rights of the Child (Brusdal and Frønes 2014; Kutsar et al. 2019). What is seen as a good life has changed over time (Strachan 2010) and many views on what entails a good life exist (Beckman 2018). Many measures with various sets of social indicators exist that can be seen as measures of the ability to have a good life, such as the Better Life Index, the Canadian Index of Wellbeing, the World Health Organization initiated Community Based Rehabilitation (CBR) Matrix, the Social Determinants of Health (SDH), and others (Wolbring 2021). It is recognized that Artificial Intelligence (AI) and Machine Learning (ML) are impacting many facets of people’s ability to have a good life (Canadian Institute for Advanced Research (CIFAR) 2018; European Group on Ethics in Science and New Technologies 2018; Floridi et al. 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019). Increasing students’ social impact literacy is one goal of AI/ML education (Chiu et al. 2021; Furey and Martin 2019; Garrett et al. 2020; Touretzky et al. 2019) and STEM education (Josa and Aguado 2021; Kelley and Knowles 2016).
To connect with the world of students, we phrased our inquiry into the social implications of AI/ML in the language of the ability to have a good life to gain knowledge on how STEM students perceive the societal impact of AI. We asked three questions: (1) What abilities do you see as important to have the ability to have a good life? (2) What is the impact of AI/ML on the ability to have a good life for different social groups? (3) What is the impact of AI/ML on all the indicators from the four well-being/ability to have a good life composite measures: (a) The Better Life Index (OECD 2020), (b) The Canadian Index of Wellbeing (Canadian Index of Wellbeing Organization 2019), (c) The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix (World Health Organization 2011) and (d) The Social Determinants of Health (SDH) (Raphael et al. 2020; World Health Organization 2020).
AI/ML and the ability to have a good life
It is argued that “AI impacts what we can consider the good life” (Vesnic-Alujevic et al. 2020, p. 8) and how we achieve “goals of wellbeing” and the “overall common good” (Vesnic-Alujevic et al. 2020, p. 8), that the ability to have a good life “must include an explicit conception of how to live well with technologies”, and that the ‘good life’ means “a human future worth seeking, choosing, building, and enjoying” (Vallor 2016, p. 12). It is argued that an ethics of AI concerning “the question of the good life and human and societal flourishing” (Coeckelbergh 2019, p. 33) is needed and that technological advancement should engage with “questions about the good life and discussions of values” (Buhmann and Fieseler 2021, p. 4). The report Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research (Whittlestone et al. 2019) lists as essential many research topics that can be seen to impact the ability to have a good life. Justice, solidarity, equity, and equality are concepts mentioned in many AI governance documents that influence the ability to have a good life (Lillywhite and Wolbring 2020). AI/ML impact various forms of well-being that reflect facets of the ability to have a good life, such as emotional well-being (Borjas and Freeman 2019; Fratczak et al. 2019; Khosla and Chu 2013; West 2018), sense of well-being and identity (Abeles 2016), economic well-being (Borjas and Freeman 2019; Fratczak et al. 2019; West 2018), the general well-being of a nation's economy (Press 1982; Ullrich et al. 2016), well-being of society (Reddy 2006), and societal well-being (Aluaş and Bolboacă 2019; National Academies of Sciences 2018). The report Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems focuses on how well-being, including “societal and environmental well-being”, can be improved (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019, p. 1) and suggests various measures.
AI education
Most education on AI focuses on how AI works, hands-on education, and how to increase AI technical literacy (Chiu et al. 2021; Heintz 2021; Steinbauer et al. 2021). However, AI education, including AI ethics education, identifies goals to educate on the societal impact of AI (Chiu et al. 2021; Furey and Martin 2019; Garrett et al. 2020; Long and Magerko 2020; Touretzky et al. 2019), covering “bias, automation and robots, law and policy, consequences of algorithms, philosophy/morality, privacy, future of AI, and history of AI” (Garrett et al. 2020, p. 274), and diversity and inclusion (Chiu et al. 2021). It is argued that “in order to address the social impact of technical systems, including AI, we need to revisit the way we think about the norms of AI ethics education, and in particular address the tendency towards an ‘exclusionary’ pedagogy, that further siloes” (Raji et al. 2021, p. 515). Phrasing the impact of AI/ML within the language of the ability to have a good life may resonate with students and help AI education with its goal to educate on the societal impact of AI (Touretzky et al. 2019), including AI/ML and the social good (Hager et al. 2019).
Engineering education
Social implications are seen as being part of engineering education (Josa and Aguado 2021) and STEM education (Kelley and Knowles 2016), which is seen to have a social impact (Ramirez Velazquez 2021). The National Science Foundation (USA) acknowledges that “scientific merit is intertwined with broader impacts” and that “educational reforms must now center inclusion and equity” (Elgin et al. 2021, p. 7). It is argued that the COVID-19 pandemic showed that academia has a responsibility to decrease structural inequities in education, whereby one strategy could be to engage STEM students in research (Elgin et al. 2021). At the same time, it is noted that problems exist in STEM education (Garibay 2015; Josa and Aguado 2021). It is recognized that having a positive social impact entices students to enroll in STEM (Bennett et al. 2021) and that a social awareness curriculum has an impact on the engineering identity formation of high school girls (Burks et al. 2019, p. 1). Various studies investigated the social responsibility of engineering students, seeing it as important but reporting problems (Børsen et al. 2021; Canney and Bielefeldt 2015; Schiff et al. 2021; Tomblin and Mogul 2020). Literature exists covering the competencies students are to obtain from engineering degrees. Authors covering STEM in secondary education argue that how to impart OECD’s twenty-first century skills, which include ethical and social impact under the header of communication skills, “has become one of the most important questions awaiting for an answer all over the world” (Korkmaz et al. 2021, p. 424). Phrasing the impact of AI/ML within the language of the ability to have a good life could resonate with engineering students to engage with the social aspects of AI/ML. Indeed, Plato’s 12 concepts of the ‘good life’ are suggested as a lens to think about the goals of engineers (Rodriguez-Nikl 2021).
Methods
Design
We performed a mixed-methods approach at the technique level (Sandelowski 2000). We used a directed content analysis of the qualitative data and frequency count and percentage measures of the descriptive quantitative data to analyze the answers of STEM students from one university to three questions: (1) What abilities do you see as important to have the ability to have a good life? (2) What is the impact of AI/ML on the ability to have a good life for different social groups? (3) What is the impact of AI/ML on all the indicators from: (a) The Better Life Index (OECD 2020), (b) The Canadian Index of Wellbeing (Canadian Index of Wellbeing Organization 2019), (c) The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix (World Health Organization 2011) and (d) The Social Determinants of Health (SDH) (Raphael et al. 2020; World Health Organization 2020).
We chose an online survey to reach as many student participants as possible and to give students the flexibility to participate in this study at their convenience. The survey received ethics approval from the University of Calgary (REB17-0785). The online survey was set up in such a way that we could not identify the participants or their IP addresses. The consent form alerted participants that the US government could access the data, as SurveyMonkey falls under US jurisdiction. Participants could stop the survey at any time and were free to choose which questions they wanted to answer.
Participants
Students were chosen as participants because student education is an important aspect of STEM degrees as well as of AI/ML education. The STEM students we accessed were chosen for convenience purposes. The survey was distributed to four cohorts of individuals from four different University of Calgary STEM-related groups engaged in STEM and engineering extracurricular activities. Our inclusion criterion was that participants had to be currently enrolled in an undergraduate or graduate studies program at a Canadian university. The survey was designed to take students between 30 min and 1 h.
Survey question development
The full survey consisted of n = 23 questions, including demographic questions, simple yes or no questions with the option for comments (questions 9, 13, 15, 16, 20–23), and open-ended questions (questions 11–12) to obtain more detailed views of participants. It was developed by both authors, keeping in mind the focus of the study and the literature on AI/ML governance and ethics, and a group of students gave feedback on the draft of the survey. We present here the results of a subset of questions, namely: (a) demographics (questions 2–6); (b) participants’ views on the abilities needed for a good life (questions 11/12); (c) participants’ familiarity with AI/ML (question 14); (d) participants’ views on the impact of AI/ML on various social groups (questions 15/16); (e) participants’ views on the impact of AI/ML on indicators of the ability to have a good life from the measures: Social Determinants of Health, Better Life Index, Canadian Index of Wellbeing, and Community Based Rehabilitation Matrix (questions 20–23).
Data collection and analysis
We collected data through an online survey delivered using the SurveyMonkey platform. We sent the link to the online survey to the students through personal contacts after ethics approval was received. The survey data were collected between March and April 2021. Quantitative data were extracted and analyzed using SurveyMonkey’s intrinsic frequency distribution analysis capability. The qualitative data obtained from the comment boxes that accompanied certain questions and from the open-ended questions were exported as one PDF file into ATLASti-9® software for analysis (Braun and Clarke 2013; Hsieh and Shannon 2005), and we performed a directed content analysis to better understand participants’ ability-related views and knowledge. Regarding the analysis of the qualitative data, the two authors first familiarized themselves with the data by reading the whole PDF, then re-read the content, identifying potentially meaningful data by performing thematic coding on the data (Clarke and Braun 2014). We engaged in peer debriefing (Guba 1981), and differences in codes and theme suggestions were discussed between the two authors and revised as needed. An audit trail was generated using the memo and coding functions within ATLASti-9®.
Limitation
Given that we used an online survey instrument, we could not ask for clarification of answers. In addition, there might be a selection bias in the sense that only students already interested in the topic might have chosen to answer the survey. Furthermore, the high proportion of female respondents does not reflect the gender composition of engineering degrees; it might be that females in the clubs felt more drawn to filling out the survey. Our study design is exploratory, and the intent was not to generate generalizable data. Indeed, our results suggest many follow-up studies to see what the answers are for different sets of participants, for example students linked to other occupational areas such as science and technology studies, disability studies, ethics, and health sciences.
Results
The following sections present the findings from participant responses. Section 3.1 gives the demographics; Sect. 3.2 students’ views on the ability to have a good life (questions 11/12); Sect. 3.3 STEM students’ perspectives on the impact of AI/ML on the ability to have a good life in the moment and in the future (questions 15/16); and Sect. 3.4 STEM students’ perspectives on the impact of AI/ML on all indicators of four measures (Social Determinants of Health, the Better Life Index, the Canadian Index of Wellbeing, and the Community Based Rehabilitation Matrix) (questions 20–23).
Demographics
The response rate from the students we accessed was 13.14% (51 of 388 students in that setup). The participant population was composed of 91.67% females and 8.33% males. 97.92% were 18–30 years of age and 2.08% were 30–65 years of age. 97.92% of the participants were undergraduate students, while 2.08% were PhD students. More specifically, 27.08% were first year undergraduate students, 33.33% second year undergraduate students, 29.17% third year undergraduate students, and 8.33% fourth year undergraduate students. The population consisted of a majority of STEM students, specifically 60.42% engineering students (6.25% biomedical engineering, 6.25% chemical engineering, 10.42% civil engineering, 6.25% electrical engineering, 18.75% mechanical engineering, 6.25% software engineering, and 6.25% common first year engineering), 2.08% computer science students, 2.08% mathematics and statistics students, 18.75% biological sciences, 4.17% health sciences, 4.17% neurological sciences, 2.08% physiology, 2.08% kinesiology, 2.08% business, and 2.08% other (dual degree in mechanical engineering and business). 89.66% of students felt somewhat familiar with AI/ML.
Abilities needed to have a good life
93.10% of the participants believed they have a good life, while 6.90% indicated they did not know. No participant said they do not have a good life. n = 2 participants pointed to the subjectiveness of assessing which abilities are important to an individual.
P1: “The ability to have a good life is very subjective and kind of insinuates that you have to be happy all the time in life…. It represents the mindset of chasing happiness…”
As to concrete abilities needed to have a good life participants listed the following.
P4: “basic living essentials such as food and water”
n = 6 participants suggested basic needs are needed to have a good life.
P1: “Freedom of thought…”
P8: “freedom to move however, wherever and whenever you want (physical abilities)”
n = 5 participants suggested that freedom of various forms, including speech, physical abilities, goals, and religion, is needed to have a good life.
P21: “I do view my physical abilities as something that enables me to have a good life.”
n = 6 participants suggested that physical abilities are needed to have a good life.
P2: “living out your purpose”
n = 5 participants suggested that the ability to live out one’s purpose is needed to have a good life.
P20: “The ability to connect and form strong relationship with at least one other person.”
n = 7 participants suggested that forming relationships, social interactions, or receiving love is needed to have a good life.
P5: “Having a good life starts and ends with a mindset to achieve satisfaction and/or happiness.”
n = 11 participants suggested that mindset, contentment, and being happy are needed to have a good life.
Factors that impact the ability to have a good life
P20: “One’s socioeconomic status would also impact one’s ability to have a good life.”
n = 7 participants indicated that financial stability or socioeconomic status impacts one’s ability to have a good life.
P8: “Access to necessities such as food and water.”
n = 7 participants indicated that access to basic needs impacts one’s ability to have a good life.
P24: “My mental and physical health”
n = 11 participants indicated that mental well-being and mindset impact one’s ability to have a good life. n = 4 participants indicated that physical health impacts one’s ability to have a good life. n = 3 participants generalized health as a factor that impacts one’s ability to have a good life.
P10: “the resources available to you in your hometown… the quality of the environment around you…”
n = 4 participants indicated that one’s location/country/environment impacts one’s ability to have a good life.
P6: “A good support system, and the people that surround me could contribute to attaining or not attaining stable mental health.”
P7: “Society, Peers, Family”
n = 9 participants indicated that relationships, support systems, and social interactions impact one’s ability to have a good life.
Impact of AI/ML on the ability to have a good life
Knowledge of AI/ML
When asked whether participants have knowledge of AI/ML, 62.07% of the participants said yes, 27.59% said somewhat, and 10.34% said no.
Impact of AI/ML on the ability to have a good life in the moment
Table 1 summarizes participants’ perspectives on the impact of AI/ML in the moment on various populations in society. The weighted averages of the following groups, disabled people (7.07), the elderly (6.75), people of high income (7.63), and countries of the North (7.15), all lie above those of the other groups listed (not including responses of 0).
Table 1.
The impact of AI/ML on the ability to have a good life in the moment on the groups indicated
Group | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Weighted average with 0 | Weighted average without 0 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Myself | 7.14% | 0% | 3.57% | 3.57% | 0% | 28.57% | 17.86% | 17.86% | 10.71% | 3.57% | 7.14% | 5.82 | 6.27 |
Disabled People | 3.45% | 0% | 6.90% | 3.45% | 6.90% | 6.90% | 10.34% | 13.79% | 13.79% | 17.24% | 17.24% | 6.83 | 7.07 |
Women | 7.14% | 0% | 7.14% | 0% | 3.57% | 21.43% | 17.86% | 21.43% | 10.71% | 0% | 10.71% | 5.86 | 6.31 |
The elderly | 3.45% | 0% | 3.45% | 6.90% | 13.79% | 6.90% | 13.79% | 6.90% | 13.79% | 17.24% | 13.79% | 6.52 | 6.75 |
Youth | 6.90% | 0% | 3.45% | 6.90% | 10.34% | 24.14% | 13.79% | 6.90% | 3.45% | 10.34% | 13.79% | 5.79 | 6.22 |
Men | 10.34% | 0% | 0% | 3.45% | 0% | 37.93% | 17.24% | 13.79% | 10.34% | 3.45% | 3.45% | 5.48 | 6.12 |
Non-binary | 10.71% | 0% | 0% | 0% | 7.14% | 28.57% | 10.71% | 21.43% | 10.71% | 7.14% | 3.57% | 5.71 | 6.4 |
People of ethnic background (minority groups) | 6.90% | 3.45% | 3.45% | 3.45% | 17.24% | 10.34% | 13.79% | 13.79% | 10.34% | 6.90% | 10.34% | 5.69 | 6.11 |
People of low income | 3.45% | 6.90% | 3.45% | 13.79% | 17.24% | 24.14% | 6.90% | 6.90% | 10.34% | 0% | 6.90% | 4.86 | 5.04 |
People of high income | 6.90% | 0% | 3.45% | 0% | 3.45% | 13.79% | 6.90% | 6.90% | 17.24% | 20.69% | 20.69% | 7.10 | 7.63 |
Countries of the North | 10.34% | 3.45% | 0% | 0% | 3.45% | 17.24% | 6.90% | 10.34% | 27.59% | 3.45% | 17.24% | 6.41 | 7.15 |
Countries of the South | 10.34% | 3.45% | 0% | 3.45% | 10.34% | 31.03% | 13.79% | 13.79% | 3.45% | 0% | 10.34% | 5.21 | 5.81 |
Animals | 14.29% | 3.57% | 7.14% | 10.71% | 7.14% | 21.43% | 7.14% | 7.14% | 7.14% | 3.57% | 10.71% | 4.75 | 5.54 |
Nature | 14.29% | 10.71% | 3.57% | 7.14% | 7.14% | 21.43% | 14.29% | 7.14% | 7.14% | 0% | 7.14% | 4.39 | 5.13 |
0 being not impacted, 1 being purely negative, 2–4 being more negative than positive, 5 being equal positive and negative, 6–9 being mostly positive, and 10 being purely positive impact
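As a reproducibility note, the “weighted average” columns in Tables 1 and 2 can be recovered by treating each percentage as the share of respondents choosing that 0–10 score, with the “without 0” column renormalizing after the score-0 (“not impacted”) category is dropped. The sketch below illustrates this computation; the function name and the two-decimal rounding are our assumptions, not part of the survey instrument.

```python
# Sketch: reproduce the "weighted average" columns of Tables 1 and 2.
# Assumption: each percentage is the share of respondents picking that
# 0-10 score; "without 0" renormalizes after dropping the score-0
# ("not impacted") category.

def weighted_averages(percentages):
    """percentages[i] = share (in %) of respondents choosing score i, i = 0..10."""
    shares = [p / 100 for p in percentages]
    mean_score = sum(i * s for i, s in enumerate(shares))  # "with 0"
    share_nonzero = sum(shares[1:])  # respondents who rated 1-10
    mean_without_zero = mean_score / share_nonzero  # score 0 adds nothing to the numerator
    return round(mean_score, 2), round(mean_without_zero, 2)

# "Myself" row of Table 1 (scores 0 through 10):
myself = [7.14, 0, 3.57, 3.57, 0, 28.57, 17.86, 17.86, 10.71, 3.57, 7.14]
print(weighted_averages(myself))  # -> (5.82, 6.27), matching Table 1
```

Applying the same function to the other rows reproduces the remaining weighted-average columns up to rounding of the reported percentages.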
Participants in their first year of undergraduate studies perceived AI/ML to impact all groups listed more positively, indicated by a higher weighted average. Participants in subjects of study not related to engineering/technology (i.e., other than software engineering, electrical engineering, mechanical engineering, biomedical engineering, and computer science) indicated slightly lower weighted averages for most groups listed above in comparison to the other fields of study.
Participants that elaborated on their choices indicated two main factors behind the higher weighted averages of the four groups. First, n = 6 participants indicated that AI/ML creates wealth disparity and is more easily accessible to the wealthy and to developed countries.
P11: “more privileged group will benefit while the non-privileged will get left behind.”
P6: “potential for negative impacts too… worsen the gap between the rich and poor.”
Second, participants referenced the benefits for disabled people.
P18: “allow paralyzed people to access better wheelchairs”
P5: “Artificial body parts”
P2: “potential to be the eyes and ears for people who have disabilities with their senses or neurological disabilities (for the elderly and disabled people).”
Impact of AI/ML on the ability to have a good life in the future
Table 2 summarizes the participants’ perspectives on the impact of AI/ML on the indicated groups in the future. Overall, the weighted averages of all the groups questioned are greater compared to participants’ perspectives of AI/ML in the moment (other than animals and nature). Disabled people, people of high income, and countries of the North are still weighted higher than the other groups, but the elderly’s weighted average blended into those of the other groups.
Table 2.
The impact of AI/ML on the ability to have a good life in the future on the groups indicated
Group | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Weighted average with 0 | Weighted average without 0 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Myself | 3.70% | 0% | 0% | 0% | 7.41% | 11.11% | 18.52% | 18.52% | 18.52% | 7.41% | 14.81% | 6.89 | 7.15 |
Disabled People | 0% | 3.57% | 0% | 0% | 10.71% | 7.14% | 10.71% | 14.29% | 17.86% | 17.86% | 17.86% | 7.29 | 7.29 |
Women | 3.57% | 0% | 3.57% | 0% | 14.29% | 14.29% | 17.86% | 25.00% | 3.57% | 7.14% | 10.71% | 6.18 | 6.41 |
The elderly | 3.57% | 0% | 3.57% | 7.14% | 3.57% | 17.86% | 10.71% | 28.57% | 7.14% | 7.14% | 10.71% | 6.25 | 6.48 |
Youth | 3.57% | 0% | 3.57% | 7.14% | 17.86% | 3.57% | 14.29% | 17.86% | 7.14% | 10.71% | 14.29% | 6.25 | 6.48 |
Men | 7.14% | 0% | 3.57% | 3.57% | 0% | 7.14% | 21.43% | 28.57% | 10.71% | 10.71% | 7.14% | 6.36 | 6.85 |
Non-binary | 10.71% | 0% | 0% | 3.57% | 14.29% | 10.71% | 10.71% | 28.57% | 3.57% | 10.71% | 7.14% | 5.82 | 6.52 |
People of ethnic background (minority groups) | 3.70% | 0% | 3.70% | 11.11% | 3.70% | 18.52% | 11.11% | 25.93% | 7.41% | 7.41% | 7.41% | 5.96 | 6.19 |
People of low income | 3.70% | 0% | 11.11% | 11.11% | 3.70% | 29.63% | 7.41% | 18.52% | 3.70% | 3.70% | 7.41% | 5.30 | 5.5 |
People of high income | 3.70% | 3.70% | 0% | 0% | 0% | 7.41% | 7.41% | 22.22% | 18.52% | 22.22% | 14.81% | 7.37 | 7.65 |
Countries of the North | 7.14% | 0% | 0% | 3.57% | 0% | 14.29% | 14.29% | 21.43% | 10.71% | 17.86% | 10.71% | 6.71 | 7.23 |
Countries of the South | 7.14% | 3.57% | 0% | 7.14% | 10.71% | 14.29% | 17.86% | 17.86% | 3.57% | 10.71% | 7.14% | 5.68 | 6.12 |
Animals | 11.11% | 3.70% | 11.11% | 7.14% | 3.70% | 22.22% | 11.11% | 7.41% | 3.70% | 11.11% | 7.41% | 4.96 | 5.58 |
Nature | 7.41% | 3.70% | 7.41% | 14.81% | 7.41% | 14.82% | 11.11% | 7.41% | 7.41% | 7.41% | 11.11% | 5.22 | 5.64 |
0 being not impacted, 1 being purely negative, 2–4 being more negative than positive, 5 being equal positive and negative, 6–9 being mostly positive, and 10 being purely positive impact
Participants in subjects of study not related to engineering/technology (i.e., other than software engineering, electrical engineering, mechanical engineering, biomedical engineering, and computer science) indicated slightly lower weighted averages for most groups listed above in comparison to the other fields of study.
Similar perspectives are observed from participants’ comments as found in Table 1. Participants still see disabled people benefiting the most from AI/ML in the future. Further, participant comments suggested the future of AI/ML will accommodate all groups more equally.
P4: “…overall these new technologies should equally affect everyone…”
P13: “AI technology is studied further it can be improvised to accommodate everyone equally…”
Indicators of measures of well-being
Community Based Rehabilitation Matrix
Table 3 summarizes the participants’ perspective on the impact of AI/ML on the indicators of measure from the Community Based Rehabilitation Matrix. Healthcare-related indicators and indicators related to assistance and disabilities, including health promotion (46.43%), health prevention (46.43%), rehabilitation (46.43%), assistive technology (67.86%), personal assistance (51.58%), and disabled people’s organizations (50.00%), are seen to have a higher proportion of only positive impacts than the other indicators. Indicators rated as not impacted in a higher proportion relative to other indicators include recreation (21.43%), sport (25.00%), and self-help (25.00%).
Table 3.
Impact of AI/ML on indicators of measure from the Community Based Rehabilitation Matrix
Indicator | Impacted only positive | Impacted only negative | Positive and negative | Not impacted |
---|---|---|---|---|
Health | 35.71% | 0% | 57.14% | 3.57% |
Healthcare | 35.71% | 10.71% | 46.43% | 3.57% |
Assistive technology/assistive technologies/assistive device | 67.86% | 3.57% | 17.86% | 7.14% |
Health promotion | 46.43% | 3.57% | 39.29% | 10.71% |
Health prevention | 46.43% | 3.57% | 32.14% | 17.86% |
Rehabilitation | 46.43% | 7.14% | 35.71% | 7.14% |
Education | 39.29% | 3.57% | 53.57% | 3.57% |
Childhood education | 25.00% | 7.14% | 60.71% | 3.57% |
Primary education | 25.00% | 3.57% | 60.71% | 7.14% |
Secondary education | 28.57% | 0% | 60.71% | 3.57% |
Non-formal education | 25.00% | 0% | 64.29% | 10.71% |
Life-long learning | 25.00% | 14.29% | 53.57% | 7.14% |
Livelihood | 28.57% | 10.71% | 57.14% | 3.57% |
Skills development | 25.00% | 14.29% | 57.14% | 0% |
Self-employment | 35.71% | 14.29% | 35.71% | 14.29% |
Financial stability | 22.22% | 14.81% | 51.86% | 7.41% |
Wage employment | 14.29% | 14.29% | 57.14% | 14.29% |
Social protection | 18.52% | 18.52% | 51.85% | 11.11% |
Social situation | 21.43% | 14.29% | 50.00% | 17.86% |
Social relationships | 17.86% | 17.86% | 50.00% | 14.29% |
Family | 17.86% | 14.29% | 50.00% | 14.29% |
Personal assistance | 51.58% | 7.41% | 40.74% | 0% |
Culture | 7.14% | 14.29% | 64.29% | 10.71% |
Arts | 11.11% | 18.52% | 51.85% | 11.11% |
Recreation | 21.43% | 10.71% | 42.86% | 21.43% |
Leisure | 17.86% | 10.71% | 50.00% | 17.86% |
Sport | 32.14% | 7.14% | 32.14% | 25.00% |
Access to justice | 22.22% | 11.11% | 51.85% | 14.81% |
Empowerment | 18.52% | 11.11% | 55.56% | 11.11% |
Communication | 37.00% | 14.81% | 44.44% | 7.41% |
Social mobilization | 25.00% | 10.71% | 50.00% | 10.71% |
Political participation | 14.29% | 7.14% | 53.57% | 17.86% |
Self-help | 25.00% | 7.14% | 39.29% | 25.00% |
Disabled people’s organizations | 50.00% | 3.57% | 42.86% | 3.57% |
When comparing participants studying fields related to AI/ML (software engineering, electrical engineering, biomedical engineering, mechanical engineering, and computer science) with those in other areas of study, those in AI/ML-related fields perceived the indicators mentioned above, specifically health promotion, assistive technology, personal assistance, and disabled people’s organizations, to have a greater only positive impact.
Canadian Index of Wellbeing
Table 4 summarizes the participants’ perspective on the impact of AI/ML on the indicators of measure from the Canadian Index of Wellbeing. Data suggest that knowledge (48.00%) and living standard (40.00%) are impacted more positively relative to the other indicators in the matrix. The indicators seen as not impacted at a higher percentage than other indicators include leadership (25.00%), leisure (24.00%), and time (41.67%). Overall, Table 4 suggests participants see most of the indicators as impacted both positively and negatively.
Table 4.
Impact of AI/ML on indicators of measure from the Canadian Index of Wellbeing
Indicator | Impacted only positive | Impacted only negative | Positive and negative | Not impacted |
---|---|---|---|---|
Social relationships | 16.00% | 16.00% | 48.00% | 16.00% |
Social engagement | 16.00% | 16.00% | 52.00% | 12.00% |
Social support | 24.00% | 4.00% | 60.00% | 12.00% |
Community safety | 28.00% | 4.00% | 56.00% | 8.00% |
Social norms | 20.00% | 8.00% | 52.00% | 16.00% |
Attitudes toward others | 20.00% | 8.00% | 56.00% | 12.00% |
Demographic engagement | 24.00% | 8.00% | 44.00% | 20.00% |
Participation | 32.00% | 8.00% | 40.00% | 20.00% |
Communication | 36.00% | 4.00% | 56.00% | 4.00% |
Leadership | 33.33% | 8.33% | 37.50% | 25.00% |
Education | 28.00% | 0% | 72.00% | 4.00% |
Competencies | 36.00% | 16.00% | 40.00% | 4.00% |
Knowledge | 48.00% | 4.00% | 40.00% | 8.00% |
Skill | 29.17% | 8.33% | 54.17% | 8.33% |
Environment | 28.00% | 12.00% | 44.00% | 20.00% |
Healthy population | 32.00% | 4.00% | 44.00% | 20.00% |
Personal well-being | 24.00% | 4.00% | 60.00% | 8.00% |
Physical health | 24.00% | 8.00% | 52.00% | 20.00% |
Life expectancy | 32.00% | 4.00% | 44.00% | 16.00% |
Mental health | 28.00% | 8.00% | 52.00% | 8.00% |
Functional health | 29.17% | 8.33% | 45.83% | 16.67% |
Public Health | 24.00% | 8.00% | 52.00% | 12.00% |
Culture | 25.00% | 4.17% | 66.67% | 4.17% |
Healthcare | 32.00% | 0% | 60.00% | 4.00% |
Culture | 12.50% | 8.33% | 58.33% | 20.83% |
Leisure | 28.00% | 12.00% | 40.00% | 24.00% |
Living standard | 40.00% | 4.00% | 40.00% | 16.00% |
Income | 28.00% | 4.00% | 44.00% | 24.00% |
Economic security | 24.00% | 8.00% | 44.00% | 24.00% |
Time | 20.83% | 12.50% | 29.17% | 41.67% |
Social determinants of health
Table 5 summarizes the participants’ perspective on the impact of AI/ML on the indicators of measure from the Social Determinants of Health. As seen in Table 5, 50.00% of the participants indicated that health services will be impacted only positively, which is higher than all other indicators. Participants suggested that many of these indicators are not impacted. The following stand out: food security (42.31%), housing (36.00%), gender (42.31%), coping (34.62%), discrimination (34.62%), advocacy (33.33%), physical environment (37.04%), social engagement (30.77%), and social status (40.74%).
Table 5.
Impact of AI/ML on indicators of measure from the Social Determinants of Health
Indicator | Impacted only positive | Impacted only negative | Positive and negative | Not impacted |
---|---|---|---|---|
Income | 15.38% | 0% | 69.23% | 15.38% |
Education | 23.08% | 3.85% | 65.38% | 7.69% |
Unemployment | 23.08% | 23.08% | 42.31% | 11.54% |
Job Security | 11.54% | 26.92% | 50.00% | 11.54% |
Employment | 11.54% | 19.23% | 61.54% | 7.69% |
Early childhood development | 11.54% | 26.92% | 50.00% | 11.54% |
Food Insecurity | 19.23% | 11.54% | 30.77% | 42.31% |
Housing | 12.00% | 4.00% | 48.00% | 36.00% |
Social exclusion | 7.69% | 19.23% | 46.15% | 26.92% |
Social safety network | 16.00% | 8.00% | 52.00% | 24.00% |
Health services | 50.00% | 0% | 50.00% | 3.85% |
“Aboriginal” OR “first nations” OR “Metis” OR “indigenous people” OR “Inuit” | 7.69% | 11.54% | 57.69% | 23.08% |
Gender | 7.69% | 3.85% | 46.15% | 42.31% |
Disabled people | 34.62% | 3.85% | 50.00% | 11.54% |
Ethnic people | 15.38% | 3.85% | 57.69% | 23.08% |
Immigration | 19.23% | 11.54% | 53.85% | 19.23% |
Globalization | 30.77% | 7.69% | 50.00% | 11.54% |
Coping | 11.54% | 3.85% | 50.00% | 34.62% |
Discrimination | 0% | 19.23% | 42.31% | 34.62% |
Genetic | 19.23% | 7.69% | 50.00% | 23.08% |
Stress | 11.54% | 15.38% | 53.85% | 19.23% |
Transportation | 30.77% | 3.85% | 46.15% | 19.23% |
Vocational training | 23.08% | 3.85% | 46.15% | 26.92% |
Social integration | 14.81% | 3.70% | 62.96% | 18.52% |
Advocacy | 18.52% | 3.70% | 44.44% | 33.33% |
Literacy | 33.33% | 7.41% | 44.44% | 14.81% |
Walkability | 33.33% | 3.70% | 37.04% | 25.93% |
Physical environment | 14.81% | 7.41% | 40.74% | 37.04% |
Social engagement | 11.54% | 7.69% | 50.00% | 30.77% |
Social status | 3.70% | 3.70% | 51.85% | 40.74% |
Participants studying fields related to AI/ML (software engineering, electrical engineering, biomedical engineering, mechanical engineering, and computer science) perceived health services to be more positively impacted, at 71.43%, compared to participants in fields not directly related to AI/ML (chemical engineering, civil engineering, common first year engineering, mathematics and statistics, biological sciences, health sciences, neurological sciences, physiology, kinesiology, and business), at 42.11%.
Better Life Index
Table 6 summarizes the participants’ perspective on the impact of AI/ML on the indicators of measure from the Better Life Index. Participants suggested that health is impacted more positively than the other indicators at 44.44% relative to the next highest at 29.63% (safety). This table suggests that participants see most of these indicators as impacted both positively and negatively.
Table 6.
Impact of AI/ML on indicators of measure from the Better Life Index
Indicator | Impacted only positive | Impacted only negative | Positive and Negative | Not impacted |
---|---|---|---|---|
Housing | 11.11% | 7.41% | 44.44% | 37.04% |
Income | 14.81% | 0% | 62.96% | 22.22% |
Jobs | 7.41% | 7.41% | 70.37% | 14.81% |
Community | 11.11% | 7.41% | 66.67% | 14.81% |
Education | 18.52% | 3.70% | 74.07% | 3.70% |
Environment | 11.11% | 3.70% | 59.26% | 25.93% |
Physical environment | 7.41% | 3.70% | 59.26% | 29.63% |
Civic environment | 7.41% | 7.41% | 66.67% | 18.52% |
Health | 44.44% | 0% | 51.85% | 3.70% |
Life satisfaction | 14.81% | 0% | 74.07% | 11.11% |
Safety | 29.63% | 0% | 62.96% | 7.41% |
Work-life balance | 11.11% | 3.70% | 70.37% | 14.81% |
Participants studying fields related to AI/ML (software engineering, electrical engineering, biomedical engineering, mechanical engineering, and computer science) perceived health to be more positively impacted, at 71.43%, compared to participants in fields not as directly related to AI/ML (chemical engineering, civil engineering, common first year engineering, mathematics and statistics, biological sciences, health sciences, neurological sciences, physiology, kinesiology, and business), at 35.00%.
Discussion
Our study revealed that many of the abilities participants perceive to be needed for having a good life were part of at least one of the four well-being indicator lists we gave to participants. Participants perceived AI/ML to have, and to continue to have, the most positive impact on the ability to have a good life for disabled people, elderly people, and individuals with a high income, and the least positive impact for people of low income and countries from the global south. As to the indicators of well-being/the ability to have a good life given to participants, we found a mostly techno-positive sentiment. For 28 of the indicators, 30% of respondents selected the purely positive box, while no indicator reached that level for the purely negative box. For 52 indicators, the purely negative response was below 10% (not counting the 0%), and for 10 indicators, no one selected purely negative. Our findings suggest that our questions might be valuable tools to develop an inventory of STEM and other students’ perspectives on the implications of AI/ML on the ability to have a good life.
Techno-positive and techno-optimistic sentiment of AI/ML impact on the indicators on the ability to have a good life
Our general techno-positive (perceived positive impact in the moment) and techno-optimistic (perceived positive impact in the future) finding fits with the recognized techno-determinism and techno-optimism biased forms of reporting within the STEM education literature (Collett and Dillon 2019; Cormier et al. 2019; Garcia and Scott 2016; Vigdor 2011). It also fits with a study that found that positive coverage was greater than neutral and negative coverage in the teaching of AI in technical studies (Gherheș and Obrad 2018, Table 5, p. 8), the field of study of our participants, and that there was a techno-optimistic sentiment towards the social impact of AI development within technical studies (Gherheș and Obrad 2018, Table 10, p. 10). Our results also fit with a study that found a positive perception among students of the impact of AI on their well-being and society, whereby higher knowledge of AI correlated with a more positive sentiment toward the impact on themselves and society (Jeffrey 2020, p. 12).
Our findings might also be a consequence of what students can access in the academic literature in the first place, independent of a positive, neutral or negative tone of coverage. A recent study (Wolbring 2021) investigated the engagement of the academic literature focusing on AI/ML and other technologies with over 21 well-being measures, finding that of the 353,233 abstracts that contained the terms artificial intelligence or machine learning, none covered 14 of the 21 measures, 5 of the measures were mentioned in 5 or fewer abstracts, the phrase “social determinants of health” was present in 41 abstracts, and the phrase “determinants of health” in 53 abstracts. Furthermore, the study (Wolbring 2021) found a very uneven coverage of the individual indicators of the measures we gave our participants, (a) The Better Life Index, (b) The Canadian Index of Wellbeing, (c) The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix and (d) The Social Determinants of Health (SDH), with few sources containing terms such as “social norms”, “social status”, “personal well-being”, “living standard”, and many others that could be used to discuss and trigger thinking about the impact of AI/ML on a good life.
Our techno-positive and techno-optimistic findings in relation to disabled people fit with, and might be a consequence of, AI being mostly mentioned in a techno-positive and techno-optimistic way in relation to disabled people within the AI and ML focused academic literature, as well as in newspapers and Twitter tweets (Lillywhite and Wolbring 2020), and of disabled people being for the most part portrayed in the same literature with the imagery of the patient or benefiting user (Lillywhite and Wolbring 2019, 2020). A techno-positive and techno-optimistic tone does not lend itself to nudging readers towards thinking about negative social implications for disabled people, such as for their ability to have a good life, a possibility we know exists (Diep and Wolbring 2013, 2015; Lillywhite and Wolbring 2020; Nierling et al. 2018; Wolbring and Diep 2016; Yumakulov et al. 2012). However, this techno-positivity and techno-optimism in our study was not limited to disabled people, and as such our findings could be a consequence of a generally techno-positive and techno-optimistic exposure to AI/ML in the education of the participants or in other sources through which they are informed on AI/ML. Interestingly, 42.31% indicated that there is no impact of AI/ML on gender (Table 5), which is surprising given that Amazon, for example, had to stop its human resources AI due to a bias against women (Reuters 2018) and that over 90% of our participants were women.
Adding indicators for future studies
Many of the items our participants flagged as essential for living a good life are covered by the composite measures we gave to the participants, such as basic needs including food and water, forming relationships, social interactions, support system, financial stability, socioeconomic status, and various forms of health. Other social and ethical issues mentioned in the AI/ML literature could be added as indicators of the ability to have a good life, such as freedom of thought, living out one’s purpose, mindset, contentment, being happy, privacy, data protection, technological deskilling (Vesnic-Alujevic et al. 2020), solidarity (European Group on Ethics in Science and New Technologies 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019), equity and equality (European Group on Ethics in Science and New Technologies 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019; Yuste et al. 2017), respecting (Malti et al. 2020), being respected (Steckermeier and Delhey 2019), dignity, health equity (Wolbring 2021), ethnic, gender, and social bias (Allen and Dreyer 2019; Pham et al. 2021; Straw 2020; Tat et al. 2020; Walsh 2019; Weissglass 2021), and various types of well-being that are noted to be impacted by AI/ML, such as emotional well-being (Borjas and Freeman 2019; Fratczak et al. 2019; Khosla and Chu 2013; West 2018), sense of well-being and identity (Abeles 2016), economic well-being (Borjas and Freeman 2019; Fratczak et al. 2019; Press 1982; Ullrich et al. 2016; West 2018), and societal well-being (Reddy 2006). Making one big list of the indicators allows for obtaining insight into the views of participants on the impact of AI/ML, and for that matter of many other technologies, on the ability to have a good life.
Conclusion and future research opportunities
In the report Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019), it is stated:
“To be able to contribute in a positive, non-dogmatic way, we, the techno-scientific communities, need to enhance our self-reflection. We need to have an open and honest debate around our explicit or implicit values, including our imagination around so-called “Artificial Intelligence” and the institutions, symbols, and representations it generates” (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019, p. 1).
The report furthermore argues that “ethical design of Autonomous and Intelligent Systems (A/IS) has to have provable improvements to societal well-being and that discussions on and mitigation of risks of potential negative long term effects on societal well-being are needed” (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019, p. 70). Our questions might be useful tools to act on the sentiment voiced in the report.
Our survey might be useful to achieve various goals from Table 1 of Tomblin and Mogul (2020), such as “to go beyond technical narrowness of STEM education and embrace reflexive, critical systems thinking”, “cultivate social justice mindsets among STEM students who are yearning for this and may leave STEM in search of it”, “encourage students to become reflexive, empathetic data collectors who ask relevant STS questions of their work” and “create agents of change that explore alternative pathways for science and technology” (Tomblin and Mogul 2020, p. S120).
Our study suggests a techno-positive and techno-optimistic sentiment among the students in relation to certain groups and the indicators of the ability to have a good life, but more studies are needed to compare our data with other sets of participants. Indeed, as a study showed that the teaching of AI in technical studies is more positive than in humanistic studies (Gherheș and Obrad 2018, Table 5, p. 8) and that there was a more techno-optimistic sentiment towards the social impact of AI in technical versus humanistic studies (Gherheș and Obrad 2018, Table 10, p. 10), it might be worthwhile to see whether students from humanistic studies answer the questions differently.
It might also be beneficial to perform interviews instead of an online survey. One could ask participants to answer the same questions we did, but with the opportunity to ask follow-up questions. For example, given participants’ answers in the “no impact” option in Tables 1, 2, 3, 4, 5 and 6 in our study, it would be useful to give the tables to participants at the beginning of the interview and then ask participants for more clarification related to their “no impact” answers. One could also focus on the answers to specific indicators of Tables 3, 4, 5 and 6 and ask participants for clarification.
As to the questions related to Tables 1 and 2, one could add more social groups to choose from and one could generate intersectional social groups. One could also differentiate further among some of the social groups we used, such as depicting different ethnic groups, including Indigenous Peoples, instead of using ethnic groups as one category. As for disabled people, one could differentiate based on why they are labeled as disabled, as one can expect AI/ML visions to impact disabled people with different characteristics in different ways. It would be interesting to see how participants judge the impact on various groups of disabled people. It is recognized that data need to be disaggregated for different groups of disabled people (Bureau of Labor Statistics United States Department of Labor (USA) 2020; International Disability Alliance (IDA) 2017; Washington Group on Disability Statistics 2020; Wolbring and Lillywhite 2021). Given that various academic degrees and programs focus on different social groups, one could give the questions linked to Tables 1 and 2 and the answer options to students of various degrees and programs to see whether the answers differ; for example, would students in women’s studies and disability studies answer the questions differently in general and especially in relation to women and disabled people, respectively. One could also design a study that ensures a more even gender distribution, so one could compare, for example, whether males and females fill out Tables 1 and 2 differently.
As to the questions related to Tables 3, 4, 5 and 6, one could use all the indicators with different groups of participants, or with the same kind of group, such as another group of STEM students, and see whether the key trajectories are the same. One could design a study that ensures a more even gender distribution, so one could compare, for example, whether males and females fill out Tables 3, 4, 5 and 6 differently. One could also ask participants to answer Tables 3, 4, 5 and 6 for different social groups to see whether indicator-specific differences appear based on the social group one uses as a lens.
Given that there are communities of academics, policymakers, practitioners, and others actively linked to The Better Life Index (OECD 2020), The Canadian Index of Wellbeing (Canadian Index of Wellbeing Organization 2019), The World Health Organization initiated Community Based Rehabilitation (CBR) Matrix (World Health Organization 2011), and The Social Determinants of Health (SDH) (Raphael et al. 2020; World Health Organization 2020), studies could be done with our questions within these communities to engage with the impact of AI/ML on the ability to have a good life. Our survey questions could be given to students in course segments that cover these measures and indicators, and the surveys could be used in AI/ML and STEM education to ascertain the students’ perceptions of the current and future impact of AI/ML.
One could also add indicators to the question list based on the existing AI/ML and other relevant literature. Instead of giving four different sets based on existing composite measures, one could simply make one table with a list of primary and secondary indicators and, with that, add indicators not present in the lists we used. One could also use other existing composite measures, such as “The Disability and Wellbeing Monitoring Framework” (Fortune et al. 2020).
Our study was the first to our knowledge to engage STEM students by asking students about abilities they see as essential for having a good life and linking it to the social impact of AI/ML using well-being indicators. We used the indicators of well-being to make the ability to experience a good life more real for participants. There are many ability-related concepts in ability studies, such as ability security, ability identity security, ability expectation oppression, ability privilege, ability discrimination, ability inequity, ability inequality, and ability expectation creep (Wolbring 2020; Wolbring and Ghai 2015), that could be used to make the linkage between the well-being indicators and AI/ML impacts on a good life clearer.
Acknowledgements
We would like to thank the students that gave their precious time to take part in our study.
Data availability statement
No data access is provided for reviewers beyond what is in the main document: Auditing the impact of artificial intelligence on the ability to have a good life: using well-being measures as a tool to investigate the views of undergraduate STEM students.
Declarations
Conflict of interest
On behalf of all the authors, the corresponding author states that there is no conflict of interest.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Abeles TP. Send in the robots. On the Horizon. 2016;24(2):141–144. doi: 10.1108/OTH-07-2015-0031. [DOI] [Google Scholar]
- Allen B, Dreyer K. The role of the ACR data science institute in advancing health equity in radiology. J Am Coll Radiol. 2019;16(4):644–648. doi: 10.1016/j.jacr.2018.12.038. [DOI] [PubMed] [Google Scholar]
- Aluaş M, Bolboacă SD. Is the biggest problem of health-related artificial intelligence an ethical one? Appl Med Inf. 2019;41:3–3. [Google Scholar]
- Beckman L (2018) The liberal state and the politics of virtue. 10.4324/9781351325448
- Bennett D, Knight E, Bawa S, Dockery AM. Understanding the career decision making of university students enrolled in STEM disciplines. Aust J Career Dev. 2021;30(2):95–105. doi: 10.1177/1038416221994312. [DOI] [Google Scholar]
- Borjas GJ, Freeman RB. From immigrants to robots: the changing locus of substitutes for workers. RSF. 2019;5(5):22–42. doi: 10.7758/rsf.2019.5.5.02.pdf. [DOI] [Google Scholar]
- Børsen T, Serreau Y, Reifschneider K, Baier A, Pinkelman R, Smetanina T, Zandvoort H. Initiatives, experiences and best practices for teaching social and ecological responsibility in ethics education for science and engineering students. EJEE. 2021;46(2):186–209. [Google Scholar]
- Braun V, Clarke V. Successful qualitative research: a practical guide for beginners. Sage; 2013. [Google Scholar]
- Brusdal R, Frønes I (2014) Well-being and children in a consumer society. In: Handbook of child well-being: theories, methods and policies in global perspective, pp 1427–1443. 10.1007/978-90-481-9063-8_58
- Buhmann A, Fieseler C. Towards a deliberative framework for responsible innovation in artificial intelligence. Technol Soc. 2021;64:101475. doi: 10.1016/j.techsoc.2020.101475. [DOI] [Google Scholar]
- Bureau of Labor Statistics United States Department of Labor (USA) (2020) The employment situation—February 2020. Bureau of Labor Statistics, United States Department of Labor. https://www.bls.gov/news.release/pdf/empsit.pdf. Accessed 26 Dec 2021
- Burks G, Clancy KB, Hunter CD, Amos JR. Impact of ethics and social awareness curriculum on the engineering identity formation of high school girls. Educ Sci. 2019;9(4):250. doi: 10.3390/educsci9040250. [DOI] [Google Scholar]
- Canadian Index of Wellbeing Organization (2019) What is Wellbeing? Canadian Index of Wellbeing. https://uwaterloo.ca/canadian-index-wellbeing/what-wellbeing. Accessed 26 Dec 2021
- Canadian Institute for Advanced Research (CIFAR) (2018) AI & Society. Canadian Institute for Advanced Research. https://www.cifar.ca/ai/ai-society. Accessed 26 Dec 2021
- Canney NE, Bielefeldt AR. Differences in engineering students’ views of social responsibility between disciplines. J Profession Issues Eng Educ Pract. 2015 doi: 10.1061/(ASCE)EI.1943-5541.0000248. [DOI] [Google Scholar]
- Chiu TK, Meng H, Chai C-S, King I, Wong S, Yam Y (2021) Creation and evaluation of a pretertiary artificial intelligence (AI) curriculum. https://arxiv.org/abs/2101.07570. Accessed 26 Dec 2021
- Clarke V, Braun V. Thematic analysis. In: Teo T, editor. Encyclopedia of critical psychology. Springer; 2014. pp. 1947–1952. [Google Scholar]
- Coeckelbergh M (2019) Artificial Intelligence: Some ethical issues and regulatory challenges. Technology and Regulation, 2019 pp 31–34. 10.26116/techreg.2019.003
- Collett C, Dillon S (2019) AI and gender: four proposals for future research. The Leverhulme Centre for the Future of Intelligence, University of Cambridge. http://lcfi.ac.uk/media/uploads/files/AI_and_Gender___4_Proposals_for_Future_Research_210619_p8qAu8L.pdf. Accessed 26 Dec 2021
- Colombo M. Caring, the emotions, and social norm compliance. J Neurosci Psychol Econ. 2014;7(1):33–47. doi: 10.1037/npe0000015. [DOI] [Google Scholar]
- Cormier D, Jandrić P, Childs M, Hall R, White D, Phipps L, Truelove I, Hayes S, Fawns T. Ten years of the postdigital in the 52group: reflections and developments 2009–2019. Postdigital Science and Education. 2019;1:475–506. doi: 10.1007/s42438-019-00049-8. [DOI] [Google Scholar]
- Crow SM, Payne D. Affirmative action for a face only a mother could love? J Bus Ethics. 1992;11(11):869–875. doi: 10.1007/BF00872366. [DOI] [Google Scholar]
- Diep L, Wolbring G (2013) Who needs to fit in? Who gets to stand out? Communication technologies including brain-machine interfaces revealed from the perspectives of special education school teachers through an ableism lens. Educ Sci 3(1):30–49
- Diep L, Wolbring G (2015) Perceptions of brain-machine interface technology among mothers of disabled children. Disabil Stud Q. 10.18061/.v35i4.3856
- Elgin SC, Hays S, Mingo V, Shaffer CD, Williams J (2021) Building back more equitable STEM education: teach science by engaging students in doing science. bioRxiv. Accessed 26 Dec 2021
- European Group on Ethics in Science and New Technologies (2018) Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. European Commission. https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf. Accessed 26 Dec 2021
- Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Mind Masch. 2018 doi: 10.31235/osf.io/2hfsc. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fortune N, Badland H, Clifton S, Emerson E, Rachele J, Stancliffe RJ, Zhou Q, Llewellyn G. The Disability and Wellbeing Monitoring Framework: data, data gaps, and policy implications. Aust N Z J Public Health. 2020;44(3):227–232. doi: 10.1111/1753-6405.12983. [DOI] [PubMed] [Google Scholar]
- Fratczak P, Goh Y M, Kinnell P, Soltoggio A, Justham L (2019) Understanding human behaviour in industrial human-robot interaction by means of virtual reality. ACM International Conference Proceeding Series. November 2019 Article No.: 19, pp 1–7. 10.1145/3363384.3363403
- Furey H, Martin F. AI education matters: a modular approach to AI ethics education. AI Matters. 2019;4(4):13–15. doi: 10.1145/3299758.3299764. [DOI] [Google Scholar]
- Garcia P, Scott K (2016) Traversing a political pipeline: An intersectional and social constructionist approach toward technology education for girls of color. stelar.edc.org. http://stelar.edc.org/sites/stelar.edc.org/files/Garcia%20%26%20Scott%202016.pdf. Accessed 26 Dec 2021
- Garibay JC. STEM students’ social agency and views on working for social change: Are STEM disciplines developing socially and civically responsible students? JRScT. 2015;52(5):610–632. [Google Scholar]
- Garrett N, Beard N, Fiesler C (2020) More than "If Time Allows" the role of ethics in AI education. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society pp 272–278. 10.1145/3375627.3375868
- Gehl L, Ross H (2013) Disenfranchised spirit: a theory and a model. Pimatisiwin 11(1):31–42. https://ezproxy.lib.ucalgary.ca/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=fph&AN=91533986&site=ehost-live
- Gherheș V, Obrad C. Technical and humanities students’ perspectives on the development and sustainability of artificial intelligence (AI) Sustainability. 2018;10(9):3066. doi: 10.3390/su10093066. [DOI] [Google Scholar]
- Greenbie BB (1969) Reports and comments. Land Econ 45(3):359
- Guba EG. Criteria for assessing the trustworthiness of naturalistic inquiries. Educ Tech Res Dev. 1981;29(2):75–91. [Google Scholar]
- Hager GD, Drobnis A, Fang F, Ghani R, Greenwald A, Lyons T, Parkes DC, Schultz J, Saria S, Smith SF (2019) Artificial intelligence for social good. arXiv.org. https://arxiv.org/ftp/arxiv/papers/1901/1901.05406.pdf. Accessed 26 Dec 2021
- Hansen KB. Exploring compatibility between “Subjective Well-Being” and “Sustainable Living” in Scandinavia. Soc Indic Res. 2015;122(1):175–187. doi: 10.1007/s11205-014-0684-9. [DOI] [Google Scholar]
- Heintz F (2021) Three interviews about K-12 AI education in America, Europe and Singapore. KI - Künstliche Intelligenz 35(2):233–237. 10.1007/s13218-021-00730-w
- Holmström IK, Kaminsky E, Höglund AT, Carlsson M. Nursing students' awareness of inequity in healthcare—an intersectional perspective. Nurse Educ Today. 2017;48:134–139. doi: 10.1016/j.nedt.2016.10.009. [DOI] [PubMed] [Google Scholar]
- Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–1288. doi: 10.1177/1049732305276687. [DOI] [PubMed] [Google Scholar]
- International Disability Alliance (IDA) (2017) Joint statement by the disability sector: disability data disaggregation. International Disability Alliance (IDA). https://www.internationaldisabilityalliance.org/data-joint-statement-march2017. Accessed 26 Dec 2021
- Jeffrey T (2020) Understanding College Student Perceptions of Artificial Intelligence. International Institute of Informatics and Cybernetics. http://www.iiisci.org/journal/PDV/sci/pdfs/HB785NN20.pdf. Accessed 26 Dec 2021
- Johnson K. The UN convention on the rights of persons with disabilities: a framework for ethical and inclusive practice? Ethics Soc Welf. 2013;7(3):218–231. doi: 10.1080/17496535.2013.815791. [DOI] [Google Scholar]
- Josa I, Aguado A. Social sciences and humanities in the education of civil engineers: Current status and proposal of guidelines. J Clean Prod. 2021;311:127489. doi: 10.1016/j.jclepro.2021.127489. [DOI] [Google Scholar]
- Kakoullis E, Johnson K (2020) Conclusion: recognising human rights in different cultural contexts. In: Kakoullis E, Johnson K (eds) Recognising human rights in different cultural contexts. Palgrave Macmillan, pp 377–385. 10.1007/978-981-15-0786-1_17
- Kelley TR, Knowles JG. A conceptual framework for integrated STEM education. Int J STEM Educ. 2016;3(1):1–11. doi: 10.1186/s40594-016-0046-z. [DOI] [Google Scholar]
- Khosla R, Chu MT. Embodying care in matilda: An affective communication robot for emotional wellbeing of older people in Australian residential care facilities. ACM Trans Manag Inf Syst. 2013;4(4):18. doi: 10.1145/2544104. [DOI] [Google Scholar]
- Korkmaz Ö, Çakir R, Erdoğmuş FU. Secondary school students’ basic STEM skill levels according to their self-perceptions: a scale adaptation. Particip Educ Res. 2021;8(1):423–437. doi: 10.17275/per.21.25.8.1. [DOI] [Google Scholar]
- Kutsar D, Soo K, Strózik T, Strózik D, Grigoraș B, Bălțătescu S. Does the realisation of children’s rights determine good life in 8-year-olds’ perspectives? A comparison of Eight European Countries. Child Indic Res. 2019;12(1):161–183. doi: 10.1007/s12187-017-9499-y. [DOI] [Google Scholar]
- Lillywhite A, Wolbring G. Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people. Assist Technol. 2019 doi: 10.1080/10400435.2019.1593259. [DOI] [PubMed] [Google Scholar]
- Lillywhite A, Wolbring G. Coverage of artificial intelligence and machine learning within academic literature, canadian newspapers, and twitter tweets: the case of disabled people. Societies. 2020;10(1):1–27. doi: 10.3390/soc10010023. [DOI] [Google Scholar]
- Long D, Magerko B (2020) What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020 CHI Conference on Human factors in computing systems. pp 1–16. 10.1145/3313831.3376727
- Malti T, Peplak J, Zhang L. The development of respect in children and adolescents. Monogr Soc Res Child Dev. 2020;85(3):7–99. doi: 10.1111/mono.12417. [DOI] [PubMed] [Google Scholar]
- National Academies of Sciences . The frontiers of machine learning: 2017 Raymond and Beverly Sackler US-UK Scientific Forum. National Academies of Sciences Engineering, Medicine; 2018. [Google Scholar]
- Neuwelt-Kearns C, Nicholls A, Deane KL, Robinson H, Lowe D, Pope R, Goddard T, van der Schaaf M, Bartley A. The realities and aspirations of people experiencing food insecurity in Tāmaki Makaurau. Kotuitui. 2021 doi: 10.1080/1177083X.2021.1951779. [DOI] [Google Scholar]
- Nierling L, João-Maia M, Hennen L, Bratan T, Kuuk P, Cas J, Capari L, Krieger-Lamina J, Mordini E, Wolbring G (2018) Assistive technologies for people with disabilities Part III: Perspectives on assistive technologies. European Parliament. http://www.europarl.europa.eu/RegData/etudes/IDAN/2018/603218/EPRS_IDA(2018)603218(ANN3)_EN.pdf. Accessed 26 Dec 2021
- OECD (2020) OECD Better Life Index. http://www.oecdbetterlifeindex.org/#/11111111111. Accessed 26 Dec 2021
- Pham Q, Gamble A, Hearn J, Cafazzo JA. The need for ethnoracial equity in artificial intelligence for diabetes management: review and recommendations [Review] J Med Internet Res. 2021;23(2):e22320. doi: 10.2196/22320. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Press F. Science and Technology in the 1980s. Trans R Soc Canada Ottawa. 1982;20:105–116. [PubMed] [Google Scholar]
- Raji ID, Scheuerman MK, Amironesei R (2021) You can't sit with us: exclusionary pedagogy in AI ethics education. In: Proceedings of the 2021 ACM Conference on fairness accountability and transparency, pp 515–525
- Ramirez Velazquez M (2021) Not Just Teaching How: Supporting a Culture Shift in STEM Education. Bryn Mawr College. https://scholarship.tricolib.brynmawr.edu/handle/10066/23046. Accessed 26 Dec 2021
- Raphael D, Bryant T, Mikkonen J, Raphael A (2020) Social Determinants of Health: The Canadian Facts https://thecanadianfacts.org/. Accessed 26 Dec 2021
- Reddy R. Robotics and intelligent systems in support of society [Review] IEEE Intell Syst. 2006;21(3):24–31. doi: 10.1109/MIS.2006.57. [DOI] [Google Scholar]
- Reuters (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G. Accessed 26 Dec 2021
- Rodriguez-Nikl T (2021) Technology uncertainty and the good life: a stoic perspective. In: Pirtle Z, Tomblin D, Madhavan G (eds) Engineering and Philosophy. Philosophy of Engineering and Technology, vol 37. Springer, Cham. 10.1007/978-3-030-70099-7_11
- Sandelowski M. Combining qualitative and quantitative sampling, data collection, and analysis techniques in mixed-method studies. Res Nurs Health. 2000;23(3):246–255. doi: 10.1002/1098-240X(200006)23:3<246::AID-NUR9>3.0.CO;2-H. [DOI] [PubMed] [Google Scholar]
- Schiff DS, Logevall E, Borenstein J, Newstetter W, Potts C, Zegura E. Linking personal and professional social responsibility development to microethics and macroethics: observations from early undergraduate education. J Eng Educ. 2021;110(1):70–91. doi: 10.1002/jee.20371. [DOI] [Google Scholar]
- Steckermeier LC, Delhey J. Better for everyone? Egalitarian culture and social wellbeing in Europe. Soc Indic Res. 2019;143(3):1075–1108. doi: 10.1007/s11205-018-2007-z. [DOI] [Google Scholar]
- Steinbauer G, Kandlhofer M, Chklovski T, Heintz F, Koenig S. A differentiated discussion about AI education K-12. KI-Künstliche Intell. 2021;35:1–7. doi: 10.1007/s13218-021-00724-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Strachan G. Still working for the man? Women's employment experiences in Australia since 1950. Aust J Soc Issues. 2010;45(1):117–130. doi: 10.1002/j.1839-4655.2010.tb00167.x. [DOI] [Google Scholar]
- Straw I. The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future. Artif Intell Med. 2020;110:101965. doi: 10.1016/j.artmed.2020.101965. [DOI] [PubMed] [Google Scholar]
- Tat E, Bhatt DL, Rabbat MG. Addressing bias: artificial intelligence in cardiovascular medicine [Note] Lancet Digit Health. 2020;2(12):e635–e636. doi: 10.1016/S2589-7500(20)30249-1. [DOI] [PubMed] [Google Scholar]
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) ETHICALLY ALIGNED DESIGN A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf?utm_medium=undefined&utm_source=undefined&utm_campaign=undefined&utm_content=undefined&utm_term=undefined.. Accessed 26 Dec 2021
- Tomblin D, Mogul N. STS Postures: responsible innovation and research in undergraduate STEM education. J Responsib Innov. 2020;7(sup1):117–127. doi: 10.1080/23299460.2020.1839230. [DOI] [Google Scholar]
- Touretzky D, Gardner-McCune C, Breazeal C, Martin F, Seehorn D. A year in K-12 AI education. AI Mag. 2019;40(4):88–90. [Google Scholar]
- Ullrich D, Diefenbach S, Butz A (2016. Murphy Miserable robot—a companion to support children's wellbeing in emotionally difficult situations. In: Conference on Human Factors in Computing Systems—Proceedings
- Vallor S (2016) Introduction: envisioning the good life In the 21st century and beyond. Santa Clara University. https://scholarcommons.scu.edu/cgi/viewcontent.cgi?article=1060&context=phi. Accessed 26 Dec 2021
- Vesnic-Alujevic L, Nascimento S, Polvora A. Societal and ethical impacts of artificial intelligence: critical notes on European policy frameworks. Telecommun Pol. 2020;44(6):101961. doi: 10.1016/j.telpol.2020.101961. [DOI] [Google Scholar]
- Vigdor L. A techno-passion that is not one: Rethinking marginality, exclusion, and difference. Int J Gend Sci Technol. 2011;3(1):4–37. [Google Scholar]
- Walsh T. Australia's AI future. RSNSW. 2019;152(Part 1):101–104. [Google Scholar]
- Washington Group on Disability Statistics (2020) Disability Measurement and Monitoring using the Washington Group Disability Questions. Washington Group on Disability Statistics. https://www.washingtongroup-disability.com/fileadmin/uploads/wg/Documents/WG_Resource_Document__4_-_Monitoring_Using_the_WG_Questions.pdf. Accessed 26 Dec 2021
- Weissglass DE. Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics. 2021 doi: 10.1111/bioe.12927. [DOI] [PubMed] [Google Scholar]
- West DM (2018) The future of work: Robots, AI, and automation. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85055018914&partnerID=40&md5=86237b8da3f84fe6de1db9c2905619b2
- Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf. Accessed 26 Dec 2021
- Wolbring G (2020) Ability expectation and Ableism glossary. Wordpress. https://wolbring.wordpress.com/ability-expectationableism-glossary/. Accessed 26 Dec 2021
- Wolbring G. Auditing the impact of neuro-advancements on health equity. J Neurol Res. 2021 doi: 10.14740/jnr695. [DOI] [Google Scholar]
- Wolbring G, Diep L. Cognitive/neuroenhancement through an ability studies lens. In: Jotterand F, Dubljevic V, editors. Cognitive enhancement. Oxford University Press; 2016. pp. 57–75. [Google Scholar]
- Wolbring G, Ghai A. Interrogating the impact of scientific and technological development on disabled children in India and beyond. Disabil Glob South. 2015;2(2):667–685. [Google Scholar]
- Wolbring G, Lillywhite A. Equity/equality, diversity, and inclusion (EDI) in universities: the case of disabled people. Societies. 2021;11(2):49. doi: 10.3390/soc11020049. [DOI] [Google Scholar]
- World Health Organization (2011) About the community-based rehabilitation (CBR) matrix. World Health Organization. http://www.who.int/disabilities/cbr/matrix/en/. Accessed 26 Dec 2021
- World Health Organization (2020) Social determinants of health. World Health Organization. https://www.who.int/social_determinants/en/. Accessed 26 Dec 2021
- Xu X, Zhao Y, Xia S, Cui P, Tang W, Hu X, Wu B. Quality of life and its influencing factors among centenarians in Nanjing, China: a cross-sectional study. Soc Indic Res. 2020 doi: 10.1007/s11205-020-02399-4. [DOI] [Google Scholar]
- Yumakulov S, Yergens D, Wolbring G. Imagery of disabled people within social robotics research. In: Ge S, Khatib O, Cabibihan J-J, Simmons R, Williams M-A, editors. Social robotics. Springer; 2012. pp. 168–177. [Google Scholar]
- Yuste R, Goering S, Bi G, Carmena JM, Carter A, Fins JJ, Friesen P, Gallant J, Huggins JE, Illes J. Four ethical priorities for neurotechnologies and AI. Nat News. 2017;551(7679):159. doi: 10.1038/551159a. [DOI] [PMC free article] [PubMed] [Google Scholar]
Associated Data
Data Availability Statement
No data access provided for reviewers beyond what is in the main document.