Abstract
This article explores how views about older people and aging underpin practices and perceptions in the development and implementation of Artificial Intelligence (AI) in long-term care homes (LTC). Drawing on semi-structured interviews with seven AI developers, seven LTC staff, and four LTC advocates, we analyzed how AI technologies for later life are imagined, designed, deployed, and resisted. Using the concepts of “promissory discourse” and “aging anxieties”, we investigated manifestations of ageism in accounts of AI applications in LTC. Despite positive intentions, both AI developers and LTC staff/advocates engaged in simplistic scripts about aging, care, and the technological capacity of older people. We further uncovered what we termed sociotechnical ageism—a form that is not merely digital but rests on interacting preconceptions about older people’s inability or lack of interest in using emerging technologies, coupled with social assumptions about aging, LTC, and technological innovation.
Keywords: ageism, technology, nursing homes, digital ageism, algorithmic bias, techno-solutionism, older people
What this paper adds
• Provides richer understandings of ageism and its diverse forms, namely the underexplored intersections between ageism and AI technologies.
• Addresses a research gap on AI and ageism in Long-term Care homes (LTC), focusing on the perceptions of staff, advocates, and technology developers.
• Offers contextualized insights into how the links between techno-solutionism and ageism unfold across different stakeholder groups in LTC.
Applications of study findings
• As AI technologies become more pervasive in our societies, understanding age-related biases in AI development and deployment will help tackle potential social inequalities and marginalization in later life.
• Ageism is not only embedded in the design of AI technologies by developers but also in their understanding, implementation, application, and uptake by gerontology professionals, including LTC staff and advocates.
• Sociotechnical ageism demonstrates the need to address interacting social and technical stereotypes about older people, aging, care, and approaches to emerging technologies such as AI.
Introduction
The COVID-19 pandemic shone a light on how ageism pervades our societies, impacting older people’s access to healthcare (Ayalon et al., 2021). One clear manifestation of ageism has been the disproportionate number of deaths in long-term care homes (LTC), with residents being abandoned or exposed to COVID-19 due to factors including inadequate staffing (Curryer & Cook, 2021). Concurrently, benevolent or compassionate ageism proliferated through sympathetic actions towards older people which, although well-intentioned, perpetuated simplistic narratives of later life—resulting in tougher restrictions and blanket policies for LTC (Curryer & Cook, 2021; Vervaecke & Meisner, 2021). But ageism—stereotypes, prejudice, and discrimination based on age (Butler, 1969)—in LTC is not new. Ageist views are frequently embedded in care practices and interactions of staff, residents, and family (Buttigieg et al., 2018). However, the pandemic reiterated the need for richer understandings of ageism in LTC, as systemic care failures and their consequences were exposed, reminding us of how we perceive and treat frail older people (Curryer & Cook, 2021).
To address these shortcomings of care provision exacerbated by COVID-19, technologists, policy-makers, and scholars have proposed a wider application of artificial intelligence (AI) technologies, from robots to smart voice assistants (Chen, 2020). Robots have been used to assist with clinical procedures and residents’ hygiene; smart voice assistants have been deployed to facilitate residents’ interactions with services (weather, news) and family (Hsu, 2021; Neves & Omori, 2023). But as AI technologies are implemented in LTC, we must consider whether and how their use may contribute to further ageism. Yet, there is little research on this topic.
LTC provide a pivotal context to investigate the links between ageism and AI. They are critical settings since they serve older people facing health and social vulnerabilities (e.g., comorbidity), who are likely to live there for extended periods, including until the end of their lives, heightening the risk of ageism (Buttigieg et al., 2018). A Royal Commission into Australian LTC (2021) showed the substandard care plaguing the sector, often stemming from ageist perceptions of older adults as expendable. Simultaneously, AI technologies have been heralded as solutions to tackle LTC’s inadequacies, from staff shortages to residents’ loneliness (Chen, 2020). While research on ageism and AI in LTC remains scant, studies on other novel technologies developed for these settings illustrate user scripts/templates of older people as passive or digitally unskilled, reinforcing age-based stereotypes, prejudice, and discrimination (Mannheim et al., 2021). Thus, understanding the connections between AI and ageism in LTC is not only timely but pressing. This article draws on interviews with AI developers and LTC staff and advocates in Australia to tease out aging perceptions and representations in the design and implementation of AI technologies.
Ageism and Artificial Intelligence
Research on AI and ageism is at an early stage, but knowledge on other emerging technologies can guide conjectures about their links and ramifications. General references to digital ageism, or age-related biases in technology design and usage, abound in the literature (Chu et al., 2022; Mannheim et al., 2021; Manor & Herscovici, 2021; Neves et al., 2012, 2023). While this work lacks a common conceptualization, digital ageism seems to encompass stereotypes, prejudice, and discrimination towards both older non-users and users of digital technologies. On the one hand, non-users of technology are seen—by technologists and society—as uninterested or incompetent, which neglects complex practices and meanings of use and non-use (Neves et al., 2012, 2018). For example, many older users label themselves as non-users because they feel they cannot fully utilize a device on their own (Neves et al., 2012). Furthermore, stereotypes of older non-users as technophobic are frequently internalized in later life, limiting technological use and adoption (Köttl et al., 2021; Neves & Mead, 2021). On the other hand, stereotypes about older users are prevalent too, resulting in their exclusion from or deprioritization on digital platforms (Rosales & Fernández-Ardèvol, 2020).
Technologies designed specifically for older people are typically defined as “assistive,” catering to those living with illness (Neves & Omori, 2023). Thus, AI innovation for later life is strongly marketed towards LTC to address health and social problems of residents (Chen, 2020; Hsu, 2021). While AI can bring opportunities for LTC, it can intensify challenges such as ageism. A WHO (2022) report suggests ageism can be encoded in data to develop AI, since older people’s lower technological use may result in them “not being fully represented in the datasets used to train and validate AI algorithms, thereby rendering the technologies less specific for individual characteristics and needs” (p. 7). Research has confirmed that older people are significantly under-represented in datasets used to train AI, reflecting existing social biases (Park et al., 2021). As governments and health systems deploy AI to automate decision making, these technologies can impact the distribution of social resources (Chu et al., 2022; Neves & Omori, 2023). Through design, deployment, and usage, AI can exclude older people and further contribute to simplistic representations of later life. This may occur not only if AI datasets rely solely on healthy older people, but also if biases concerning technology incompetence—or biomedical approaches to aging as mere biophysical decline—are embedded in AI development and use (Chu et al., 2022; Neves & Omori, 2023). Studies on digital technologies, such as apps, have demonstrated the consequences of simplistic aging representations on technological development and implementation, including aggravating social marginalization (Neves & Mead, 2021; Neves & Vetere, 2019; Seifert et al., 2020; Sin et al., 2021). For instance, as illustrated by Mannheim and colleagues (2021), health professionals hold ageist perceptions of older people regarding their ability to use digital technologies for treatment, raising questions about how technological systems are deployed in care. This shows we must consider different groups of stakeholders when investigating how ageism manifests in care settings.
Therefore, it is critical to understand how technologists/developers and LTC staff and advocates view aging and AI. These views frame not only how AI technologies for LTC are imagined and developed, but also how they are implemented beyond what is coded into datasets. AI offers a relevant case study to explore ageism, as it is: 1) at the forefront of technological innovation, affecting how older people are seen and included/excluded in such processes, and 2) enveloped in narratives associated with science fiction and societal tropes (Elliott, 2019), providing insights into sociocultural views of technology and aging. Despite AI’s popularity, the concept of AI has been contested since its inception in the 1950s, with many definitions proposed (Elliott, 2019). Each attempt to define AI is “setting a frame for how it is understood, measured, valued, and governed” (Crawford, 2021, p. 7). While most researchers focus on technical or material aspects of AI, we cannot overlook its social dimensions (Crawford, 2021). How AI is comprehended, responded to, and deployed—and by whom—depends on wider socio-political structures and processes, including dominant societal values and norms. As such, this article considers both material and social dimensions of AI.
In this context, we explore aging perceptions and representations in relation to AI technologies by engaging different LTC stakeholders. To frame this exploration, we employ the concepts of “promissory discourse” of technology and “aging anxieties”, as described next.
Promissory Discourses and Aging Anxieties
We draw on ideas from sociology and science and technology studies (STS), namely “promissory discourses” of technology and “aging anxieties”, to explore narratives about AI technologies for later life care. Promissory discourse involves various forms of communication (written and verbal) that assure certain events will occur in the future (Petersen, 2018). This type of discourse has been studied in relation to digital health and the “anti-aging” market (Marent & Henwood, 2021; Petersen, 2018), demonstrating its value for ascertaining the conceptions that support how services/products are defined, marketed, and used. Examples of promissory discourse of technology include claims that AI can eliminate loneliness amongst LTC residents or free up staff’s time to dedicate to one-on-one care. We argue that it is a fruitful concept for studying AI development and deployment in LTC, because it illuminates: 1) how such technologies are perceived in terms of outcomes and future implications by stakeholders, and 2) how those perceptions relate to beliefs and scripts about aging and later life.
We approach technology as a sociotechnical system—not just a device, but an intertwining of social and technical dimensions rooted in the perceptions, practices, and values of multiple actors and structures (Barbosa Neves et al., 2021; Crawford, 2021; Neves & Vetere, 2019). This sociotechnical constellation is complicated by the addition of care systems, which create new opportunities and challenges because people’s hopes for innovations that promise to improve their lives may be exploited by businesses (Petersen, 2018). Promissory statements about novel technologies can serve to commodify later life and ageism, through heightened anxieties about aging in a society obsessed with looking/being young, coupled with homogenous ideas of older people.
We, thus, combine promissory discourses with aging anxieties. Aging anxieties connect micro-level reactions (one’s process of growing older) with macro-level understandings about getting old and older people in general. The concept refers to apprehension, concern, and uncertainty about later life. It can be a form of ageism due to its connotations of biopsychosocial decline; yet we follow its original broad sense that includes uncertainty (e.g., about how one’s life will unfold), which might not necessarily entail ageist views (Petersen, 2018). We propose that aging anxieties can be particularly visible in LTC and associated technologies, since health issues and disabilities become central and conspicuous. This has the potential to homogenize care and its recipients, and to perpetuate ideas of later-life deterioration and dependency. Ageism can intersect here with other marginalizing practices such as ableism: stereotypes, prejudice, and discrimination towards people living with disabilities or disabled people (Curryer & Cook, 2021).
We must then understand how those designing and developing technology, and those influencing its implementation and uptake, perceive its role in institutionalized care—and whether their perceptions reveal evidence of ageism. By combining the concepts of promissory discourse and aging anxieties to study perceptions of AI developers and LTC staff and advocates, we can help locate if/how ageism unfolds and becomes inscribed into scripts and scenarios of older users/recipients of care. Studies show these scripts and scenarios (or personas, tropes, representations) are entrenched in technology design, defining its applicability (Peine et al., 2021). Ageism can be generated by design, when inscribed into technology through ideas about users’ characteristics (Neves & Vetere, 2019). But it can also be produced by choices about technological implementation, use, or resistance. This underscores the importance of incorporating diverse stakeholders in research.
Methods
A qualitative design was employed to explore perceptions and uses of AI technologies in Australian LTC through interviews. A purposive sample included AI developers and LTC practitioners. The latter encompassed LTC staff and advocates. Due to COVID-19 lockdowns and ongoing restrictions, we were unable to include residents. The project was approved by our university’s ethics committee (ID: 23715), and interviewees provided informed consent. The research team was interdisciplinary, bringing together sociology, neuroscience, ethics, and computer science researchers. Methods are reported using the Consolidated Criteria for Reporting Qualitative Research (COREQ; Booth et al., 2014).
Participants and Data Collection
Eighteen semi-structured interviews were conducted in 2020. Participants comprised seven AI developers, seven LTC staff, and four advocates for LTC residents (see Table 1 for pseudonyms). Developers had worked on AI-based technologies for LTC. LTC staff were interested in AI or had prior experience with AI-based care. Advocates had been involved with AI systems, had promoted assistive technologies, or employed or were interested in AI in LTC. Interviewees were recruited through email invitations sent to LTC providers, AI companies, and professional networks. Sample size was determined by qualitative guidelines for groups with a certain homogeneity, such as occupation (e.g., n = 6–12; see Guest et al., 2006); by the number of AI developers focusing on Australian LTC and advocates with experience on the topic whom we were able to identify; and by access to LTC staff, a hard-to-reach group during COVID-19. As Australian LTC are undergoing structural changes due to a Royal Commission into the sector (a government-funded public inquiry), advocates are considered critical stakeholders (Royal Commission into Aged Care Quality and Safety, 2021). Advocates included former LTC staff, health/social workers, and a family member of a resident.
Table 1.
Participants and Stakeholder Groups.
Stakeholder Group | Pseudonym | Occupation | Age | Gender |
---|---|---|---|---|
AI Developers | Camille | Academic Researcher | 38 | W |
AI Developers | Gary | Academic Researcher/Founder of Tech Company | 60+ | M |
AI Developers | Jeff | Academic Researcher/Founder of Tech Company | 60+ | M |
AI Developers | Larry | Academic Researcher | 50+ | M |
AI Developers | Mark | Founder/CEO of Tech Company | 55 | M |
AI Developers | Don | Project Manager of Tech Company (Clinical Trial Lead) | 56 | M |
AI Developers | Ken | Founder/CEO of Tech Company | 50+ | M |
LTC Staff | Kathy | Quality Assurance Manager | 42 | W |
LTC Staff | Tom | Geriatrician | 73 | M |
LTC Staff | David | Innovation Manager | 49 | M |
LTC Staff | Nita | Clinical Manager | 38 | W |
LTC Staff | Jen | Innovation Manager | 40+ | W |
LTC Staff | Melissa | Lifestyle Coordinator | 39 | W |
LTC Staff | Brian | CEO | 43 | M |
LTC Advocate | Sam | Older People’s Health, Disease, and Illness Advocate | 37 | W |
LTC Advocate | Martha | Older People’s Health, Disease, and Illness Advocate | 56 | W |
LTC Advocate | Nicola | Technology Advocate for Older People with Disabilities | 40+ | W |
LTC Advocate (family) | Lisa | Daughter | 51 | W |
Flexible semi-structured interview guides were used, comprising open- and closed-ended questions (see Supplemental Material), and allowing for new or follow-up questions. The guides were pilot tested through internal networks. Because of pandemic-related lockdowns, interviews were conducted online via Zoom (a video conferencing platform). Interviews ranged from 1 to 2 hours (M = 60.46 minutes, SD = 20.70 minutes) and were conducted by author MO, who is experienced in qualitative gerontology. Interviews were recorded and transcribed verbatim via a transcription service.
Data Analysis
Through thematic narrative analysis, we focused on participants’ accounts and experiences of AI—keeping each narrative intact and interpreted as a whole rather than reduced to a category list (Riessman, 2008). We opted for this form of analysis because of its capacity to capture discourses while preserving continuous accounts rather than coded fragments or short themes. The coding process identified thematic elements across participants, but was case-centered to enable a contextualization of each participant’s views and circumstances. After thoroughly reading transcripts, team members experienced with qualitative analysis (BBN, MO, AP, AC) conducted independent coding, “zoom[ing] in, identifying the underlying assumptions in each account and naming (coding them)” and illustrating “general patterns—range and variation—and the underlying assumptions of different cases” (Riessman, 2008, p. 57). Subsequently, the team collectively discussed interpretive consistencies/inconsistencies, agreeing on results. Thematic narrative analysis is guided by prior theoretical development (Riessman, 2008), but does not preclude researchers from considering new understandings. Hence, a mixed analytical approach was favored: both deductive (pre-determined “thematics”) and inductive (not pre-determined). For example, our initial analysis inductively identified ageism as a general theme amongst other themes. For this article, we deepened our analysis of the ageism theme, framing our coding through promissory discourses and aging anxieties. These conceptual frameworks facilitated a richer and more nuanced exploration of ageism, particularly of its intersections with technology and aging in LTC. While this coding stage was mostly deductive, we kept an open approach to new concepts/understandings of the data. Upholding the qualitative ethos of this type of analysis, we cannot claim saturation (i.e., knowledge generation did not reach an endpoint) and followed the trustworthiness strategies outlined next to conclude our analytical process. Findings are reported using a rich narrative approach rather than via codebooks or coding trees (Riessman, 2008).
Trustworthiness Strategies
Transcripts were analyzed separately by two team members, guided by our research aims. Results were then discussed and agreed upon by the whole team, reaching consensus when discrepancies emerged. The team included five researchers with PhDs (three women from diverse ethnic backgrounds, two white men): three sociologists working on technology and aging (BBN, AP, MO), one bioethicist/neuroscientist researching responsible technological innovation (AC), and one computer scientist advancing explainable AI (MV). Bridging social and computer sciences, we reflected on how our diverse knowledge and values influenced—but also enriched—data collection and analysis. Our reflexivity was cross-disciplinary, relying on dialectical pluralism, which is grounded in an ongoing dialogue between multiple scientific epistemologies/perspectives (Barbosa Neves & Baecker, 2022). This dialogue took place via research meetings and was captured in written logs.
Findings
AI Promises and Solutions for Long-term Care Homes
All stakeholders portrayed or imagined AI in relation to a set of promises and solutions. Promissory discourses revolved around AI’s potential and impact in solving present and future LTC challenges in three main areas: 1) social connection, 2) monitoring and detection (falls prevention, sensors tracking unusual behavior or inactivity), and 3) clinical uses (pain assessment, medication adherence, diagnoses). Developers mentioned all three areas, whereas staff and advocates mostly focused on monitoring. Each area highlights particular types of promissory discourses.
Social Connection: Hope and Education
Developers saw AI—like smart voice assistants, chatbots, and robots—as answers to the “rising” loneliness and social isolation of residents. AI promises hope to address social issues. Camille, an AI developer and academic researcher, explained that AI can “bring in some hope,” because:
Most of them [residents] end up in being in a bed or in just a room and very few of them will have the mobility to move around and talk to people. When they reach the residential aged care¹ at that stage, what happens is that they know that this is the end of their life. One thing that we can do is to keep them connected to their loved ones and let them continue that social thing that they were having.
However, Camille reported that the hope of AI is restricted by LTC providers and residents’ families via pre-conceptions about technology.
Another promissory discourse embedded in developers’ accounts was that by educating people about AI, concerns about its uses for social connection would be minimized. For instance, while seeming to discount public concerns about the potential harms of AI, such as privacy threats or caring issues, Camille emphasized the importance of education to deconstruct “preconceived ideas that people have that technology can be harmful”.
LTC practitioners did not engage with this area, addressing AI mainly from a monitoring perspective as shown below.
Monitoring and Detection: Time and Interoperability
Concerning AI for monitoring and detection, developers, staff, and advocates shared common promissory discourses about how these technologies would automate tasks, allowing staff to increase their time with residents. Staff and advocates mentioned smart documentation, tracking and fall prevention technologies, and systems to lift residents. Their focus rested on how AI can be applied to automate such administrative and supportive tasks, freeing up their schedules for more individualized care. Yet, developers conveyed more complex solutions. For example, Ken, founder of a big data analytics company, developed an AI technology for LTC inspired by his wife, who died of illness in LTC. The technology uses smart voice assistance to monitor medications, mood, and other needs (emergency, falls), but can also provide opportunities for social connection. He tested the technology with “10 senior elderlies, and they love it.” While the device is commencing trials in LTC, the technology’s “potential is huge” to assist staff, “based on the many years’ experience of being in and out of those care centers”.
Ken described another promissory discourse that was visible across all developers but not mentioned by other stakeholders: the interoperability of AI to automate not one but various tasks, while communicating with other LTC technologies. Interestingly, staff noted during interviews that developers lack knowledge of systems in LTC, usually ignoring infrastructure limitations. But, for Ken, it is the risk-averse mindset of LTC that precludes AI uptake. He “encountered many negative comments. People don’t understand. But my intention is not trying to make money. My intention is trying to bring something, which is going to benefit those people because my family has been through that.” Ken is optimistic that COVID-19 has proven we need AI to disrupt LTC, while adding that the government “spends too much money on aged care” and not enough on innovation.
Clinical Uses: Objectivity and Effectiveness
Clinical applications were primarily mentioned by developers, although briefly alluded to by a few staff and advocates in relation to dementia. Gary, with a background in medical sciences, developed a pain assessment tool using facial recognition analyses, voice, and behavioral algorithms to recognize and score pain levels. The application targets people with dementia, as they “can’t self-report pain,” addressing a significant problem:
By the year 2050, there will be well over 130 million people in the world who have dementia (…) artificial intelligence is not there to remove the need for a clinician. What it’s there to do is to remove some of the laborious tasks that people need to undertake or to give objectivity to particular assessment.
The objectivity of AI was a promissory discourse evident across all developers’ interviews. A minority recognized social biases in datasets, which they saw as minimal problems to be solved with more technological innovation. The same was reported regarding the effectiveness of AI versus human assessors. Gary explained that his tool is necessary because professionals trained to do facial assessments cannot match AI’s effectiveness. He admitted that “with all artificial intelligence, it will never be 100% accurate, but when you use it and you use it repeatedly, it is consistent.” Gary shared ideas for future development given the “enormous” potential of AI: “into the future, why wouldn’t you have elderly patients who have wearable devices or walk past smart mirrors and as they walk past the mirror, it’s able to do a whole range of biometric assessments to determine how that person is today.”
For Sam, an LTC advocate, AI can play a vital clinical role supporting people with dementia, since “They need prompting for absolutely everything and that’s just to get through their daily living”. However, for a few staff and advocates, general promises of AI’s objectivity and effectiveness raised concerns about care reliability and accountability: AI “gimmicks” replacing “human touch” and not tested with end-users could affect quality of care and caring responsibilities.
Aging Anxieties: From Systemic Issues to ‘Aging Crisis’
The AI promissory discourses outlined above were coupled with several aging anxieties. Accounts of systemic LTC issues were reported by all stakeholders, namely concerning staff shortages. Whilst these issues rest on current challenges facing the sector and cannot be categorized as ageism, some of the narratives underpinning the need for AI in LTC unveiled aging anxieties linked to general ageist views and stereotypes. For instance, all developers warned about the unavailability of present and prospective carers due to an “ageing population crisis”, suggesting this could be addressed through technological solutions. These narratives implied that the aging population is a problem, overlooking the societal contributions that older people make. Narratives also accentuated techno-solutionism—the idea that AI can solve all social problems. Staff and advocates were more critical of cost-based approaches to aging but shared developers’ concerns about an aging population. Aging anxieties were strongly attached to dementia, with stakeholders often referencing it when discussing AI’s applications for later life care. As highlighted by Sam, AI
“would probably play a really big role [in dementia] (…) robots really interactive may play a huge part in keeping those residents occupied or stimulated in one way or another (…) They may be so fixated on that alone that’s so much better than them just sitting quietly with their own thoughts, or whatever thoughts they may have left in their head.”
Another prominent aging anxiety amongst all stakeholders concerned the technological skills and interest of older people, including current and future LTC residents and staff. This anxiety related to age-related stereotypes about present older generations and their distrust of and inability to use technology. For example, Mark, from a company specializing in LTC systems, explained:
Most people that are elderly themselves have, and I'm generalizing here, a distrust of technology in general, and their distrust of AI is even worse. They’ve been brought up in a generation [with] stupid sci-fi movies and books that always paint a negative picture, and that is bad.
LTC staff and advocates echoed similar perceptions. Quality assurance manager Kathy stated: “the generation that we’re going to have at the moment, they don’t like technology.” For Melissa, an LTC lifestyle coordinator: “we have older nurses and older population. I think you’ll struggle there [with AI], but younger generations will probably adapt”. Many interviewees saw generational differences in technology acceptance and skills—among residents and staff—as a current and future challenge. As Sam explained:
I’m a young person still and I’m passionate about aged care but it’s an area that I’ve chosen to nurse in. But most young people prefer to enter the hospital environment. Aged care generally gets staffed with the older generation of carers and change management is really difficult to do amongst those individuals. Younger people are always open to new ideas…
Aging anxieties captured these beliefs that as we grow older, we stop being receptive to novel ideas or technology.
Sociotechnical Ageism
Upholding our methodological approach meant remaining open to new concepts. Through promissory discourses and aging anxieties, we found a complex web of social and technological ageist narratives. This led us to the analytical concept of sociotechnical ageism—inspired by the use of the sociotechnical term in science and technology studies (STS)—to highlight the assemblage of social, technological, and physical/material contexts and processes beyond the “digital”. This term further aligns with our definition of technology as more than a device/tool. Our results do not just imply a “digital” focus, but a much wider interplay of social and technical ideas of later life, as well as broader applications of technological systems that situate older users and their contexts in a one-dimensional, fixed, and passive light. The technical refers here to both technological structures (e.g., material and norms) and a general sense of technicalities (e.g., skills). For example, discourses permeating interviews homogenized older people in LTC through uniform descriptions of “residents”, underscoring their dependency, frailty, and lack of technological skills/interest. On the one hand, descriptions of residents’ technological ability rested on social scripts about aging, LTC, and health. On the other hand, general descriptions of residents relied on technical scripts about their technological status. This sociotechnical approach stresses that we cannot grasp technology and its impacts without understanding the social contexts where technologies and their uses are embedded, and vice versa.
Alongside sociotechnical ageism, a few staff/advocates and developers questioned ageist views of residents. Jeff, a roboticist, stated “we don’t seem very committed, as a society, to spend resources for older people”, but then added that “as people grow older, they don’t want to spend much money.” Even when well-meaning, narratives were punctuated by stereotypes and othering language about older people (e.g., “them” and “elderly”), including amongst older interviewees. Benevolent (caring) perspectives were frequently accompanied by paternalistic discourses about aging and technology.
Taken together, results reveal intersecting promissory discourses of AI, aging anxieties, and sociotechnical ageism, as discussed next.
Discussion and Implications
Findings show how understandings of AI and later life are engrained in promissory discourses (i.e., potential roles and expectations of technology) and aging anxieties (i.e., concerns and uncertainties). These discourses and anxieties were mostly based on ageist stereotypes homogenizing aging and older recipients of care as passive, dependent, and inept. As demonstrated, it is not only technology developers who can hold prejudiced views of older people; gerontology professionals can also espouse persistent pre-conceptions, particularly concerning technological capacity (Mannheim et al., 2021).
Promissory discourses pervaded narratives about the potential of AI to, for example, bring hope and effectiveness to solve “critical” social and clinical problems, from growing older cohorts to dementia. These problems related to aging anxieties regarding the constraints of an aging population and the uncertainty of later life care. All developers engaged in problem-solving approaches, referring to existing and prospective issues AI can fix. Techno-solutionism pervaded their accounts, reducing social concerns to problems solvable through technology (Morozov, 2013). For instance, AI’s capacity to alleviate loneliness was depicted in ways that discounted a complex phenomenon that cannot be solved by simply adding more technology (Barbosa Neves et al., 2021). LTC staff and advocates were more critical of AI, elaborating on its limits; however, all acknowledged the promised affordances of AI to automate tasks and free up staff’s time to dedicate to residents.
Like developers, LTC practitioners also displayed aging anxieties—from standard assessments of residents’ needs to biomedical representations of aging that reduced people to their illnesses. These reinforced some of the promissory discourses embraced by developers regarding care recipients. Whereas the diversity of LTC was sometimes alluded to, most statements assumed “typical” residents whose agency was seldom discussed. Furthermore, technological tropes were prominent concerning both older residents and older staff in terms of generational differences in interest/skills in emerging technologies. This meant that, for most LTC staff/advocates, the promise of AI was contextualized within administrative tasks, as older staff would be unable to take up complex AI. While all interviewees were sympathetic and caring in relation to older people, they used othering language (“them”) and drew on stereotypes about technological interest and ability in later life. Not all aging anxieties found can be framed as ageism, as some were based on systemic issues affecting the sector. But most drew on age-related stereotypes of an aging population, dismissing the heterogeneity of later life and perpetuating basic scripts about LTC residents.
The confluence of broader social and technical ideas about older people’s skills, capacity, needs, contexts, and care suggests that, more than digital ageism, we encountered sociotechnical ageism. This ageism not only concerns the inability of older people to use technology or their technological exclusion; it also captures the intertwining social (e.g., living settings and age-related norms) and technical (e.g., technological development, skills, and scripts) assumptions that characterize later life and the types of technologies that are imagined and designed to support LTC. The applications and narratives about AI seemed to concurrently rely on and lead to this wider form of ageism, which goes beyond a device to encompass social and technical beliefs about aging, care, illnesses, and associated innovations. This ageism is not just digital—it rests on a dynamic intersection of social and technical dimensions shaping the views of those who design and inscribe an end-user script into algorithms, and of those who have the power to implement AI through choices, practices, and applications. Digital ageism has been employed, in a few instances, to capture general ageist attitudes, not just those related to technology (Chu et al., 2022). However, the focus on the digital often detracts from the social (Crawford, 2021). The sociotechnical approach helps ensure we integrate both by not neglecting the social dimensions of the technical and the technical dimensions of the social (Bijker, 1994; Crawford, 2021). This further allows us to deconstruct the technological determinism that suffuses accounts of AI (Crawford, 2021; Neves & Omori, 2023), recognizing that it is not just a technology that can perpetuate ageism but the social and technical values, discourses, actions, and contexts influencing the development, implementation, and use of that technology.
Conclusion
This article innovatively applied promissory discourses and aging anxieties to the context of LTC and AI to tease out ageism. It also proposed the broader concept of sociotechnical ageism to illuminate multiple social and technical intersections. By exploring aging perceptions in relation to AI technologies in LTC, we identified promissory discourses and aging anxieties permeating the narratives of stakeholders with a vital role in designing and deploying technology in later life. These narratives often resulted in sociotechnical ageism across AI developers and LTC staff and advocates. Understanding how this ageism unfolds—from development to implementation—is critical to enhance opportunities for socially just and inclusive AI. This requires tackling AI’s age-related biases and potential harmful effects, commonly neglected in AI scholarship, practice, and policy (Chu et al., 2022). Studies such as ours, relying on a multidisciplinary team of social, health, and computer scientists, can help illuminate the various sociotechnical biases that a sole disciplinary lens might miss. Despite offering in-depth qualitative insights on an underexplored topic, findings are limited by an Australian LTC focus and by missing residents’ perspectives, which would contribute additional understandings of aging, including potential internalization of ageism. Expanding the study of sociotechnical ageism to residents, other settings, and countries will test its applicability and implications.
Supplemental Material
Supplemental Material for Artificial Intelligence in Long-Term Care: Technological Promise, Aging Anxieties, and Sociotechnical Ageism by Barbara Barbosa Neves, Alan Petersen, Mor Vered, Adrian Carter, and Maho Omori in Journal of Applied Gerontology
Acknowledgments
The authors are grateful to all participants who took part in this project, to Monash Data Futures Institute, Dr. Claudia Del Campo Marin, and Sandra Sanders for their support. Authors also thank the editors of this special issue on ageism and the two anonymous reviewers for their constructive feedback that helped refine this article.
Note
1. Residential aged care facility (sometimes referred to as ‘aged care’) is the official term for LTC in Australia.
Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a Monash Data Futures Institute grant awarded to the project’s team (Seed grant) and by a Faculty of Arts grant (Carers Grant 2022) awarded to the first author.
IRB Approval: The study was approved by Monash University’s Human Research Ethics Committee, approval number 23715.
Supplemental Material: Supplemental material for this article is available online.
ORCID iDs
Barbara Barbosa Neves https://orcid.org/0000-0002-4490-4322
Mor Vered https://orcid.org/0000-0001-5286-6509
References
Ayalon L., Chasteen A., Diehl M., Levy B., Neupert S., Rothermund K., Tesch-Römer C., Wahl H. (2021). Aging in times of the COVID-19 pandemic: Avoiding ageism and fostering intergenerational solidarity. The Journals of Gerontology: Series B, 76(2), e49–e52. 10.1093/geronb/gbaa051
Barbosa Neves B., Baecker R. (2022). Mixing methods and sciences: A longitudinal cross-disciplinary mixed methods study on technology to address social isolation and loneliness in later life. Journal of Mixed Methods Research, 16(1), 88–113. 10.1177/1558689820977646
Barbosa Neves B., Waycott J., Maddox A. (2021). When technologies are not enough: The challenges of digital interventions to address loneliness in later life. Sociological Research Online. Advance online publication. 10.1177/13607804211029298
Bijker W. E., Law J. (Eds.). (1994). Shaping technology/building society: Studies in sociotechnical change. MIT Press.
Booth A., Hannes K., Harden A., Noyes J., Harris J., Tong A. (2014). COREQ (consolidated criteria for reporting qualitative research). In Guidelines for reporting health research: A user’s manual (pp. 214–226). John Wiley and Sons. 10.1002/9781118715598.ch21
Butler R. N. (1969). Age-Ism: Another form of bigotry. The Gerontologist, 9(4, Part 1), 243–246. 10.1093/geront/9.4_Part_1.243
Buttigieg S., Ilinca S., Sao Jose J., Larsson A. (2018). Researching ageism in health-care and long-term care. In Contemporary perspectives on ageism (pp. 493–515). Springer.
Chen L. K. (2020). Gerontechnology and artificial intelligence: Better care for older people. Archives of Gerontology and Geriatrics, 91, 104252. 10.1016/j.archger.2020.104252
Chu C. H., Nyrup R., Leslie K., Shi J., Bianchi A., Lyn A., McNicholl M., Khan S., Rahimi S., Grenier A. (2022). Digital ageism: Challenges and opportunities in Artificial Intelligence for older adults. The Gerontologist, 62(7), 947–955. 10.1093/geront/gnab167
Crawford K. (2021). Atlas of AI. Yale University Press.
Curryer C., Cook P. S. (2021). Counting the costs of ageism: Discrimination and COVID-19. Australasian Journal on Ageing, 40(3), 237–240. 10.1111/ajag.12993
Elliott A. (2019). The culture of AI: Everyday life and the digital revolution. Routledge.
Guest G., Bunce A., Johnson L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82. 10.1177/1525822x05279903
Hsu E. (2021). Technogenarians: Ageing and robotic care. In The Routledge social science handbook of AI (pp. 266–280). Routledge.
Köttl H., Gallistl V., Rohner R., Ayalon L. (2021). “But at the age of 85? Forget it!” Internalized ageism, a barrier to technology use. Journal of Aging Studies, 59, 100971. 10.1016/j.jaging.2021.100971
Mannheim I., van Zaalen Y., Wouters E. (2021). Ageism in applying digital technology in healthcare: Implications for adoption and actual use. In Wouters E. (Ed.), Digital transformations in care for older people: Critical perspectives (pp. 72–90). Routledge. 10.4324/9781003155317-7
Manor S., Herscovici A. (2021). Digital ageism: A new kind of discrimination. Human Behavior and Emerging Technologies, 3(5), 1084–1093. 10.1002/hbe2.299
Marent B., Henwood F. (2021). Digital health. In Lyons A., Chamberlain K. (Eds.), International handbook of critical issues in health and illness (pp. 261–275). Routledge. 10.4324/9781003185215-24
Morozov E. (2013). To save everything, click here: The folly of technological solutionism. Public Affairs.
Neves B. B., Amaro F. (2012). Too old for technology? How the elderly of Lisbon use and perceive ICT. The Journal of Community Informatics, 8(1), 1–12. 10.15353/joci.v8i1.3061
Neves B. B., Colón Cabrera D., Sanders A., Warren N. (2023). Pandemic diaries: Lived experiences of loneliness, loss, and hope among older adults during COVID-19. The Gerontologist, 63(1), 120–130. 10.1093/geront/gnac104
Neves B. B., Mead G. (2021). Digital technology and older people: Towards a sociological approach to technology adoption in later life. Sociology, 55(5), 888–905. 10.1177/0038038520975587
Neves B. B., Omori M. (2023, forthcoming). Artificial intelligence for long-term care in later life. In The handbook on the sociology of health and medicine. Edward Elgar.
Neves B. B., Vetere F. (2019). Ageing and digital technology: Designing and evaluating emerging technologies for older adults. Springer.
Neves B. B., Waycott J., Malta S. (2018). Old and afraid of new communication technologies? Reconceptualising and contesting the ‘age-based digital divide’. Journal of Sociology, 54(2), 236–248. 10.1177/1440783318766119
Park J., Bernstein M., Brewer R., Kamar E., Morris M. (2021). Understanding the representation and representativeness of age in AI data sets. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 834–842). 10.1145/3461702.3462590
Peine A., Marshall B., Martin W., Neven L. (2021). Socio-gerontechnology: Interdisciplinary critical studies of ageing and technology. Routledge.
Petersen A. (2018). Capitalising on ageing anxieties: Promissory discourse and the creation of an ‘anti-ageing treatment’ market. Journal of Sociology, 54(2), 191–202. 10.1177/1440783318766165
Riessman C. (2008). Narrative methods for the human sciences. SAGE.
Rosales A., Fernández-Ardèvol M. (2020). Ageism in the era of digital platforms. Convergence, 26(5–6), 1074–1087. 10.1177/1354856520930905
Royal Commission into Aged Care Quality and Safety. (2021). Final report. Commonwealth Government of Australia. https://agedcare.royalcommission.gov.au/sites/default/files/2021-03/final-report-volume-1_0.pdf
Seifert A., Cotten S., Xie B. (2020). A double burden of exclusion? Digital and social exclusion of older adults in times of COVID-19. The Journals of Gerontology: Series B, 76(3), e99–e103. 10.1093/geronb/gbaa098
Sin J., Franz R., Munteanu C., Neves B. B. (2021). Digital design marginalization: New perspectives on designing inclusive interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–11). 10.1145/3411764.3445180
Vervaecke D., Meisner B. A. (2021). Caremongering and assumptions of need: The spread of compassionate ageism during COVID-19. The Gerontologist, 61(2), 159–165. 10.1093/geront/gnaa131
WHO. (2022). Ageism in artificial intelligence for health. World Health Organization. https://apps.who.int/iris/rest/bitstreams/1408281/retrieve