Published in final edited form as: Gerontologist. 2022 Jan 20;62(7):947–955. doi: 10.1093/geront/gnab167

Digital Ageism: Challenges and Opportunities in Artificial Intelligence for Older Adults

Charlene H Chu 1,2,*, Rune Nyrup 3, Kathleen Leslie 4, Jiamin Shi 1,5, Andria Bianchi 5,6, Alexandra Lyn 4, Molly McNicholl 7,8, Shehroz Khan 2,9, Samira Rahimi 10,11, Amanda Grenier 12,13

Abstract

Artificial intelligence (AI) and machine learning are changing our world through their impact on sectors including health care, education, employment, finance, and law. AI systems are developed using data that reflect the implicit and explicit biases of society, and there are significant concerns about how the predictive models in AI systems amplify inequity, privilege, and power in society. The widespread applications of AI have led to mainstream discourse about how AI systems are perpetuating racism, sexism, and classism; yet, concerns about ageism have been largely absent in the AI bias literature. Given the globally aging population and proliferation of AI, there is a need to critically examine the presence of age-related bias in AI systems. This forum article discusses ageism in AI systems and introduces a conceptual model that outlines intersecting pathways of technology development that can produce and reinforce digital ageism in AI systems. We also describe the broader ethical and legal implications and considerations for future directions in digital ageism research to advance knowledge in the field and deepen our understanding of how ageism in AI is fostered by broader cycles of injustice.

Keywords: Bias, Gerontology, Machine learning, Technology


The intersection of an aging population with rapid technological advancements has given rise to novel considerations in the realm of Artificial Intelligence (AI). Russell and Norvig define AI as the "study of agents that receive percepts from the environment and perform actions" (Russell & Norvig, 2010, p. viii).

Current research examining biases in AI is largely focused on racial and gender biases and the serious consequences that arise as a result (Zhavoronkov et al., 2019); however, little attention has been paid to age-related bias, known as ageism (Butler, 1969), in AI. Ageism is a societal bias conceptualized as (a) prejudicial attitudes toward older adult populations and the process of aging, (b) discriminatory practices against older adults, and/or (c) institutionalized policies and social practices that foster these attitudes and actions (Rosales & Fernandez-Ardevol, 2020; Wilkinson & Ferraro, 2002). The pervasiveness of ageism was highlighted during the coronavirus disease 2019 (COVID-19) pandemic, when older adults were considered the sickest and most vulnerable population (Vervaecke & Meisner, 2021). A report from the World Health Organization (WHO) and United Nations (UN) calls for urgent action to combat ageism because of its negative impacts on well-being and its contributions to premature death and higher health costs (WHO & UN, 2021). As noted in the WHO and UN (2021) report, scarce health care resources are sometimes allocated based on age, which means that an individual's age may influence whether or not they receive essential health interventions. With biases in AI recognized as a critical problem requiring urgent action, it is essential to invest in evidence-based strategies to prevent and tackle age-related bias in AI systems. These strategies can inform future legal and social policy developments to help mitigate this bias and advance social equity. In this Forum, we introduce the term digital ageism, which we define as age-related bias in technologies such as AI, and discuss the mechanisms that lead to biases in AI systems. In the subsequent sections, we describe ageism in AI systems, broader ethical and legal implications, and considerations for future directions in research.

Biases in AI Systems

AI has experienced exceptional advancements in its ability to learn and reason and accordingly has been described as the "fastest-moving technology" (Brown, 2020). As a tool, AI has no inherent limits to its potential range of uses. At their most fundamental, AI tools work by subjecting large data sets—the bigger the better—to rapid machine learning algorithms capable of pattern recognition, statistical correlation, prediction, inference, and problem-solving (Presser et al., 2021). A recent report indicates that a "digital world" of more than 2.5 quintillion bytes of data is produced each day (O'Keefe et al., 2020). As a result of its immense capability to process data for predictive modeling, AI has been touted for its transformative potential and has become increasingly salient as a matter of public and political interest. The ability of AI to supplement human decision making at high speed and on a population or even global scale positions AI to fundamentally change the nature of the global economy (Margetts & Dorobantu, 2019; Presser et al., 2021).

Notwithstanding its immense promise, AI applications released to the public are not free from racial and gender biases (Chen et al., 2019; Howard & Borenstein, 2018). For instance, a widely deployed AI algorithm was shown to underestimate the health risks of Black patients compared to White patients (Obermeyer et al., 2019). The algorithm's prediction was based on individuals' health care costs, but it failed to consider the primary cause of Black patients' lower health care spending: reduced access to health care due to systemic racism. Other instances of racial bias include AI systems assigning longer jail sentences to Black inmates (Angwin et al., 2016) and imprecise facial recognition algorithms misidentifying Black faces at a rate five times higher than White faces (Simonite, 2019). AI bias against women has also been identified, with serious socioeconomic consequences including women being less likely to receive job search advertisements for high-paying positions (Datta et al., 2015) and job discrimination (Dastin, 2018). This bias can be attributed to the way AI's predictive algorithms learn not only from quantitative data but also from text (i.e., corpora), which insidiously encodes historical–cultural associations that result in semantic biases, such as associations between stereotypical male names and working in the labor force or, conversely, female names and family/child-rearing (Caliskan et al., 2017).
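To make this mechanism concrete, the sketch below illustrates one common way such semantic bias is measured: comparing how strongly embedding vectors for age terms associate with pleasant versus unpleasant attribute words, in the spirit of the association tests reported by Caliskan et al. (2017). The vectors and word lists here are invented toy values for illustration only; an actual audit would load embeddings trained on a real corpus.

```python
# A minimal, self-contained sketch of an association test over word
# embeddings. The 3-d vectors below are hypothetical and chosen only to
# make the example concrete; real audits would use trained embeddings
# (e.g., word2vec or GloVe).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

# Hypothetical embeddings (illustration only).
emb = {
    "young":      np.array([0.9, 0.1, 0.2]),
    "old":        np.array([0.1, 0.9, 0.2]),
    "courageous": np.array([0.8, 0.2, 0.1]),  # "pleasant" attribute
    "stubborn":   np.array([0.2, 0.8, 0.1]),  # "unpleasant" attribute
}

pleasant, unpleasant = [emb["courageous"]], [emb["stubborn"]]
for target in ("young", "old"):
    print(target, round(association(emb[target], pleasant, unpleasant), 3))
# A positive score means the target word sits closer to the "pleasant"
# attributes; a consistent gap between "young" and "old" would indicate
# the kind of implicit age bias discussed above.
```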

One of the earliest definitions of bias in computer systems refers to a system's ability to "systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others. A system discriminates unfairly if it denies an opportunity or a good or if it assigns an undesirable outcome to an individual or group of individuals on grounds that are unreasonable or inappropriate" (Friedman & Nissenbaum, 1996, p. 332). Two distinct types of undesirable outcomes can result from algorithmic bias: harms of allocation and harms of representation (Crawford, 2017). Harms of allocation refer to the distribution of resources and opportunities, such as being released on bail, receiving notifications about potential job prospects, and accessing health care resources or services. In contrast, harms of representation refer to how different groups or identities are represented and perceived by society. It is important to note that the underlying causes of these types of harms are complex. While technical factors, such as biased data and design choices, play an important role, biases can also arise from the context of use, for example, from how human users interpret system outputs or from a mismatch between the capabilities and values assumed in the design of the system and those of its actual users (Danks & London, 2017; Friedman & Nissenbaum, 1996). These contextual factors can reflect underlying individual and social biases at every stage, from technology inception (who is involved in the design of technologies and the assumptions they make about end-users) to technology use by end-users, whose differing resources and capabilities affect what kind of data (and about whom) is readily collected. All of these factors are in turn shaped by both the allocative and representational effects of existing technologies, potentially creating a "cycle of injustice" (Whittlestone et al., 2019), where technological, individual, and social biases interact to produce and mutually reinforce each other (Figure 1). In the literature examining biases in AI, age-related bias is seldom discussed in comparison to racial and gender biases. It is time to critically reflect on and consider the experience of ageism in AI: the process of growing old in an increasingly digital world that directly and insidiously reinforces social inequities, exclusion, and marginalization. The next sections focus on the digital divide, cycles of injustice that reinforce ageism, and the ethical and legal aspects of digital ageism.

Figure 1. Cycles of injustices in how technology is developed, applied, and understood by members of society (Whittlestone et al., 2019).


Ageism and the Digital Divide

Both the development and use of technology have excluded older adults, producing a "physical–digital divide," which exists when a group feels ostracized because they are unable to engage with the technologies being used around them (Ball et al., 2017). The social exclusion of older adults from the development and use of digital platforms results in data symptomatic of age-related bias in AI (Rosales & Fernandez-Ardevol, 2020; Wilkinson & Ferraro, 2002). There is a misconception that older adults are a homogenous group of people who are "in decline," incompetent, and in need of younger people's guidance when it comes to technology (Mannheim et al., 2019). These paternalistic stereotypes and patronizing sentiments contribute to harmful compassionate ageism—"stereotypes concerning older persons that have permeated public rhetoric" (Binstock, 1983)—which is then reinforced and internalized by older adults (Vervaecke & Meisner, 2021). Internalized negative stereotypes can cause older adults to experience declines in cognitive (e.g., memory) and psychological performance (Hehman & Bugental, 2015; Hess et al., 2003).

Furthermore, in a society where AI is becoming increasingly prevalent, older adults are at risk of further social exclusion and retrogression due to a digital divide (Rosales & Fernandez-Ardevol, 2020). As technology advances, so does the risk of a gap that divides the aging population into those with access to information technology and those without (Srinuan & Bohlin, 2011). While older adults are using technology in greater numbers (Anderson et al., 2017) and benefitting from technology use (Anguera et al., 2017; Cotten et al., 2011; Czaja et al., 2018; Decker et al., 2019; Harerimana et al., 2019; Hurling et al., 2007; Irvine et al., 2013; Tomasino et al., 2017; White et al., 2002), they continue to be the age cohort least likely to have access to a computer and the internet due to physical barriers (e.g., physical disability) and/or psychological factors (e.g., lack of confidence in using technology; Anderson et al., 2017; Tomasino et al., 2017). One report from the European Union indicates that one third of older adults report never using the internet (Anderson et al., 2017). A survey of 17 European countries showed that internet use among older adults varied by location and age, with the rate of internet nonuse increasing with each decade of age (König et al., 2018). Results show that 52% of individuals 65 years and older were internet nonusers, and the percentage of nonusers increased to 92% among those 80–84 years old, indicating that "many older Europeans do not use the Internet and are particularly affected by the digital divide" (König et al., 2018, p. 626). Similarly, in Toronto, Canada, residents aged 60 and older report lower rates of access to home internet compared to younger residents, and those who do have access often experience internet speeds below the Canadian national target of 50 Mbps (Andrey et al., 2021). Additionally, almost one third (30%) of this older adult cohort lack a device through which they can connect to the internet (Andrey et al., 2021). Older people may also experience greater disparities in material access to technologies, education, and support to learn new technology (Ball et al., 2017; Cronin, 2003; Lagacé et al., 2015). For some older adults, the challenge of learning to use technology and the fear that technology will fail to work when most needed can be stressful (Cotten et al., 2011).

Ageist Cycles of Injustice in Digital Technologies

The barriers to technological access outlined above suggest possible explanations for the exclusion of older adults from the research, design, and development of digital technologies (Baum et al., 2014; Kanstrup & Bygholm, 2019; Lagacé et al., 2015). Older adults are sometimes referred to as "invisible users" in the literature, alluding to their exclusion from the technology design process, which renders their interests and values invisible (Kanstrup & Bygholm, 2019; Rosales & Fernandez-Ardevol, 2019). Their perspectives are either not taken into consideration, or are inaccurately represented, during technology design and product development, activities dominated by younger people. Research by Charness and colleagues (1990, 1992, 2009, 2020) highlights the misalignment in person–system fit that arises when normative age-related changes in perception, cognition, and psychomotor abilities are not accounted for, a mismatch that contributes to older adults' low adoption rates and suboptimal user experiences. The impact of this mismatch will intensify over time as society transitions to greater use of technology (e.g., health care technologies, information and communication technologies), leaving older adults further behind in a technology-enabled world.

Additionally, ageist attitudes (Abbey & Hyde, 2009), which manifest in marketing and research studies (Ayalon & Clemens, 2018), influence the design of technology through a historical exclusion of older adults, particularly through arbitrary upper age limits (e.g., 50+ or 60+) (Mannheim et al., 2019). The perception of older adults as a homogenous group potentially obscures the nuanced needs of older people. Moreover, a disproportionate amount of information technology targets older adults specifically for health care and chronic disease management (Mannheim et al., 2019), rather than for leisure, joy, or fun. The underlying assumption is that older adults are unhealthy and that managing health conditions is the only reason they may seek to use and benefit from technology. This assumption could create a feedback loop that reinforces negative stereotypes: if most technologies marketed toward older adults are designed to resolve or manage health problems, then this could easily reinforce the impression that older adults are mainly unhealthy, in need of support, and/or in decline. There is evidence of significant age bias, as demonstrated by Díaz et al. (2018), who used sentiment analysis on a large corpus of text data from Wikipedia, Twitter, and a web crawl of the internet. Díaz et al. (2018) found age-related bias with respect to explicitly and implicitly encoded ageist stereotypes. For example, sentences containing "young" were 66% more likely to be scored positively than the same sentences containing "old" when controlling for other sentential content, and in their analysis of word embeddings to explore implicit bias, they found "youth" was associated with words like "courageous" whereas "old" and "older" were associated with "stubborn" and "obstinate." Another effect is that the data collected from these technologies end up representing only the segment of older adults with health issues. This selection bias does not enable technologies to capture the heterogeneity of the aging population, causing a mismatch between targeted technologies such as AI and the actual needs of older adults (Crawford, 2017).

Taken together, there are not enough data from older adults available for training AI models, and the corpora that are available show explicit and implicit age-related bias (Díaz et al., 2018). Problems arise when these corpora are mined by algorithms to understand attitudes toward or about products or services, and the "sentiment output is less positive simply because the sentences describe an older person taking part in an interaction" (Díaz et al., 2018, p. 9). This can result in further bias that leads to nongeneralizable AI models and the development of future AI systems that ignore the use, interests, and values of older adults while reinforcing or amplifying existing disadvantages (Coiro, 2003). In addition, this bias could influence or reduce the products or services targeted at older individuals (Díaz et al., 2018).
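The kind of controlled probe Díaz et al. (2018) describe can be sketched in a few lines: score pairs of sentences that are identical except for the age term and compare the outputs. The example below applies this idea to the off-the-shelf VADER sentiment model in NLTK; the template sentences are invented for illustration, and the exact scores will depend on the model and lexicon used, so this is a rough sketch of the audit method rather than a reproduction of their study.

```python
# Template-sentence probe for age-related bias in a sentiment model.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER
scorer = SentimentIntensityAnalyzer()

# Hypothetical templates; only the age word changes between comparisons.
templates = [
    "The {} man asked a question at the meeting.",
    "A {} woman joined the fitness class.",
]

for template in templates:
    for age_word in ("young", "old"):
        sentence = template.format(age_word)
        score = scorer.polarity_scores(sentence)["compound"]
        print(f"{score:+.3f}  {sentence}")
# If the only difference between two sentences is the age term, any
# systematic gap in the compound score reflects bias in the model or its
# training data rather than in the described interaction.
```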

AI systems can produce and reinforce ageist biases through multiple pathways. Addressing bias requires a deeper understanding of how ageism fits into a broader cycle of injustice as illustrated in Figure 2. Existing stereotypes of older adults as unhealthy and/or technologically incompetent (Representation) affect the assumptions made about older adults, which can lead to the exclusion of older adults from research and design processes (Design). Ageist stereotypes are further reinforced by the fact that new information technologies for older adults mostly focus on health and health care management (Design/Technology). The digital divide (Allocation), together with patterns in existing applications, results in data sets that inaccurately represent healthy older adults (Technology). These biased data sets incentivize further technology development that primarily focuses on health care needs (Design). The limited availability of digital technologies serving other needs, interests, and aspirations of older adults can further entrench the digital divide (Allocation).

Figure 2. How cycles of injustice in digital technologies result in digital ageism.


In this way, new systems reinforce inequality and magnify societal exclusion for subsets of the population who are considered a "digital underclass" (Petersen & Bertelsen, 2017), made up primarily of older, poor, racialized, and marginalized groups. This raises questions about how older adults are included and viewed in our increasingly digital world, and how the societal structures that enforce ageism are represented in AI systems. There is a pressing need to address these foundational questions, especially with the surge of digital technology use during the COVID-19 pandemic (De' et al., 2020).

Ethical and Legal Implications of Ageism in AI

Ageism is an overlooked bias within AI ethics. This is evident from our search of the AI Ethics Guidelines Global Inventory (AlgorithmWatch, 2021), a repository that compiles documents about how AI systems can conduct ethical automated decision making. Most of these guidelines highlight fairness as a key governing ethical principle; fairness typically incorporates considerations of equity and justice. The repository contains 146 documents created by government, private, civil society, and international organizations that are accessible and available in English. The research team searched these documents for the term ageism and similar concepts such as age bias, age, old/older, senior(s), and elderly. We found that only 34 (23.3%) of these documents mention ageism as a bias, for a total of 53 unique mentions. Of these, 19 (54.7%) merely listed "age" as part of a general list of protected characteristics. For example, the UNI Global Union Top 10 Principles for Ethical AI (2018) states, "In the design and maintenance of AI, it is vital that the system is controlled for negative or harmful human bias, and that any bias—be it gender, race, sexual orientation, age, etc.—is identified and is not propagated by the system" (p. 8). Only 12 (8.2%) of the examined documents provided slightly more context about bias against older adults, often no more than one or two sentences. For example, the Academy of Royal Medical Colleges' Artificial Intelligence in Healthcare report (2019) states, "It might be argued that the level of regulation should be varied according to the risks—for example psychiatric patients, the young and the elderly [sic] might be at particular risk from any 'bad advice' from digitised systems" (p. 28).
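In essence, the first pass of the document screen described above is a keyword search over the downloaded guideline texts, along the lines of the sketch below. The folder name and term list are illustrative placeholders (the actual review also involved reading each match in context), so this is only a rough outline of the screening step, not the team's exact procedure.

```python
# Keyword screen over a folder of guideline documents saved as plain text.
import re
from pathlib import Path

# Illustrative term list; "age" alone will over-match and needs manual review.
TERMS = ["ageism", "age bias", "age", "old", "older", "senior", "seniors", "elderly"]
pattern = re.compile(r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b", re.IGNORECASE)

def mentions(path: Path) -> int:
    """Count age-related term occurrences in one document."""
    return len(pattern.findall(path.read_text(errors="ignore")))

corpus_dir = Path("guidelines/")  # hypothetical folder of downloaded documents
counts = {p.name: mentions(p) for p in corpus_dir.glob("*.txt")}
flagged = {name: n for name, n in counts.items() if n > 0}
print(f"{len(flagged)} of {len(counts)} documents mention an age-related term")
```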

Ultimately, our overview of these documents demonstrates that ageism directed toward older adults is insufficiently recognized as a specific and unique ethical implication of AI in the current literature. To ensure that AI is developed in an ethically defensible manner, such that it promotes equity and rejects unjust bias, this implication ought to be explicitly recognized and addressed. As indicated in previous sections of this Forum article, failing to appropriately involve and accurately represent older people leads to a digital divide that may contribute to further preventable inequities.

One significant concern about failing to respond to ageism in AI relates to the presence of ageism in AI-powered hiring systems. Consider, for example, an AI-powered resume-screening tool that excludes job candidates based on their date of graduation. In 2017, AI-driven hiring platforms including Jobr were under investigation for prohibiting applicants from selecting a graduation year or first job before 1980 (Ajunwa, 2019). Similarly, an algorithm may prioritize young, male applicants to reflect the current employee composition of an organization in an attempt to emulate the employer's past hiring behavior, and in doing so, perpetuate preexisting biases (Kuei & Mixon, 2020). From an ethical and legal perspective, providing people with a fair opportunity is often considered an important part of what it means to treat people equally and justly (UN, 1945). Denying suitable individuals the ability to pursue a career opportunity on the basis of characteristics (e.g., graduation year, gender) that have no bearing on ability directly opposes the principle of fair equality of opportunity.
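To illustrate how such a facially neutral screening rule operates as an age proxy, the following hypothetical sketch encodes a graduation-year cutoff of the kind reported above. The class, names, and cutoff value are invented for illustration; this is not the code of any real platform.

```python
# Hypothetical resume screen: filtering on graduation year is formally
# neutral but acts as a near-perfect proxy for applicant age.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    graduation_year: int

def passes_screen(applicant: Applicant, earliest_year: int = 1980) -> bool:
    """Reject anyone whose degree predates the cutoff, i.e., older workers."""
    return applicant.graduation_year >= earliest_year

applicants = [Applicant("A", 1975), Applicant("B", 2005)]
print([a.name for a in applicants if passes_screen(a)])  # ['B']
# Age is never referenced explicitly, yet the rule excludes applicants above
# a certain age, which is the essence of disparate impact.
```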

The widespread use of AI tools to make recommendations with transformative consequences for individuals and society has given rise to an "urgent set of legal questions and concerns" (Presser et al., 2021). These concerns include security, fairness, bias and discrimination, legal personhood, intellectual property, privacy and data protection, and liability for damages (Rodrigues, 2020). There is growing recognition of the need for "normative frameworks for the development and deployment of AI" (Martin-Bariteau & Scassa, 2021). Regulatory governance frameworks are important in preventing and mitigating harm occasioned by the deployment of AI algorithms and can outline the legal recourse available to an aggrieved individual or entity. In the development context, regulatory governance frameworks provide guidance for the ethical development and deployment of AI (including recognizing and minimizing embedded bias).

In recent years, a wave of lawsuits has targeted major employers like Google and LinkedIn that used software algorithms to direct internet job advertisements to younger applicants, excluding applicants older than 40 years (Ajunwa, 2018). There have also been multiple lawsuits and settlements based on Facebook's paid advertisement platform, which enabled advertisers to micro-target ads to exclude users based on protected classes, such as age, in violation of federal and state civil rights laws (American Civil Liberties Union, 2019). These discriminatory advertising practices prevented older people from seeing ads for job opportunities, effectively denying them the opportunity for employment.

Stakeholders and regulators face unique challenges in AI regulation and governance. There is no uniform global legal code for AI governance. International sources of AI law may be persuasive in other jurisdictions but will not be binding. This means that lawmakers may look internationally for guidance on how other states or countries have navigated the challenges posed by the proliferation of AI, but will ultimately have to develop and implement regulatory systems that accord with their own legal structures. For example, the proposed Canadian Digital Charter Implementation Act (2020) was modeled on the European Union’s General Data Protection Regulation (2016).

Developing laws and regulations for technology poses global challenges, particularly regarding applications within and across national boundaries. For example, in the Canadian context, governments and regulators must grapple with regulating AI within a federal and constitutional setting (Martin-Bariteau & Scassa, 2021) because powers over health care and human rights are shared between the federal and provincial governments. As a result, "[c]oherent, consistent and principled AI regulation in Canada [necessitates] considerable federal-provincial co-operation as well as strong inter-agency collaboration—both that may be difficult to count on" (Martin-Bariteau & Scassa, 2021). Beyond jurisdictional issues, governments have sought to balance competing regulatory interests, including the need to protect the public and the need to exercise regulatory restraint so as not to stifle innovation (Martin-Bariteau & Scassa, 2021). Adding to this challenge, some AI algorithms are proprietary and thus are afforded intellectual property protections. These protections have precluded aggrieved individuals (including criminal defendants) from having access to and examining the AI algorithm (see State v Loomis 881 N.W.2d 749 (Wis. 2016) 754 (US)). The AI algorithms behind many social, political, and legal applications have thus used intellectual property protections to avoid legal and research scrutiny.

Transparency and careful examination for age-related bias (such as through research) are required given the complexity of AI systems; without deeper investigation, we cannot assess from a legal standpoint whether these systems are perpetuating the ageism that is pervasive in society. Ultimately, the concern is that AI will simply "reproduce existing hierarchies and vulnerabilities of social relations…" with regard to age and in a manner that avoids scrutiny through obscurity and lack of transparency (Martin-Bariteau & Scassa, 2021). Even with its widespread adoption, there is very little training, support, auditing, or oversight of AI-driven activities from a regulatory or legal perspective (Presser et al., 2021), and Canada's current AI regulatory regime is lagging (Martin-Bariteau & Scassa, 2021). With the regulation of AI in Canada in its relative infancy, it remains unclear whether existing legal frameworks are sufficient to protect, or offer any meaningful recourse to, those who are victims of ageist bias arising from the use of AI.

Looking Ahead

Although much of the discussion about AI and bias has focused on its potential to cause harm, we are optimistic that AI can be developed to mitigate human bias. In the area of employment, for example, new AI-based hiring platforms can help overcome human recruiter bias by detecting qualified candidates who may be overlooked in traditional hiring processes that rely on resumes and cover letters (Wiggers, 2021). More research on developing technologies with older adults is also being conducted (Chu et al., 2021; Harrington et al., 2018), but there is a need for continued analysis of the process to address aspects of ageism (Mannheim et al., 2019). Additionally, mitigating biases in health care is an area gaining more attention. In this context, validating the representativeness of the data set has been suggested as the best approach to combat algorithmic bias (Ho et al., 2020). Looking ahead, we remain optimistic that the bias of digital ageism can be acknowledged and addressed through a multifaceted approach. First and foremost, from the lens of critical gerontology, it is crucial to include older adults throughout the pipeline when developing AI systems. This will require addressing structural issues such as access, time, training, and the means to participate in research and development, as well as the existing funding constraints of research grants and technology development (Grenier et al., 2021). Next, an interdisciplinary approach is warranted, in which gerontologists, social scientists, philosophers, legal scholars, ethicists, clinicians, and technologists work collaboratively and lend their expertise to address digital ageism. An interdisciplinary and critical examination of age as a bias is necessary to capture the full picture for effective AI deployment, especially in the context of the COVID-19 pandemic, where, in some jurisdictions, age was the sole criterion for health care access and lifesaving treatments (WHO & UN, 2021).
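As one concrete, simplified illustration of the representativeness check mentioned above, the sketch below compares the age distribution of a hypothetical training set against a population benchmark before any model is fit. The age bands, benchmark shares, and sample ages are invented placeholders; a real audit would use census or registry data appropriate to the deployment population.

```python
# Compare a training set's age distribution to a population benchmark.
import pandas as pd

bands = [(0, 17), (18, 39), (40, 64), (65, 120)]
benchmark = {"0-17": 0.20, "18-39": 0.30, "40-64": 0.32, "65+": 0.18}  # hypothetical

def band_label(age: int) -> str:
    for lo, hi in bands:
        if lo <= age <= hi:
            return "65+" if lo == 65 else f"{lo}-{hi}"
    return "unknown"

# Hypothetical training data; in practice this would be the model's data set.
train = pd.DataFrame({"age": [25, 34, 41, 52, 29, 38, 45, 67, 30, 55]})
observed = train["age"].map(band_label).value_counts(normalize=True)

for band, expected in benchmark.items():
    gap = observed.get(band, 0.0) - expected
    print(f"{band:>6}: dataset {observed.get(band, 0.0):.2f} "
          f"vs benchmark {expected:.2f} (gap {gap:+.2f})")
# A large negative gap for the 65+ band would flag under-representation of
# older adults and prompt re-sampling or additional data collection.
```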

There is an urgency and opportunity to better understand and address digital ageism. To date, the AI developed may be insufficient to meet the needs of older adults and may prove to be exclusionary and discriminatory. However, there is also an opportunity to develop programs and mechanisms that include older adults and to delineate what is fair and ethical with regard to AI. This is especially the case given the sociocultural shift where more and more people will, and are expected to, incorporate technology into their lives to remain connected to our technology-enabled world. Projections show that older adults are likely to make up the largest proportion of technology (e.g., health related, information and communication) consumers in the future as today’s tech-savvy adults grow older (Foskey, 2001; Kanstrup & Bygholm, 2019; Rosales & Fernandez-Ardevol, 2019). The COVID-19 pandemic was a significant accelerator of technology use and uptake for day-to-day needs (e.g., online groceries, shopping, health care) and social communication. Such ubiquitous use of technology (De’ et al., 2020) indicates that there is an increased number of people who are likely to be both excluded from these means of communication and affected by implicit biases in current AI systems. Together, these conditions underscore the need for more research on digital ageism.

For future directions, our research team will establish a multiphase research program to further explore the extent of ageism in AI and develop insights about the potential for age-related bias in AI applications to perpetuate social inequity for older adults. We aim to expand on the described conceptual framework of how older adults experience ageism in and through AI, to raise broad awareness of this bias and contribute to a more socially conscious approach to AI development. As the current younger generation has grown up with widespread access to information and communication technologies like computers, social media, and the internet (referred to as "digital natives" [International Telecommunication Union, 2013; UN]), it is expected that these tech-savvy end-users will have greater expectations for fair and just AI applications when they are older adults in the future. To meet these future expectations, our interdisciplinary team aims to create data sets with greater representation of older adults for fair algorithm development of AI technologies such as facial recognition. Furthermore, we will develop partnerships with older adults' organizations, governments, AI researchers and developers, and other stakeholders to shape legal and social policy with the aim of reducing technology-driven exclusion and inequities for older adults.

Conclusions

Ageism is a bias that currently remains understudied in AI research. The exclusion of older adults from technology development maintains a broader cycle of injustice, including societal ageist attitudes, and exacerbates the digital divide. Thus, we urge future AI development and research to consider and include digital ageism as a concept in the research and policy agenda toward building fair and ethical AI.

Funding

The work was led by Dr. C. H. Chu (Principal Investigator) and funded by the Social Sciences and Humanities Research Council of Canada (grant number 00362).

Footnotes

Conflict of Interest

None declared.

References

1. Abbey R, Hyde S. No country for older people? Age and the digital divide. Journal of Information, Communication and Ethics in Society. 2009;7(4):225–242. doi: 10.1108/14779960911004480.
2. Academy of Royal Medical Colleges. Artificial intelligence in healthcare. 2019. https://www.aomrc.org.uk/reports-guidance/artificial-intelligence-in-healthcare/
3. Ajunwa I. How artificial intelligence can make employment discrimination worse. The Independent. 2018. https://suindependent.com/artificial-intelligence-can-make-employment-discrimination-worse/
4. Ajunwa I. Beware of automated hiring. The New York Times. 2019. https://www.nytimes.com/2019/10/08/opinion/ai-hiring-discrimination.html
5. AlgorithmWatch. AI ethics guidelines global inventory. 2021. https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory/
6. American Civil Liberties Union. Facebook agrees to sweeping reforms to curb discriminatory ad targeting practices. 2019. https://www.aclu.org/press-releases/facebook-agrees-sweeping-reforms-curb-discriminatory-ad-targeting-practices
7. Anderson M. Tech adoption climbs among older adults. Washington, DC: Pew Research Center for Internet & Technology; 2017. https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2017/05/PI_2017.05.17_Older-Americans-Tech_FINAL.pdf
8. Andrey S, Masoodi MJ, Malli N, Dorkenoo S. Mapping Toronto's digital divide. 2021. https://brookfieldinstitute.ca/mapping-torontos-digital-divide/
9. Anguera JA, Gunning FM, Arean PA. Improving late life depression and cognitive control through the use of therapeutic video game technology: A proof-of-concept randomized trial. Depression and Anxiety. 2017;34(6):508–517. doi: 10.1002/da.22588.
10. Angwin J, Kirchner L, Larson J, Mattu S. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica; 2016.
11. Ayalon L, Tesch-Römer C, editors. Contemporary perspectives on ageism. Vol. 19. Springer International Publishing; 2018.
12. Ball C, Francis J, Huang K-T, Kadylak T, Cotten SR, Rikard RV. The physical–digital divide: Exploring the social gap between digital natives and physical natives. Journal of Applied Gerontology. 2017;38(8):1167–1184. doi: 10.1177/0733464817732518.
13. Baum F, Newman L, Biedrzycki K. Vicious cycles: Digital technologies and determinants of health in Australia. Health Promotion International. 2014;29(2):349–360. doi: 10.1093/heapro/das062.
14. Binstock RH. The Donald P. Kent memorial lecture. The aged as scapegoat. The Gerontologist. 1983;23(2):136–143. doi: 10.1093/geront/23.2.136.
15. Brown P. Artificial intelligence: The fastest moving technology. New York Law Journal. 2020. https://www.law.com/newyorklawjournal/2020/03/09/artificial-intelligence-the-fastest-moving-technology/
16. Butler RN. Age-ism: Another form of bigotry. The Gerontologist. 1969;9(4):243–246. doi: 10.1093/geront/9.4_part_1.243.
17. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science. 2017;356(6334):183–186. doi: 10.1126/science.aal4230.
18. Charness N, Boot WR. Aging and information technology use: Potential and barriers. Current Directions in Psychological Science. 2009;18(5):253–258. doi: 10.1111/j.1467-8721.2009.01647.x.
19. Charness N, Bosman EA. Human factors and design for older adults. In: Birren J, Schaie W, editors. Handbook of the psychology of aging. 3rd ed. Academic Press; 1990. pp. 446–464.
20. Charness N, Bosman EA. Age and human factors. In: Craik FIM, Salthouse TA, editors. The handbook of aging and cognition. Erlbaum; 1992. pp. 495–551.
21. Charness N, Yoon JS, Pham H. Designing products for older consumers: A human factors perspective. In: The aging consumer. Routledge; 2020. pp. 215–234.
22. Chen IY, Szolovits P, Ghassemi M. Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics. 2019;21(2):E167–E179. doi: 10.1001/amajethics.2019.167.
23. Chu CH, Biss RK, Cooper L, Quan AML, Matulis H. Exergaming platform for older adults residing in long-term care homes: User-centered design, development, and usability study. JMIR Serious Games. 2021;9(1):e22370. doi: 10.2196/22370.
24. Coiro J. Reading comprehension on the Internet: Expanding our understanding of reading comprehension to encompass new literacies. The Reading Teacher. 2003;56(5):458–464. https://www.jstor.org/stable/20205224
25. Cotten SR, McCullough BM, Adams RG. Technological influences on social ties across the lifespan. 2011. https://www.springerpub.com/
26. Crawford K. The trouble with bias—NIPS 2017 keynote. The Artificial Intelligence Channel; 2017. https://www.youtube.com/watch?v=fMym_BKWQzk
27. Cronin B. The digital divide: A complex and dynamic phenomenon. The Information Society. 2003;19(4):315–326.
28. Czaja SJ, Boot WR, Charness N, Rogers WA, Sharit J. Improving social support for older adults through technology: Findings from the PRISM randomized controlled trial. The Gerontologist. 2018;58(3):467–477. doi: 10.1093/geront/gnw249.
29. Danks D, London AJ. Algorithmic bias in autonomous systems. In: IJCAI International Joint Conference on Artificial Intelligence; 2017. pp. 4691–4697.
30. Dastin J. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters; 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
31. Datta A, Tschantz MC, Datta A. Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies. 2015;2015(1):92–112. doi: 10.1515/popets-2015-0007.
32. De' R, Pandey N, Pal A. Impact of digital surge during COVID-19 pandemic: A viewpoint on research and practice. International Journal of Information Management. 2020;55:102171. doi: 10.1016/j.ijinfomgt.2020.102171.
33. Decker V, Valenti M, Montoya V, Sikorskii A, Given CW, Given BA. Maximizing new technologies to treat depression. Issues in Mental Health Nursing. 2019;40(3):200–207. doi: 10.1080/01612840.2018.1527422.
34. Díaz M, Johnson I, Lazar A, Piper AM, Gergle D. Addressing age-related bias in sentiment analysis. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems; Montreal, Quebec, Canada. 2018. pp. 1–14.
35. Digital Charter Implementation Act. Bill C-11, 2nd Session, 43rd Parliament. 2020. https://parl.ca/DocumentViewer/en/43-2/bill/C-11/first-reading
36. Foskey R. Technology and older people: Overcoming the great divide. 2001. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.199.6890&rep=rep1&type=pdf
37. Friedman B, Nissenbaum H. Bias in computer systems. ACM Transactions on Information Systems. 1996;14(3):330–347.
38. General Data Protection Regulation. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119 at art. 3. 2016. https://eur-lex.europa.eu/eli/reg/2016/679/oj
39. Grenier A, Gontcharov I, Kobayashi K, Burke E. Critical knowledge mobilization: Directions for social gerontology. Canadian Journal on Aging. 2021;40(2):344–353. doi: 10.1017/S0714980820000264.
40. Harerimana B, Forchuk C, O'Regan T. The use of technology for mental healthcare delivery among older adults with depressive symptoms: A systematic literature review. International Journal of Mental Health Nursing. 2019;28(3):657–670. doi: 10.1111/inm.12571.
41. Harrington CN, Wilcox L, Connelly K, Rogers W, Sanford J. Designing health and fitness apps with older adults: Examining the value of experience-based co-design. In: Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare; New York, NY, USA. 2018. pp. 15–24.
42. Hehman JA, Bugental DB. Responses to patronizing communication and factors that attenuate those responses. Psychology and Aging. 2015;30(3):552–560. doi: 10.1037/pag0000041.
43. Hess TM, Auman C, Colcombe SJ, Rahhal TA. The impact of stereotype threat on age differences in memory performance. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences. 2003;58(1):3–11. doi: 10.1093/geronb/58.1.p3.
44. Ho C, Martin M, Ratican S, Teneja D, West S. How to mitigate algorithmic bias in healthcare. MedCityNews. 2020. https://medcitynews.com/2020/08/how-to-mitigating-algorithmic-bias-in-healthcare/
45. Howard A, Borenstein J. The ugly truth about ourselves and our robot creations: The problem of bias and social inequity. Science and Engineering Ethics. 2018;24(5):1521–1536. doi: 10.1007/s11948-017-9975-2.
46. Hurling R, Catt M, Boni MD, Fairley BW, Hurst T, Murray P, Richardson A, Sodhi JS. Using internet and mobile phone technology to deliver an automated physical activity program: Randomized controlled trial. Journal of Medical Internet Research. 2007;9(2):e7. doi: 10.2196/jmir.9.2.e7.
47. International Telecommunication Union. Measuring the information society. 2013. https://www.itu.int/en/ITU-D/Statistics/Documents/publications/mis2013/MIS2013_without_Annex_4.pdf
48. Irvine AB, Gelatt VA, Seeley JR, Macfarlane P, Gau JM. Web-based intervention to promote physical activity by sedentary older adults: Randomized controlled trial. Journal of Medical Internet Research. 2013;15(2):e19. doi: 10.2196/jmir.2158.
49. Kanstrup AM, Bygholm A. The lady with the roses and other invisible users: Revisiting unused data on nursing home residents in living labs. In: Neves BB, Vetere F, editors. Ageing and digital technology: Designing and evaluating emerging technologies for older adults. Springer; 2019. pp. 17–33.
50. König R, Seifert A, Doh M. Internet use among older Europeans: An analysis based on SHARE data. Universal Access in the Information Society. 2018;17:621–633. doi: 10.1007/s10209-018-0609-5.
51. Kuei J, Mixon M. Legal risks of using artificial intelligence in hiring. The Center for Association Leadership; 2020. https://www.asaecenter.org/resources/articles/an_plus/2020/may/legal-risks-of-using-artificial-intelligence-in-hiring
52. Lagacé M, Laplante J, Charmarkeh H, Tanguay A. How ageism contributes to the second-level digital divide. Journal of Technologies and Human Usability. 2015;11(4):1–13. doi: 10.18848/2381-9227/cgp/v11i04/56439.
53. Mannheim I, Schwartz E, Xi W, Buttigieg SC, McDonnell-Naughton M, Wouters EJM, van Zaalen Y. Inclusion of older adults in the research and design of digital technology. International Journal of Environmental Research and Public Health. 2019;16(19):3718. doi: 10.3390/ijerph16193718.
54. Margetts H, Dorobantu C. Rethink government with AI. Nature. 2019;568(7751):163–165. doi: 10.1038/d41586-019-01099-5.
55. Martin-Bariteau F, Scassa T. Artificial intelligence and the law in Canada. LexisNexis Canada; 2021.
56. O'Keefe C, Flynn C, Cihon P, Leung J, Garfinkel B, Dafoe A. The windfall clause: Distributing the benefits of AI for the common good. In: AIES 2020—Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society; New York, NY, USA. 2020. pp. 327–331.
57. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342.
58. Petersen LS, Bertelsen P. Equality challenges in the use of eHealth: Selected results from a Danish citizens survey. Studies in Health Technology and Informatics. 2017;245:793–797. doi: 10.3233/978-1-61499-830-3-793.
59. Presser J, Beatson J, Chan G. Litigating artificial intelligence. Emond Montgomery Publications Limited; 2021. https://emond.ca/ai21
60. Rodrigues R. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology. 2020;4:100005. doi: 10.1016/j.jrt.2020.100005.
61. Rosales A, Fernández-Ardèvol M. Structural ageism in big data approaches. Nordicom Review. 2019;40(s1):51–64. doi: 10.2478/nor-2019-0013.
62. Rosales A, Fernández-Ardèvol M. Ageism in the era of digital platforms. Convergence. 2020;26(5-6):1074–1087. doi: 10.1177/1354856520930905.
63. Russell SJ, Norvig P, Davis E. Artificial intelligence: A modern approach. 3rd ed. Upper Saddle River, NJ: Prentice Hall; 2010.
64. Simonite T. The best algorithms still struggle to recognize Black faces. Wired; 2019. https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/
65. Srinuan C, Bohlin E. Understanding the digital divide: A literature survey and ways forward. In: 22nd European Regional Conference of the International Telecommunications Society (ITS2011); 2011. https://www.econstor.eu/handle/10419/52191
66. Tomasino KN, Lattie EG, Ho J, Palac HL, Kaiser SM, Mohr DC. Harnessing peer support in an online intervention for older adults with depression. The American Journal of Geriatric Psychiatry. 2017;25(10):1109–1119. doi: 10.1016/j.jagp.2017.04.015.
67. United Nations. Charter of the United Nations. 1945. https://www.un.org/en/about-us/un-charter/full-text
68. Vervaecke D, Meisner BA. Caremongering and assumptions of need: The spread of compassionate ageism during COVID-19. The Gerontologist. 2021;61(2):159–165. doi: 10.1093/geront/gnaa131.
69. White H, McConnell E, Clipp E, Branch LG, Sloane R, Pieper C, Box TL. A randomized controlled trial of the psychosocial impact of providing internet training and access to older adults. Aging & Mental Health. 2002;6(3):213–221. doi: 10.1080/13607860220142422.
70. Whittlestone J, Nyrup R, Alexandrova A, Dihal K, Cave S. Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. Nuffield Foundation; 2019. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf
71. Wiggers K. Plum uses AI to hire people 'that never would have been discovered through a traditional hiring process.' VentureBeat; 2018. https://venturebeat.com/2018/06/13/plum-uses-ai-to-hire-people-that-never-would-have-been-discovered-through-a-traditional-hiring-process/
72. Wilkinson J, Ferraro KF. Thirty years of ageism research. In: Nelson TD, editor. Ageism. The MIT Press; 2002.
73. World Health Organization & United Nations. Ageism is a global challenge: UN. 2021. https://www.who.int/news/item/18-03-2021-ageism-is-a-global-challenge-un
74. Zhavoronkov A, Mamoshina P, Vanhaelen Q, Scheibye-Knudsen M, Moskalev A, Aliper A. Artificial intelligence for aging and longevity research: Recent advances and perspectives. Ageing Research Reviews. 2019;49:49–66. doi: 10.1016/j.arr.2018.11.003.
