Science and Engineering Ethics. 2022 Oct 27;28(6):49. doi: 10.1007/s11948-022-00397-y

Equity in AgeTech for Ageing Well in Technology-Driven Places: The Role of Social Determinants in Designing AI-based Assistive Technologies

Giovanni Rubeis 1, Mei Lan Fang 1,2, Andrew Sixsmith 1,3
PMCID: PMC9613787  PMID: 36301408

Abstract

AgeTech involves the use of emerging technologies to support the health, well-being and independent living of older adults. In this paper we focus on how AgeTech based on artificial intelligence (AI) may better support older adults to remain in their own living environment for longer, provide social connectedness, support wellbeing and mental health, and enable social participation. In order to assess and better understand the positive as well as negative outcomes of AI-based AgeTech, a critical analysis of ethical design, digital equity, and policy pathways is required. A crucial question is how AI-based AgeTech may drive practical, equitable, and inclusive multilevel solutions to support healthy, active ageing.

In our paper, we aim to show that a focus on equity is key for AI-based AgeTech if it is to realize its full potential. We propose that equity should not just be an extra benefit or minimum requirement, but the explicit aim of designing AI-based health tech. This means that social determinants that affect the use of or access to these technologies have to be addressed. We will explore how complexity management as a crucial element of AI-based AgeTech may potentially create and exacerbate social inequities by marginalising or ignoring social determinants. We identify bias, standardisation, and access as the main ethical issues in this context and subsequently make recommendations as to how inequities that stem from AI-based AgeTech can be addressed.

Introduction

AgeTech refers to the use of technologies and services to support aging. The new generation of AgeTech encompasses emerging and advanced technologies in areas such as artificial intelligence (AI), robotics, machine learning, e-health, and mobile technologies to support the health, independence and well-being of older people (Sixsmith, 2021). AgeTech has been explored in terms of supporting older people to remain at home for longer (Verloo et al., 2020), to provide social connectedness (Baez et al., 2019), support wellbeing (Astell et al., 2016) and mental health (Andrews et al., 2019) and connect the older person to their wider community (Fleming et al., 2018). However, the use of technology to provide support for older people to age well in place may also bring with it the potential to increase inequalities in access to health and health outcomes for vulnerable and marginalised people. This is especially the case when the needs and resources of individuals as well as groups are not fully acknowledged, when their characteristics are reduced to oversimplified or stereotypical narratives, and when barriers to equitable access exist. Thus, technology design and its functional requirements necessitate deeper insight into the range of diverse factors that may shape its context of use.

Research has indicated a crucial need to fully understand for whom and in what ways emerging and advanced technology interventions such as AI can have positive benefits for older people, or further exacerbate experiences of marginalisation (Sixsmith, 2006). A critical discussion of ethical design, digital equity, and policy pathways is required if we are to fully understand the positive and negative, intended and unintended consequences of AI as an AgeTech solution to drive practical, equitable, and inclusive multilevel solutions to support healthy, active ageing. Regulations and the accompanying policies are without a doubt crucial instruments when it comes to averting harm to vulnerable groups, safeguarding their civil rights, and avoiding marginalisation. However, apart from the rather passive or reactive view that regulations and policies should act as a protective shield, a more active approach could be taken with regard to technology design and implementation. For AI-based AgeTech to realise its full potential, a focus on equity is key. Equity – as it pertains to the changing digital landscape – is defined in this paper as fairness and equality of access to and use of AgeTech regardless of the myriad social characteristics a person holds, which, combined, may create social disadvantages. As a mechanism to help ensure equity by this definition, AgeTech can be considered meaningfully addressed when three conditions are met (Fang, 2018; Fang et al., 2019): (1) assessing the intended and unintended, positive and negative consequences of AgeTech; (2) considering who is most likely to benefit from AgeTech; and (3) responding to the challenges of those who experience the most significant barriers to access and use.

To help ensure that AI-based AgeTech truly supports an independent lifestyle and improves the quality of life of older adults, these technologies need to be designed in a way that enables agency as well as social participation. Therefore, equitable access has to be a key feature of AI-based AgeTech. The precondition for this is to acknowledge the diversity of older adults and to tailor technologies to their multifaceted needs and resources. Equity should thus be an explicit aim of designing AI-based AgeTech, and not just seen as an extra benefit or minimum requirement. This approach not only aims at protecting older adults against the possible negative effects of AI-based AgeTech. It also actively addresses the social determinants of health, defined by the WHO as non-medical factors that influence health outcomes by shaping the daily life and social situation of a person (WHO, 2010) – such as age, ethnicity, gender, socioeconomic status, and level of education – which may create experiences of marginalisation. We will explore how a particular facet of AI systems, complexity management, can potentially create and exacerbate social inequities, and subsequently make recommendations as to how inequities that stem from AI-based AgeTech can be addressed. Although clinical as well as legal issues may also arise in this context, our specific focus is on the primarily ethical aspects connected to complexity management.

Each of the aforementioned social determinants can be a contributing factor to marginalisation, yet in reality, these determinants seldom shape inequity as sole factors. More often, they are mutually dependent. For example, lower education and belonging to a social minority often correlate with lower socioeconomic status, which in turn can lead to poorer health outcomes in old age. Fang et al. (2019) have identified this as a “wicked” problem, in view of three key principles of intersectionality (Hankivsky, 2014):

1) Single traits such as age, gender, or ethnicity are insufficient when it comes to understanding individual experiences – as reducing individuals to a single trait results in the oversimplification of their lived accounts.

2) Social determinants are not objective or fixed categories, but fluid and flexible social constructs that vary depending on the contexts of time and place and are shaped by social processes, structures, and power relations.

3) Social justice and equity are not merely add-ons, but have to be considered as crucial aspects when it comes to policy-making.

Complexity, Complexity Management and AI

A main ethical issue when it comes to AI-based AgeTech in the light of interconnected social determinants is complexity management. AI technologies are built to detect patterns in large data sets and derive algorithms for predicting future events or controlling processes. In AgeTech, AI technologies face the inherent complexity and unpredictability of everyday behaviors, situations, and contexts. For AI systems to operate within current technological limitations, complexity has to be managed, which often means reducing complex aspects to simple factors. Complexity management thus means aligning the data with a given framework for processing it, which often entails standardising, decontextualising, and quantifying the data. This becomes a problem when qualitative, context-rich data is reduced to manageable data formats and thereby loses some of its crucial features. This is not merely a technical but primarily a social and ethical issue, since it affects the way we deal with diversity in society and with the specific needs and resources of individuals. Complexity in this context refers not only to different contexts of use, but also to the variety of user characteristics. Here lies the connection between complexity and vulnerability: the diversity of users regarding the aforementioned social determinants causes complexity, which in turn poses a problem that AI-based systems try to resolve by reducing complexity. Therefore, AI-based AgeTech aimed at supporting the personalisation of services may inadvertently do the exact opposite and thus pose a risk to vulnerable groups.
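
To make this loss of context concrete, consider the following minimal sketch (our own illustration, not taken from any cited system; all field names and the feature schema are hypothetical). It shows how flattening a context-rich record into the fixed numeric format a model expects silently discards qualitative information:

```python
# Illustrative sketch: how "complexity management" can strip context
# when preparing data for an AI pipeline. The schema is hypothetical.

from dataclasses import dataclass

@dataclass
class UserRecord:
    age: int
    gait_speed_m_s: float
    lives_alone: bool
    # Context-rich, qualitative information from assessment:
    care_notes: str          # e.g., "avoids stairs since fall in 2021"
    cultural_context: str    # e.g., "shares meals with neighbours daily"

def to_feature_vector(record: UserRecord) -> list[float]:
    """Flatten a record into the fixed numeric schema a model expects.

    The qualitative fields (care_notes, cultural_context) have no slot
    in the schema and are silently discarded -- exactly the loss of
    context described above.
    """
    return [float(record.age), record.gait_speed_m_s, float(record.lives_alone)]

record = UserRecord(78, 0.7, True,
                    care_notes="avoids stairs since fall in 2021",
                    cultural_context="shares meals with neighbours daily")
print(to_feature_vector(record))  # -> [78.0, 0.7, 1.0]; the notes are gone
```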

In order to analyse ethical issues connected to complexity management, we suggest examining the way social determinants are represented in the data that is used, how social determinants are taken into consideration when processing data, and how social determinants may shape the use of technologies that operationalise the data. We therefore identify three dimensions of ethical issues: the inappropriate representation of social determinants may produce flawed or stereotypical concepts of user characteristics, i.e. bias; the context-insensitive processing of data, i.e. an oversimplifying or reductionist approach, may lead to the standardisation of user characteristics; and ignoring social determinants in technology use may negatively affect access to AI-based AgeTech.

Complexity management thus has three crucial aspects which we explore in more detail in the following sections and, subsequently, recommend strategies for dealing with the issues at hand:

  • Bias – bias in relation to certain social characteristics may be introduced into AI systems at different stages of the development and innovation process, such as in the AI training data that is used or in decisions made regarding the commercialisation and marketing of products and services.

  • Standardisation – the problematic aspect here is the underlying assumption that objective parameters can be defined and operationalised to identify and respond to ambiguous behaviors and situations.

  • Access – some AI-based technologies are designed in a way that makes them difficult for certain individuals and groups to use.

Complexity Management and Bias

Bias is one of the most fiercely debated issues in AI-based technologies (Challen et al., 2019; Mittelstadt & Floridi, 2016; Neven, 2015; Safdar et al., 2020; Wong, 2020). Bias can be a result of complexity management in two ways. Firstly, algorithm-based systems strongly rely on the training data used in their development. These systems learn by processing large amounts of training data, detecting connections and patterns, and inferring general rules. The quality of the training data thus determines the quality of the algorithm-based system. The training data is usually taken from large cohort studies (Jones et al., 2018), in which certain social groups are notoriously underrepresented. As a consequence, the bias within the training data is transferred into the system, leading to a biased algorithm. The needs and resources of various social groups are thus not considered by the system.

Secondly, AI-based systems and hardware devices are produced at a certain scale in order to be cost-efficient. In order to make a product suitable for a critical mass of users, complexity management regarding concepts of old age is necessary. By way of this complexity management, the risk of so-called age scripts arises: stereotypical concepts of what it means to be older, and of the needs and resources of older adults, become inscribed into the technology.

Training Data

AI systems learn by inferring algorithms from data sets (Jones et al., 2018). To operationalise this, the systems are fed with training data, mostly taken from large cohort studies. The more good-quality training data a system is fed, the more accurate its algorithms become. That also means that the quality of the algorithm, its accuracy and discriminatory power, depends largely on the quality of the training data, and a lack of quality can lead to bias. A bias within the training data is often perpetuated in the system’s algorithm, and bias and complexity management often go hand in hand. This is largely due to the fact that, prior to processing for analysis, the data is decontextualised: as part of data preparation, the data becomes separated from its specific spatial, temporal, or social determinants. Racial bias is a prominent example of the connection between complexity management in the shape of decontextualisation and bias. Obermeyer et al. (2019) demonstrated this connection in a striking way by analysing an algorithm used within the US health services. The task of this algorithm was to identify people with complex health needs. The algorithm assessed individual health costs and predicted future costs, following the basic assumption that individuals on whom more health spending has been expended in the past have higher health needs and should therefore have easier access to health services. Owing to socio-structural and historical inequities, however, less is spent on health care for certain groups (e.g., African-American patients), so the algorithm assigns them a lower risk score. This example demonstrates that the inherent technological need to reduce complexity in order to process data efficiently may lead to bias and thus conflict with the basic goal of providing enhanced person-centred health care. The result is a harmful feedback loop: health disparities manifest themselves in training data that misrepresent actual health needs; biased algorithms and predictive models are then built on these training data, which in turn aggravate the structural discrimination already inherent in the healthcare system (Walsh et al., 2020).
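
The mechanism behind this feedback loop can be made tangible with a small simulation. The sketch below is purely illustrative and uses invented numbers, not Obermeyer et al.’s data or code; it assumes two groups with identical health needs, one of which has historically received only 60% of the spending for the same need:

```python
# Illustrative simulation (assumption-laden): when an algorithm scores
# "health need" via historical cost, a group that received less spending
# for the same need gets systematically lower scores.

import random
random.seed(0)

def simulate(n=10_000):
    people = []
    for i in range(n):
        group = "A" if i < n // 2 else "B"
        need = random.gauss(50, 15)                  # true need, same for both groups
        spend_factor = 1.0 if group == "A" else 0.6  # structural under-spending on group B
        cost = max(0.0, need * spend_factor + random.gauss(0, 5))
        people.append((group, need, cost))
    # The "algorithm": rank by historical cost, admit the top 20% to a care programme.
    people.sort(key=lambda p: p[2], reverse=True)
    admitted = people[: n // 5]
    for g in ("A", "B"):
        share = sum(1 for p in admitted if p[0] == g) / len(admitted)
        print(f"group {g}: {share:.0%} of programme slots")

simulate()
# Despite identical need distributions, group B receives far fewer slots,
# reproducing the feedback loop described above in miniature.
```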

Age Scripts

AgeTech relies on specific concepts of age, a narrative that defines the characteristics of older adults as well as what age means. These age scripts are written into AgeTech and define the scope and purpose of the technology (Peine et al., 2015). Age scripts can be developed from various sources, such as societal views, design traditions, or individual interpretations (Peine & Neven, 2021). This carries a certain risk that stereotypical or poorly informed views about older adults, their needs and resources, and their desired way of life become the basic design framework of AgeTech. Such ageist stereotypes typically present older adults in a ‘problem-focused’ way, treating older people as a homogeneous group with associated traits such as frailty, vulnerability, and need of help (Ayalon & Tesch-Römer, 2018). Age itself is then fundamentally viewed as a problem to be solved by AgeTech (Rubeis, 2020). Although enabling agency is the intended goal, limitations arise when the fundamental concepts and purposes of AgeTech are not determined together with the end users (Neven, 2015; Peine et al., 2015). The result is a sense of ‘benevolent paternalism’ that predefines the appropriate lifestyle of older adults without including them in the design process (Manzeschke et al., 2016). This is especially the case when digital equity is not considered as part of the intended goal.

Focusing on equity may help to diversify training data as well as the basic scripts that are written into the technologies. It is especially important to acknowledge that older adults do not constitute one homogenous group, but a diverse population who are shaped by a complex interplay of social determinants.

Complexity Management and Standardisation

Standardisation means that certain parameters must be predefined for AI-based systems to process. For example, a crucial parameter in some systems for fall detection is gait pattern (Pilotto et al., 2018). These systems measure the typical gait pattern of a person, and whenever the gait pattern changes or deviates from the standard, the system interprets this as a sign of an impending fall. The system may then trigger an alarm and inform caregivers. The issue here is not so much the definition of a standard as such, since it is defined on the basis of a person’s individual health data. What is problematic is the fact that specific parameters are defined as objective indicators of health and well-being. This suggests that primarily qualitative categories like well-being can easily be quantified by choosing the right parameters. This is an oversimplification that ignores the rich context of a person’s own attitudes towards well-being and health, forcing them to adapt to pre-fixed definitions of well-being.
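
A minimal sketch of this deviation-from-baseline logic might look as follows; the single gait-speed feature, the window sizes, and the alert threshold are hypothetical simplifications for illustration, not parameters from Pilotto et al. (2018):

```python
# Minimal sketch of deviation-from-baseline fall-risk detection.
# Feature choice and threshold are invented for illustration.

from statistics import mean, stdev

def fall_risk_alert(baseline: list[float], recent: list[float],
                    z_threshold: float = 2.5) -> bool:
    """Flag possible elevated fall risk when recent gait speed deviates
    from the person's own baseline by more than z_threshold standard
    deviations -- the person's past behaviour becomes the 'standard'."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(recent) - mu) / sigma
    return z > z_threshold

baseline_speeds = [1.10, 1.05, 1.12, 1.08, 1.11, 1.07]  # m/s over past weeks
recent_speeds = [0.82, 0.79, 0.85]                       # this week
print(fall_risk_alert(baseline_speeds, recent_speeds))   # True -> notify caregivers
```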

In the fall-detection example, this may not seem like an issue, since it is quite reasonable to define gait pattern as a parameter for assessing fall risk. Consider, however, systems that recognise whether shutters are open or closed and, if closed, how long they remain closed during the daytime. Such systems draw conclusions about the mental health status of a person, because shutters kept closed throughout the day may imply a mood change, or even a depressive episode. In this case it becomes clear that complexity management (closed shutters equal depression) can lead to a standardisation of behaviour.

Standards and standardised parameters play a key role in all data-driven technologies. The scope and characteristics of the data deemed useful or viable depend on the parameters for measuring them. In the context of digital health, the main parameters are digital biomarkers. Biomarkers are biomedical or behavior-related indicators used to measure, predict, or evaluate health-related outcomes (Guthrie et al., 2019; Sim, 2019). Raw data as detected by sensors are of no use to health care professionals without predefined biomarkers that allow for scaled and contextualised data. Digital biomarkers are therefore crucial for personalised interventions such as AgeTech.
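
The step from raw sensor output to a digital biomarker can be sketched as follows (an assumption-laden illustration; the threshold and the “active minutes” definition are invented, and real biomarkers would require clinical validation):

```python
# Sketch: reducing raw sensor output to a single digital biomarker.
# Threshold and definition are hypothetical, for illustration only.

def active_minutes(accel_magnitudes: list[float], threshold: float = 1.2) -> int:
    """Reduce a day of per-minute accelerometer magnitudes (raw data)
    to one scaled biomarker: minutes of at-least-moderate activity."""
    return sum(1 for m in accel_magnitudes if m >= threshold)

raw_day = [0.9, 1.0, 1.4, 1.6, 1.1, 1.3, 0.8]  # toy per-minute readings
print(active_minutes(raw_day))                  # -> 3 "active minutes"
# The raw list is meaningless to a clinician; the biomarker is interpretable,
# but only relative to the predefined threshold that created it.
```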

In AgeTech, digital biomarkers may be used in sensor technologies, either in smart home sensors or smart wearables, in order to measure and predict changes in symptoms and behavior. Digital biomarkers have to be defined, meaning that a certain trait has to be singled out and deemed a viable indicator. This may not be problematic in a biomedical context, e.g., determining blood pressure as a biomarker for cardiovascular health. However, even in this context, blood pressure would first and foremost be seen as one factor among others with which it interacts to constitute cardiac health. Regarding individual behavior, it is even more difficult to define clear-cut biomarkers. One example is a framework for recognising and regulating emotions in older adults presented by Castillo et al. (2014). The framework consists of sensors that measure physiological signals, facial expression, and voice in order to determine the emotional status of a person. A social robot reacts to these emotions, and the color, lighting, and music in the room are adapted in order to enable positive emotions. In this setting, certain facial expressions, for example, are defined as indicators of a certain emotional state, and a certain colour scheme or music is defined as a means of regulating emotions. The underlying assumption is that clear-cut parameters can be defined for distinguishing between positive and negative emotions as well as for regulating the negative ones. This blurs the line between support and conditioning, i.e. the regulation of a person’s conduct. Conditioning signifies an action directed at creating a specific behavior that is deemed appropriate or desirable. For example, some stakeholders may have an interest in reducing the costs of health care services. Instead of modifying health care services to address the specific needs of individuals, methods of conditioning could then be applied to shape the individual to fit the most cost-effective service. In the aforementioned example, the use of technology for regulating emotion may be understood as a way to achieve a certain behavioural standard that offers cost-effectiveness, as opposed to providing person-centred care and tailored services. The same method could be applied to condition people to eat healthily or exercise for cost-saving purposes.
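
A hedged sketch of the kind of clear-cut parameterisation at issue is given below; every threshold and mapping is invented for illustration and is not taken from the Castillo et al. framework. It shows how a normative standard of “positive” emotion, once hard-coded, is applied regardless of context:

```python
# Illustrative sketch: hard-coded emotion parameters and a fixed
# intervention. All thresholds and mappings are invented.

def classify_emotion(heart_rate: int, smile_score: float) -> str:
    # The designer decides, once and for all, what counts as "negative".
    if heart_rate > 90 or smile_score < 0.3:
        return "negative"
    return "positive"

def adapt_room(emotion: str) -> dict:
    # A fixed intervention nudges the person toward the behavioural
    # standard encoded above, regardless of context.
    if emotion == "negative":
        return {"lighting": "warm", "music": "calm playlist"}
    return {"lighting": "unchanged", "music": "unchanged"}

print(adapt_room(classify_emotion(heart_rate=96, smile_score=0.2)))
# -> {'lighting': 'warm', 'music': 'calm playlist'}
# Grief over a memorial photo and clinical low mood trip the same rule:
# the parameters cannot distinguish appropriate sadness from a symptom.
```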

Conditioning has already been recognised as a risk in AgeTech (Hummel & Braun, 2020; Manzeschke et al., 2016; Mortenson et al., 2015; Petrakaki et al., 2018; Rubeis, 2020). What is seldom discussed is the connection between epistemological and ethical aspects in this context (Morley & Floridi, 2020). On this view, the problem resides in the underlying assumption that objective parameters can be defined for appropriate or desirable emotions and behavior. There is a practical and ethical risk of creating parameters deliberately designed to produce a specific behavior – a type of nudging (Thaler & Sunstein, 2008) that contradicts the goal of person-centred care. An equity-focused approach could minimise the risks of standardisation by taking the diversity of user preferences, needs, and resources into account, thus providing a more balanced and person-centred baseline for technology design.

Complexity Management and Access

Another ethical quandary of complexity management is that it manifests itself in AI design that is optimised with limited consideration for issues relating to access, e.g., a lack of cultural appropriateness. “Culture” in this context may refer to certain codes, concepts, and behaviors that stem from the ethnic identity of older adults. It may, however, also be understood in terms of the life experience of certain generations. The way in which individuals have been exposed to technology across their lifespan influences their attitudes and behaviour towards AgeTech in old age. Technology that is designed without consideration for cultural appropriateness can be more difficult to access, resulting in reduced uptake.

Closely aligned with the issue of access is usability, which also links to cultural appropriateness. For example, digital literacy, or more precisely the lack of it, can be an access barrier when it comes to AI-based AgeTech. This is especially the case with technologies that require direct interaction, such as human–machine interfaces, or technologies for self-management and self-monitoring common in mHealth and telehealth (Fang et al., 2018). A lack of experience with digital technologies or of affinity towards their use may thus prevent older adults from accessing them (Fang et al., 2019). Empirical evidence shows that digital literacy and health outcomes correlate with education, age, gender, and socioeconomic status (Ang et al., 2021).

The need for complexity management in AI-based AgeTech also affects access to AgeTech. One important factor in this regard is diversity within the older population (Fang et al., 2018; Haufe et al., 2019). Older adults vary not only in health status, ethnicity, and socioeconomic status, but also in health literacy and affinity for technology. These factors often interact and create dynamics that may prevent access to AgeTech. As a result, the most complex outcomes of digital health disparities arise across social intersections, e.g., between age, gender, ethnicity, and socioeconomic status (Fang et al., 2019). Studies show that people over 65 generally have less desire and intent to use ICTs (Fang et al., 2018b). Older adults with less exposure to technology throughout their working life in particular show a lower uptake of digital health technologies (Fang et al., 2019).

Other reasons that impact technology use are health-related. Co-morbidities and functional disabilities, e.g., vision impairment or difficulties in memorising passwords, are barriers to digital access in this regard (Chen & Chan, 2013; Fang et al., 2018). This is especially the case when interfaces and systems lack the appropriate level of user-friendliness (Haufe et al., 2019).

AgeTech access and uptake is further complicated when gender is added as another layer of social complexity. In general, women use ICTs less than men, but use them for health purposes more often. Education, socioeconomic status, and age interact in this regard, since younger, more educated women are more likely to use ehealth than older, less educated men with low incomes (Fang et al., 2018). Last, individual beliefs, attitudes, and fears of technology are associated with age and generation and often play a role in preventing older adults from using AgeTech (Chen & Chan, 2013; Fang et al., 2018; Haufe et al., 2019). Older adults often consider themselves incapable of handling the technologies due to the aforementioned factors and their perceived age or lack of digital competencies. Internalised stigma, including negative self-perceptions of being frail and in need of help when using AgeTech, also hinders technology use (Haufe et al., 2019). Such beliefs and attitudes may result from a lack of experience with technologies or from educational status, with the root of such perceptions shaped by intersecting social factors such as ethnicity, gender, and socioeconomic status.

As a consequence, the interplay of all of these technology deterrents creates a situation in which the individuals who could benefit the most from AgeTech are the same people who experience the greatest access barriers (Fang et al., 2018a). A key explanation for this may be that the combined effect of social determinants is overlooked when designing AgeTech. A focus on equity from the start of the design process is thus crucial.

Discussion and Recommendations

Although complexity management as an explicit concept has not been widely discussed so far, several strategies have been suggested for dealing with its outcomes. We provide a short overview of these strategies, based on the main currents in the ethics of AI-based health technologies in the research literature. These strategies are often labelled as the democratisation of AI-based health technologies and consist of at least three objectives: (1) diversifying training data, models, and algorithms; (2) engaging relevant stakeholders as well as communities in the design and implementation of AI-based health technologies; and (3) granting access to AI-based health technologies to all social groups and on a global scale.

Diversifying Training Data, Models, and Algorithms (1)

Discrimination and bias in AI-based AgeTech may be a result of ignoring social determinants when selecting training data or designing models and algorithms. This issue is mainly discussed under the header of algorithmic fairness (Wawira Gichoya et al., 2021). One way to address it is to question existing proxies and biomarkers and to include social determinants (Walsh et al., 2020) – meaning that the socio-demographic contexts in which data has been collected must be considered. Furthermore, various social determinants should be actively integrated into data processing. This means that when selecting data sets for training, there needs to be assurance that the data does not focus exclusively on one social group. The epistemic scope and limits have to be evaluated based on the quality of the data in this respect (McCradden et al., 2020). If an algorithm performs poorly for specific groups, additional data from these groups should be collected and introduced into the machine learning process (Walsh et al., 2020). This implies a process of model auditing throughout the design process that focuses on the reliability and validity of models as well as the assessment of confounding errors (McCradden et al., 2020). Upon implementation of these technologies, a local evaluation should also be conducted in order to investigate hidden stratification effects (McCradden et al., 2020). A key takeaway message is that fairness should not be an afterthought or post-hoc consideration (Wawira Gichoya et al., 2021). Rather, fairness in terms of diversifying training data, models, and algorithms should be operationalised through model reporting guidelines, clinical trial guidelines, and regulatory approaches. This is an issue for AI generally and is not limited to older populations. However, the complex intersectionality of factors contributing to ageing, the prevalence of ageist attitudes, and the increasing heterogeneity in later life create additional challenges.
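
In practice, such a model audit can begin with something as simple as computing performance per social group and flagging disparities that exceed a tolerance. The following sketch is our own illustration; the records, group labels, and tolerance value are placeholders, and a real audit would use validated cohorts and clinically meaningful metrics:

```python
# Sketch of a per-group model audit along the lines recommended above.
# Data, groups, and the accuracy metric are placeholders.

from collections import defaultdict

def audit_by_group(records, gap_tolerance=0.05):
    """records: iterable of (group_label, y_true, y_pred) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    acc = {g: hits[g] / totals[g] for g in totals}
    # Flag when the best- and worst-served groups diverge too far.
    flagged = max(acc.values()) - min(acc.values()) > gap_tolerance
    return acc, flagged  # flagged=True -> collect more data for weak groups

records = [("80+", 1, 1), ("80+", 0, 1), ("80+", 1, 0),
           ("65-79", 1, 1), ("65-79", 0, 0), ("65-79", 1, 1)]
accuracy, needs_more_data = audit_by_group(records)
print(accuracy, needs_more_data)  # accuracies ~ {'80+': 0.33, '65-79': 1.0}, True
```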

Engaging Relevant Stakeholders and Communities (2)

Participatory methods for designing AgeTech are widely seen as ways to prevent discrimination and enable equity (López Gómez & Criado, 2021). Engaging relevant stakeholders and communities may be a crucial measure throughout the life cycle of an AI-based AgeTech product. At the research stage, the perspectives of diverse end users may be required to ensure inclusive training data as well as user-centered technologies (Fohner et al., 2019). A community-engaged approach emphasises educational aspects as well as shared decision-making, and mutual benefit may be achieved. This approach may also foster trust in technologies, a crucial aspect of the acceptability of AgeTech (Walsh et al., 2020). Using a community-engaged approach may also help to situate AI-based AgeTech within context-specific healthcare infrastructures and communities. Technologies tailored to the specific requirements of a given context, as opposed to a one-size-fits-all approach, may consequently be more likely to be used by caregivers and care receivers (Fohner et al., 2019). However, this approach has some drawbacks. Engaging stakeholders and communities can be a challenging and often very time-consuming approach, and therefore a more cost-intensive one. Furthermore, participatory processes need moderation in order to integrate and balance different expectations (Merkel & Kucharski, 2019). As with AgeTech itself, there is no one-size-fits-all approach to stakeholder and community engagement. In order to choose the right approach or method, it has to be clarified why a stakeholder- and community-engaging approach should be applied, which individuals, groups, and future users will be included, and at what stage of the product’s lifecycle they will be involved (Merkel & Kucharski, 2019). Another difficulty is how best to involve older adults who are isolated, hard to reach, or non-tech users in the co-development process.

Use of Personas for User-Centered Design (2)

Developing a ‘persona’ and a ‘scenario’ is a technique often used in technology design to ensure that the tech product is conducive to the end-user. A ‘persona’ is a description of a fictitious individual based on data or information from real people (Adlin & Pruitt, 2010), while scenarios provide the context of the persona, including stories of personal experience: a setting or situation, described in detail, in which a person performs a sequence of actions (possibly involving other people) to produce an outcome. In technology design, developing personas and scenarios can be viewed as a tool for considering how products can be delivered in an ethical, socially responsible, and culturally sensitive way. The use of personas and scenarios is intended to promote our empathy with the people we aim to serve (Jackson & Hwang, 2020). While each of us comes from a different background, personas can help tap into our innate human tendency to generate detailed and complex models of people and their behaviours, even when those people are fictitious (Jackson & Hwang, 2020). This is demonstrated by the ways in which we naturally try to relate to or develop understandings of fictional characters in stories or films. Using personas in teaching can help tap into this natural human aptitude that we all possess (Jackson & Hwang, 2020).
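
As an illustration of the technique, a persona and scenario can be captured as structured records that design teams review systematically. The sketch below is our own; the fields are assumptions for illustration, not a schema prescribed by Adlin and Pruitt (2010):

```python
# Illustrative only: a persona and scenario as structured records.
# Field names are assumptions, not a standard schema.

from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    age: int
    living_situation: str
    digital_literacy: str        # e.g., "low", "moderate", "high"
    social_determinants: list[str] = field(default_factory=list)

@dataclass
class Scenario:
    persona: Persona
    setting: str
    goal: str
    barriers: list[str]

amara = Persona("Amara", 81, "lives alone, rural village", "low",
                ["fixed low income", "first language not English"])
morning_checkin = Scenario(
    persona=amara,
    setting="daily medication reminder via tablet app",
    goal="confirm medication taken without a caregiver phone call",
    barriers=["small touch targets", "English-only voice prompts"],
)
print(morning_checkin.goal)  # a concrete case for the design review
```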

Ethical and Responsible AgeTech (3)

Previous research has established that the factors influencing digital marginalisation are multi-faceted and that an intersectional approach needs to be adopted to fully understand how people can be disadvantaged by the increasing reliance on AgeTech (Fang et al., 2019), and, in the context of AI, to identify and understand if and in what ways this technology can be created with the intent to ameliorate traditional forms of marginalisation (and for whom). Various policy pathways have been considered to help ensure AgeTech access and equity and to address other social-determinant barriers. Stix’s (2021, p. 15) notion of actionable principles for AI policy highlights how “successful actionability in policy” requires going beyond AI-based AgeTech ethics principles as a reference point; this can be achieved, in part, by referencing transdisciplinary theoretical perspectives from the disciplines of gender studies (i.e., intersectionality), urban studies (i.e., sense of place), and health sciences (i.e., digital social determinants of health). Critical analysis of existing AI design and policy, viewing the problem area through diverse theoretical viewpoints, can help shed light on how we can better understand and respond to the various ways in which social positioning creates distinct, and often multiple, barriers for various subgroups.

Accordingly, the three AI policy pathways presented by Stix (2021) – (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalisability – can be supported by integrating equity-driven theoretical models and frameworks such as the Social Justice Framework for Bridging the Digital Divide, the Health Equity Impact Assessment (HEIA) tool, and the Intersectional Place Perspective for AgeTech solutions (Fang et al., 2019, 2020; Ontario Ministry of Health and Long-Term Care & Toronto Central LHIN, 2009).

HEIA originated from the Health Impact Assessment methodology and has been heavily used worldwide over the past decade as a decision-making tool to facilitate the development of healthy public policy (Ontario Ministry of Health and Long-Term Care & Toronto Central LHIN, 2009). HEIA can be applied to describe the individuals and groups most affected by the progression of AI in healthcare and to characterise the ways in which inequities can occur, thereby supporting the responsible design of AI interventions and initiatives that optimise equitable access and positive ageing outcomes while identifying factors that may unintentionally exacerbate experiences of vulnerability and disadvantage.

The Social Justice Framework for Bridging the Digital Divide stems from a realist review and affirms that individuals exist within structures and systems designed by and for persons in more advantageous social positions, which creates modes of differentiation across groups and divisive access to digital resources (Fang et al., 2019). This framework may be particularly useful for recognising and responding to the multiple layers of access and use inequities that older people might experience when introduced to AI interventions (Sixsmith et al., 2019).

Last, the Intersectional Place Perspective, designed to identify individual, social, and place-based factors that shape opportunity and oppression, has been used to better understand the combined effects of the digital determinants of health and wellbeing (Fang et al., 2020, 2021). This theoretical model can help to characterise the ways in which AI can compound or alleviate inequity, through consideration of the socio-cultural and environmental contextual factors that shape the lifetime health and wellbeing outcomes of older people – recognising the importance of intersectionality and place. Overall, such equity-driven resources can support AI developers and shape policy pathways by providing a spotlight on how digital social determinants are operationalised in real-life scenarios, which can help to ameliorate the inequities associated with AI design and rollout and to clarify how these technologies might be produced and utilised across different contexts, at scale, and in an ethical and sustainable way.

Granting Access (3)

As we have seen, access barriers may arise when AI-based AgeTech does not fit the needs and resources of users. One strategy for fostering access is universal design, sometimes referred to as inclusive design or design for all (Ma et al., 2021). The aim is to reduce access barriers by designing technologies that are simple and intuitive and allow flexible use with a certain tolerance for error. This requires a well-informed design process that includes user experiences from various user groups. Another factor preventing certain user groups from access is contextual bias (Weissglass, 2021). Most AI-based AgeTech is designed in high-income countries and adapted to the structures as well as the contexts of use within their respective health systems. The resulting technologies might not fit the systemic or institutional conditions in low-income countries. Disclosure and validation policies might be instruments for ensuring that the context of technology design is made transparent (Weissglass, 2021). In this regard too, however, a more inclusive design process that takes diversity on a global scale into account would be needed. Another approach is to make algorithms as well as databases of training data accessible on a global scale. This would give researchers, clinicians, and developers from lower-income countries the opportunity to adapt technologies to their own contexts of use. Finally, strengthening the e-health literacy of older adults may reduce the digital gap, which is one of the main access barriers (Seifert et al., 2019). E-health literacy could be improved by providing learning tools, e.g., through existing educational services. This would mean acknowledging that older adults are not inherently technology-averse and have both the resources and the willingness to learn – something often ignored due to stereotypes about older adults.

Conclusion

AI-based AgeTech has the potential to support older adults in living a more active, independent, and healthier life. By facilitating more personalised healthcare services, AgeTech may be used not only to address deficits, but also to improve the quality of life of users. This potential, however, can only be realised in full if equity is seen as a major principle for the design, implementation, and use of AI-based AgeTech. Equity is neither an add-on nor an after-the-fact consideration. Rather, equity should be the guiding principle in developing AI-based AgeTech from the outset, and should also guide its implementation as well as the practices it enables. This requires a well-informed design process that takes the combined effect of social determinants into account, provides accompanying measures for educating users as well as caregivers, and defines regulatory approaches that address the issues of bias and discrimination.

Funding

No funding was received to assist with the preparation of this manuscript.

Open access funding provided by Karl Landsteiner Privatuniversität für Gesundheitswissenschaften

Competing Interests

The authors have no relevant financial or non-financial interests to disclose.

Footnotes

The original online version of this article was revised: In this article the author name Mei Lan Fang was incorrectly written as Mei Lang Fang.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

12/13/2022

A Correction to this paper has been published: 10.1007/s11948-022-00424-y

Contributor Information

Giovanni Rubeis, Email: giovanni.rubeis@kl.ac.at.

Mei Lan Fang, Email: m.l.fang@dundee.ac.uk.

Andrew Sixsmith, Email: andrew_sixsmith@sfu.ca.

References

  1. Adlin, T., & Pruitt, J. (2010). The essential persona lifecycle: Your guide to building and using personas. Morgan Kaufmann.
  2. Andrews, J. A., Brown, L. J., Hawley, M. S., & Astell, A. J. (2019). Older adults’ perspectives on using digital technology to maintain good mental health: Interactive group study. Journal of Medical Internet Research, 21(2), e11694.
  3. Ang, S., Lim, E., & Malhotra, R. (2021). Health-related difficulty in internet use among older adults: Correlates and mediation of its association with quality of life through social support networks. The Gerontologist, 61(5), 693–702. doi: 10.1093/geront/gnaa096
  4. Astell, A. J., Hwang, F., Williams, E. A., Archer, L., Harney-Levine, S., Wright, D., & Ellis, M. (2016). Well-being and HCI in later life: What matters? International Conference on Human Interface and the Management of Information.
  5. Ayalon, L., & Tesch-Römer, C. (2018). Introduction to the section: Ageism—Concept and origins. In L. Ayalon & C. Tesch-Römer (Eds.), Contemporary perspectives on ageism (pp. 1–10). Springer.
  6. Baez, M., Nielek, R., Casati, F., & Wierzbicki, A. (2019). Technologies for promoting social participation in later life. In Ageing and digital technology (pp. 285–306). Springer.
  7. Challen, R., Denny, J., Pitt, M., et al. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231–237.
  8. Chen, K., & Chan, A. H. (2013). Use or non-use of gerontechnology: A qualitative study. International Journal of Environmental Research and Public Health, 10, 4645–4666. doi: 10.3390/ijerph10104645
  9. Fang, M. L., Siden, E., Korol, A., Demestihas, M. A., Sixsmith, J., & Sixsmith, A. (2018). A scoping review exploration of the intended and unintended consequences of eHealth on older people: A health equity impact assessment. Human Technology, 14(3), 297–323. doi: 10.17011/ht/urn.201811224835
  10. Fang, M. L., Canham, S. L., Battersby, L., Sixsmith, J., Wada, M., & Sixsmith, A. (2019). Exploring privilege in the digital divide: Implications for theory, policy, and practice. The Gerontologist. doi: 10.1093/geront/gny037
  11. Fang, M. L., Sixsmith, J., & Woolrych, R. (2020). An intersectional place perspective for AgeTech research, policy and practice. Gerontechnology, 19.
  12. Fang, M. L., Wong, K., Remund, L., Sixsmith, J., & Sixsmith, A. (2021). Technology access is a human right! Illuminating intersectional, digital determinants of health to enable agency in a digitized era. In Technology, Mind & Society 2021 Conference Proceedings. American Psychological Association.
  13. Fleming, A., Mason, C., & Paxton, G. (2018). Discourses of technology, ageing and participation. Palgrave Communications, 4(1), 1–9. doi: 10.1057/s41599-018-0107-7
  14. Fohner, A. E., Volk, K. G., & Woodahl, E. L. (2019). Democratizing precision medicine through community engagement. Clinical Pharmacology & Therapeutics, 106(3), 488–490. doi: 10.1002/cpt.1508
  15. Guthrie, N. L., Carpenter, J., Edwards, K. L., Appelbaum, K. J., Dey, S., Eisenberg, D. M., Katz, D. L., et al. (2019). Emergence of digital biomarkers to predict and modify treatment efficacy: Machine learning study. BMJ Open, 9(7), e030710. doi: 10.1136/bmjopen-2019-030710
  16. Hankivsky, O. (2014). Intersectionality 101. The Institute for Intersectionality Research & Policy, SFU.
  17. Haufe, M., Peek, S. T. M., & Luijkx, K. G. (2019). Matching gerontechnologies to independent-living seniors’ individual needs: Development of the GTM tool. BMC Health Services Research, 19(1), 26. doi: 10.1186/s12913-018-3848-5
  18. Hummel, P., & Braun, M. (2020). Just data? Solidarity and justice in data-driven medicine. Life Sciences, Society and Policy, 16(1), 8. doi: 10.1186/s40504-020-00101-7
  19. Jackson, P., & Hwang, A. (2020). Design tools: Transforming ideas into products. In A. Sixsmith, J. Sixsmith, A. Mihailidis, & M. L. Fang (Eds.), Knowledge, innovation, and impact in health: A guide for the engaged researcher (pp. 211–220). Springer.
  20. Jones, L. D., Golan, D., Hanna, S. A., & Ramachandran, M. (2018). Artificial intelligence, machine learning and the evolution of healthcare: A bright future or cause for concern? Bone & Joint Research, 7(3), 223–225. doi: 10.1302/2046-3758.73.BJR-2017-0147.R1
  21. López Gómez, D., & Criado, T. S. (2021). Civilising technologies for an ageing society? The performativity of participatory methods in socio-gerontechnology. In A. Peine, B. L. Marshall, W. Martin, et al. (Eds.), Socio-gerontechnology: Interdisciplinary critical studies of ageing and technology (pp. 85–98). Routledge.
  22. Ma, C., Guerra-Santin, O., & Mohammadi, M. (2021). Smart home modification design strategies for ageing in place: A systematic review. Journal of Housing and the Built Environment. doi: 10.1007/s10901-021-09888-z
  23. Manzeschke, A., Assadi, G., & Viehöver, W. (2016). The role of big data in ambient assisted living. International Review of Information Ethics, 24(05), 22–31.
  24. McCradden, M. D., Joshi, S., Anderson, J., Mazwi, M., Goldenberg, A., & Shaul, Z. (2020). Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning. Journal of the American Medical Informatics Association, 27(12), 2024–2027. doi: 10.1093/jamia/ocaa085
  25. Merkel, S., & Kucharski, A. (2019). Participatory design in gerontechnology: A systematic literature review. The Gerontologist, 59(1), e16–e25. doi: 10.1093/geront/gny034
  26. Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics, 22, 303–341.
  27. Morley, J., & Floridi, L. (2020). The limits of empowerment: How to reframe the role of mHealth tools in the healthcare ecosystem. Science and Engineering Ethics, 26, 1159–1183. doi: 10.1007/s11948-019-00115-1
  28. Mortenson, W. B., Sixsmith, A., & Woolrych, R. (2015). The power(s) of observation: Theoretical perspectives on surveillance technologies and older people. Ageing & Society, 35, 512–535. doi: 10.1017/S0144686X13000846
  29. Neven, L. (2015). By any means? Questioning the link between gerontechnological innovation and older people’s wish to live at home. Technological Forecasting and Social Change, 93, 32–43. doi: 10.1016/j.techfore.2014.04.016
  30. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
  31. Ontario Ministry of Health and Long-Term Care, & Toronto Central LHIN (2009). Workbook: Using the health equity impact assessment tool.
  32. Peine, A., & Neven, L. (2021). The co-constitution of ageing and technology: A model and agenda. Ageing and Society, 41(12), 2845–2866. doi: 10.1017/S0144686X20000641
  33. Peine, A., Faulkner, A., Jæger, B., & Moors, E. H. M. (2015). Science, technology and the ‘grand challenge’ of ageing: Understanding the socio-material constitution of later life. Technological Forecasting and Social Change, 93, 1–9. doi: 10.1016/j.techfore.2014.11.010
  34. Petrakaki, D., Hilberg, E., & Waring, J. (2018). Between empowerment and self-discipline: Governing patients’ conduct through technological self-care. Social Science & Medicine, 213, 146–153. doi: 10.1016/j.socscimed.2018.07.043
  35. Pilotto, A., Boi, R., & Petermans, J. (2018). Technology in geriatrics. Age and Ageing, 47, 771–774. doi: 10.1093/ageing/afy026
  36. Rubeis, G. (2020). The disruptive power of artificial intelligence: Ethical aspects of gerontechnology in elderly care. Archives of Gerontology and Geriatrics, 91, 104186. doi: 10.1016/j.archger.2020.104186
  37. Safdar, N. M., Banja, J. D., & Meltzer, C. C. (2020). Ethical considerations in artificial intelligence. European Journal of Radiology, 122, 108768. doi: 10.1016/j.ejrad.2019.108768
  38. Seifert, A., Reinwand, D. A., & Schlomann, A. (2019). Designing and using digital mental health interventions for older adults: Being aware of digital inequality. Frontiers in Psychiatry, 10, 568. doi: 10.3389/fpsyt.2019.00568
  39. Sim, I. (2019). Mobile devices and health. New England Journal of Medicine, 381, 956–968. doi: 10.1056/NEJMra1806949
  40. Sixsmith, A. (2006). New technologies to support independent living and quality of life for people with dementia. Alzheimer’s Care Quarterly, 7(3), 194–202.
  41. Sixsmith, A. (2021). AgeTech: Technology-based solutions for aging societies. In I. Rootman, P. Edwards, M. Levasseur, & F. Grunberg (Eds.), Promoting the health of older adults: The Canadian experience. Canadian Scholars (forthcoming).
  42. Sixsmith, J., Fang, M. L., Woolrych, R., Canham, S., Battersby, L., Ren, T. H., & Sixsmith, A. (2019). Ageing-in-place for low-income seniors: Living at the intersection of multiple identities, positionalities, and oppressions. In O. Hankivsky & J. S. Jordan-Zachery (Eds.), The Palgrave handbook of intersectionality in public policy (pp. 641–664). Springer.
  43. Stix, C. (2021). Actionable principles for artificial intelligence policy: Three pathways. Science and Engineering Ethics, 27(1), 1–17. doi: 10.1007/s11948-020-00277-3
  44. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
  45. Verloo, H., Kampel, T., Vidal, N., & Pereira, F. (2020). Perceptions about technologies that help community-dwelling older adults remain at home: Qualitative study. Journal of Medical Internet Research, 22(6), e17930.
  46. Walsh, C. G., Chaudhry, B., Dua, P., Goodman, K. W., Kaplan, B., Kavuluru, R., et al. (2020). Stigma, biomarkers, and algorithmic bias: Recommendations for precision behavioral health with artificial intelligence. JAMIA Open, 3(1), 9–15. doi: 10.1093/jamiaopen/ooz054
  47. Wawira Gichoya, J., McCoy, L. G., Celi, L. A., & Ghassemi, M. (2021). Equity in essence: A call for operationalising fairness in machine learning for healthcare. BMJ Health & Care Informatics, 28, e100289. doi: 10.1136/bmjhci-2020-100289
  48. Weissglass, D. E. (2021). Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics. doi: 10.1111/bioe.12927
  49. World Health Organization (2010). A conceptual framework for action on the social determinants of health. https://apps.who.int/iris/handle/10665/44489 (accessed March 1, 2022).
  50. Wong, P. H. (2020). Democratizing algorithmic fairness. Philosophy & Technology, 33, 225–244.
