Abstract
Sammen Om Demens (together for dementia), a citizen science project developing and implementing an AI-based smartphone app targeting citizens with dementia, is presented as an illustrative case of ethical, applied AI entailing interdisciplinary collaboration and inclusive, participative scientific practices that engage citizens, end users, and potential recipients of technological-digital innovation. Accordingly, the participatory Value-Sensitive Design of the smartphone app (a tracking device) is explored and explained across all of its phases (conceptual, empirical, and technical): from value construction and value elicitation to the delivery, after various iterations engaging both expert and non-expert stakeholders, of an embodied prototype built on and tailored to their values. The emphasis is on how moral dilemmas and value conflicts, often resulting from diverse people's needs or vested interests, have been resolved in practice to deliver a unique digital artifact with moral imagination that fulfills vital ethical–social desiderata without undermining technical efficiency. The result is an AI-based tool for the management and care of dementia that can be considered more ethical and democratic, since it meaningfully reflects diverse citizens' values and expectations of the app. In the conclusion, we suggest that the co-design methodology outlined in this study is suitable for generating more explainable and trustworthy AI, and that it helps advance towards technical-digital innovation with a human face.
Keywords: Value-sensitive design, Applied AI, Citizen science, Stakeholder engagement, eHealth and mHealth technologies, Explainable and trustworthy AI
Introduction
By definition, Value-Sensitive Design (VSD) is a popular tripartite and iterative methodology for the design of ethical technologies, comprising three main phases: conceptual, empirical, and technological (see [36, 52, 53, 131–133] inter alia). It has been applied to develop manifold ethical technological-digital tools and infrastructures (more details in Sect. 3). As Friedman et al. sum up, VSD is "a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process" (Friedman et al. [56], p. 1; see also, Friedman et al. [55]). More recently, VSD has been connected to the so-called "AI for social good" [129].
In this paper, we aim to demonstrate how VSD can profitably be applied to the design of AI, digital health, and mHealth/eHealth1 technologies targeting vulnerable individuals, such as people living with dementia, who are directly involved in all three of its phases. The World Health Organization (WHO) defines mHealth/eHealth as "the use of mobile and wireless devices to improve health outcomes, healthcare services and, related research" ([141], p. 6). Consistently, an AI-based smartphone app belonging to such mHealth/eHealth technologies has been developed in the Sammen Om Demens (SOD) project (together for dementia). This project is promoted by the Danish municipality of Nyborg (Southern Denmark region) within the frame of "smart cities"2 initiatives and an overall strategy for the management and care of dementia3 that actively involves the local community. Briefly, the SOD app turns a smartphone into an AI-based tracking device (a form of so-called surveillance technology) that automatically detects when a person with dementia gets lost and then triggers an alert to family members and the nearest local volunteers, jointly involving them in a rescue operation (obviously, all of them must have previously downloaded the app on their smartphones). A citizen science stance has been adopted to validate an original co-design approach; namely, an inclusive and participative VSD that entails proactively engaging relevant stakeholders (both experts and non-experts), end users, and potential recipients of the technology in question in its technical development and in the outline of a protocol for its functioning. The rationale behind this strategy is to strike a legitimate balance between the users' personal privacy and the functionality of the app (i.e., its surveillance ability).
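To make the alert mechanism described above more concrete, the following minimal sketch ranks local volunteers by great-circle distance from the lost user's last GPS fix and notifies the nearest ones. All names, data structures, and the choice of the haversine formula are illustrative assumptions on our part, not the SOD implementation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_volunteers(cwd_pos, volunteers, k=3):
    """Return the ids of the k volunteers closest to the lost person's position.

    `volunteers` is a list of (volunteer_id, lat, lon) tuples; in the app,
    these would come from the profiles of users who installed it.
    """
    lat, lon = cwd_pos
    ranked = sorted(volunteers, key=lambda v: haversine_km(lat, lon, v[1], v[2]))
    return [v[0] for v in ranked[:k]]
```

In an actual deployment the selection would of course also filter by availability and consent, aspects that, as discussed below, are exactly the kind of protocol details negotiated with stakeholders.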
Although a univocal definition is lacking, citizen science is often used as an umbrella term to describe a variety of ways in which citizens and the public can participate in research activities producing knowledge (as data collectors and co-analysts, as policy co-developers, and so on), inspired by the principles of co-creation, cooperation, sustainability, and social impact. Frequently, the term is employed to bring together scientific ideals such as participatory science, post-normal science, civic science, and crowd science (see [68, 138]). A more recent movement has been led by the European Citizen Science Association (ECSA), wherein a multidisciplinary community of scholars and practitioners has established the ten leading principles of citizen science [65, 66, 102]4. While providing new principles or definitions of citizen science is not an aim of this study, the general ideal behind this stance has been adopted to frame the participatory VSD of the SOD mHealth app and to justify the direct involvement of a wide range of stakeholders: first and foremost citizens with dementia (CwD), but also relatives, social workers, healthcare professionals, and citizen representatives from the (local) public in the making of a prototype and the establishment of an agreed protocol for the app's implementation and usage.
The pragmatic approach adopted in this study is rather original for two main reasons: (a) engaging stakeholders with cognitive or mental problems is not a common praxis, either in dementia studies [12, 16, 109]5 or in the design of eHealth/mHealth technologies aimed at improving mental healthcare for individuals with cognitive impairments (see Maathuis et al. [84], p. 872); (b) in the existing VSD literature, few studies have reported complete iterations of the promised enhanced design practices incorporating values (see [140], p. 3), and this trend extends to eHealth/mHealth technologies. A remarkable exception concerning the involvement of patients with mental illnesses in VSD is the design of a "web-based QoL-instrument" for measuring people's mental health problems (see Maathuis et al. [84]).6 Here, values such as autonomy, efficiency, empowerment, universal usability, privacy, redefinition of roles, (re)distribution of responsibilities, reliability, solidarity, surveillance, and trust are seen as central to the technology under development. Although stakeholders' values are elicited by means of participative practices and empirical methods, a persistent limitation of that study is that it reports only the first stages of VSD (the conceptual and empirical phases). The most important phase of VSD, namely the technical phase in which previously selected values are weighed and prioritized to be meaningfully embodied in the prototype of a brand-new technology of ethical and social importance, is missing.
This is not surprising, since translating a set of abstract values into concrete design requirements and technical solutions is a big challenge in technological design and a notoriously pending task in current AI studies (see [35] Ch. 6–12, [89]). That is, producing embodied AI and technological-digital tools that not only reflect desirable values and virtuous principles but are also capable of producing the expected positive ethical–social effects once implemented in society (e.g., AI enhancing human agency, trust, or fairness) has proven difficult. Due to an exclusive focus on the conceptual level and on selecting a set of universal values, mainstream accounts in AI ethics [50, 51, 125] or digital humanities [14] seem unable to satisfactorily link theory and practice. As recently evidenced by Gerdes [61], despite substantial theoretical advancements in AI ethics, such as the emerging field of "AI ethics by design" [21, 129], both interdisciplinary teamwork and practical applications of participatory methods, jointly involving AI experts and citizens/the public in the design of AI-based technologies and algorithms, are still rather infrequent at an applied science level. However, these aspects seem crucial to advance current applied research on explainable AI and trustworthy AI, especially in view of engendering public trust in algorithmic decisions and the technical-digital innovation generated [48, 49, 95, 98]. Both interdisciplinarity and participatory design practices, we maintain, can help open the "black box" in ways that make the resulting AI and algorithms publicly acceptable; namely, AI and algorithms can be perceived as effective, fair, secure, and trustworthy when the design process is sufficiently transparent (inclusive, participative, and value-sensitive), and thus outputs are well understood by recipients.
In this paper, the shortcomings regarding interdisciplinarity and stakeholder participation in the fields of AI ethics and AI ethics by design are addressed by focusing on the participatory VSD of the SOD mHealth app. While we do not discuss why the design of AI should involve ethical considerations (this is assumed from the beginning) or specific values (as is usual in AI ethics), the main focus is on the rationale of the design process, from which pragmatic considerations are extrapolated. In contrast to standard approaches to ethical AI, this study is less concerned with discussing ethical objections arising in the design of AI; instead, it concentrates on how both elements (interdisciplinarity and stakeholder participation) have been accommodated in the actual making of the SOD app and clarifies the advantages for delivering ethical AI with higher social relevance and utility. Quite originally, the following sections report all phases of VSD: from (empirical) value construction and value elicitation to the delivery, after several iterations, all of them actively engaging a wide range of diverse stakeholders, of an embodied prototype meaningfully built on their values. In targeting the stakeholder engagement and participatory design activities within the SOD app/project, the emphasis is on how value dilemmas and value conflicts, regularly resulting from diverse people's needs or vested interests, have been resolved pragmatically to deliver a unique digital artifact with moral imagination that fulfills vital ethical–social–democratic desiderata without weakening technical efficiency (as technical scholars often assume it would). To our knowledge, it is the first time that an AI-based technology for the management and care of dementia (a comprehensive review in [67]) has been developed by directly involving a wide range of stakeholders, including CwD, through innovative, participatory VSD methods (on that, see also Andersen and Chiarandini [7]).
In the next sections, the importance of adopting a procedural ethics stance and procedural values (i.e., an original, procedural-deliberative VSD), and thus of choosing the design values underlying the SOD app by means of empirical techniques grounded in a dialogic-deliberative rationale, is stated and defended against rival views (Sect. 3). Next, we expound on how different people's values have been elicited by means of an inclusive and participative empirical procedure and translated into concrete design requirements and specific embodied technical-digital solutions (Sects. 4, 4.1, 4.2, and 5). Finally (Sect. 6), we suggest that the co-design methodology resulting from the participatory VSD of the SOD mHealth app could profitably be extended to other AI fields and digital technologies to generate genuinely human-centered and society-oriented AI and technological-digital innovation.
The case: the Sammen Om Demens project (together for dementia)
Sammen Om Demens (SOD) is a citizen science project co-funded by the municipality of Nyborg (Southern Denmark region) and TrygFonden (a private foundation)7 that is developing a smartphone app targeting citizens with mild-to-moderate forms of dementia, with the aim of enhancing their and their relatives' safety, well-being, and quality of life. The SOD mHealth app8 is the result of a close collaboration of public and private actors with a university9 to carry out genuine interdisciplinary teamwork and participatory science practices engaging relevant stakeholders. The methodology known as VSD has been adopted to conceptualize and develop both the user interfaces and (partly) the detection algorithms underlying the surveillance technology in question. As anticipated, the SOD mHealth app turns a smartphone into a tracking device aimed at helping citizens with dementia (CwD) who get lost (so-called "wandering" behavior) to get in contact with their relatives and be rescued with the support of volunteering citizens. If a CwD deviates from his/her usual route, the SOD app automatically triggers an alarm that activates a relative and the three nearest volunteers (whose contacts and profiles are recorded by the app), who will intervene to bring the lost person to a safe place (e.g., the local dementia centre, a police station, and the like) and finally meet the next of kin, who is in charge of managing and coordinating the entire "rescue operation" (as we named this task in the SOD project). For final users (CwD, volunteers, and relatives), the SOD app consists of three main components, each supplying a different functionality. First, a knowledge bank10 providing citizens with ready-to-use information on dementia as a disease and the related services offered by the municipality of Nyborg.
This part includes guidelines for recognizing the disease and obtaining advice on how to get aid and assistance from local health authorities specialized in the treatment and management of dementia and affected people. Second, a help component embedding the main functionalities of the app, which, in case of need, activates relatives and the three nearest volunteers. This is the element that has been developed by applying participatory VSD methods; indeed, how stakeholders' input has been used to shape its main technical features is described extensively in Sect. 5. Finally, the SOD app includes a recreational activity calendar meant to facilitate social gatherings and enable encounters between CwD and local volunteers who share the same interests. An additional task of this last component (currently inactive)11 will be to assist CwD in finding accompanying persons for events or activities of common interest. A conceptual map summarizing the content of the SOD app is shown in Fig. 1.
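To clarify the kind of detection logic behind the help component, the following is a purely hypothetical sketch of route-deviation monitoring. The waypoint representation of the "usual route", the 0.5 km threshold, and the patience counter are our own illustrative assumptions, not the SOD app's actual algorithms:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

class DeviationMonitor:
    """Flags a possible 'wandering' episode when several consecutive GPS
    fixes all fall farther than `threshold_km` from every waypoint of the
    user's usual route; the patience counter damps GPS noise."""

    def __init__(self, usual_route, threshold_km=0.5, patience=3):
        self.usual_route = usual_route      # list of (lat, lon) waypoints
        self.threshold_km = threshold_km
        self.patience = patience
        self.off_route_count = 0

    def update(self, position):
        """Feed one GPS fix; return True if an alarm should be raised."""
        off_route = all(haversine_km(position, w) > self.threshold_km
                        for w in self.usual_route)
        self.off_route_count = self.off_route_count + 1 if off_route else 0
        return self.off_route_count >= self.patience
```

In such a design, the trade-off between a low threshold (better safety, more false alarms) and a high one (more privacy, later detection) is precisely the kind of value conflict that the participatory process described in the following sections is meant to settle.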
The participatory VSD of the SOD mHealth app: the theory
In recent times, different ethical methods have been devised to incorporate human values into the design of AI and technological innovation, and they have been applied to a number of different fields to develop digital-technological artifacts embracing an ethical stance (comprehensive reviews in [43, 140]).
In particular, VSD12 was first applied successfully to the field of information and communication technologies (ICTs) (see [18, 19, 47, 55, 57, 87]), and over the years the approach has been extended to the fields of energy technologies [28, 40, 91], artificial intelligence [29, 105, 128, 129], and nanotechnology [126, 127]. In the same line, VSD has been employed to develop specific medical technologies. For example, Denning et al. [39] used the VSD framework to inform the design of security mechanisms for wireless implantable medical devices (IMDs). While Schikhof et al. [115], Dahl and Holbø [37], and Burmeister [26] applied the principles of VSD to develop assistive technologies to be used in dementia care, Van Wynsberghe [135] used VSD to create a framework for the design of care robots. Whereas VSD has predominantly been applied in the field of Human–Computer Interaction (HCI),13 medical and care technologies in somatic health care, including dementia care, have benefited from this approach too. However, VSD has barely been used to support the design of eHealth/mHealth technological innovation (an exception is Maathuis et al. [84]), where ethical reflection is urgently needed to figure out which values can be ascribed to similar tools and how they would shape future technical developments and actual uses (on the need for ethical insights, see [27, 44, 82, 83, 110]). In this kind of research, one foremost critical aspect, from both a philosophical and a practical standpoint, is how to select the values of ethical and social importance (and according to what ethical theory) that VSD should rely on to develop ethical AI and digital technologies capable of addressing the moral dilemmas and/or solving the social problems associated with the technology under construction, understood in its concrete settings of use [76, 85].
Two issues are central in practical applications:
The values of whom? Should values be selected (a) by expert judgment, (b) by engaging stakeholders from civil society, or (c) by relying on (powerful) institutional promoters' and/or technologist-designers' values?
Values should be elicited by means of what methods? (a) By abstract ethical reasoning and conceptual analysis, (b) by a document analysis of former studies on similar technologies, or (c) by applying empirical techniques (quantitative/qualitative methods) to engage stakeholders, end users, potential recipients, and the public?
Although recent appraisals of studies in AI ethics have shown that the philosophical underpinnings of this academic field should be rethought, since only a small number of texts mention any major philosophical tradition or concept [9], some core meta-ethical aspects actually emerge. Mainstream approaches to ethical AI and principled reflection on digital technologies, for instance the so-called "technological design for wellbeing (WB)" and "AI for social good (AI4SG)", regularly adopt substantive ethics; namely, front-loaded, expert-led, and top-down approaches to the selection of the values and ethical principles informing technological-digital development and design practices (see [25, 50, 51, 76, 120, 121, 125, 130] inter alia). This interpretation is also common in the emerging field of "AI ethics by design" (see [21]), also when adopting VSD methods [29, 129], in which relying on predefined ethical frameworks or an "objective list" of (allegedly) universal values is the standard.14 Regularly, AI-specific design principles, such as human autonomy, prevention of harm, fairness, and explicability, are indicated as the best ethical principles to develop prototypes of moral AI and related technologies ([129], p. 287). Even if these values are remarkable, the main limitation (intrinsic to any substantive ethics approach) is that the values and ethical principles informing the design of AI and digital innovation are selected a priori by the theorist according to abstract ethical reflection or, at best, some rather general ethical desiderata are chosen by the most powerful stakeholders (promoters and designers), regardless of the ideals or specific needs of local stakeholders, end users, and other potential recipients. In this way, design values and the moral standards behind them are supposed to be universally valid and applicable, irrespective of the technological-digital product at stake and the diversity in purpose or uses in different contexts and for diverse people.
Substantive ethical views and related ethical design procedures might be particularly detrimental when delivering AI and digital innovation targeting vulnerable individuals (the SOD app is conceived for assisting cognitively challenged people), since their special needs would largely remain without consideration in the design process and, thus, in the embodied outputs. While a focus on substantive ethics and substantive values can be acceptable for formulating general guidelines and/or a posteriori assessment criteria for existing technologies, it seems rather insufficient in the case of technological-digital innovation, especially when it is highly contentious from an ethical–social standpoint. This is the case for surveillance technologies, to which the AI-based SOD mHealth app legitimately belongs. They have been the object of lively debates in recent years, but efforts to go beyond the usual trade-off between privacy and security to ensure wider public acceptance have been scarcely effective [38, 58, 94]. The main problems of substantive ethical views arise precisely when a set of abstract values, all of them reasonably important (the "moral overload" problem depicted by van den Hoven [134]), need to be selected, weighted, and prioritized in order to be translated into concrete design requirements and specific technical solutions. That is, the two phases of VSD suitable to deliver AI4SG identified by van de Poel and Umbrello ([129], pp. 292–3): (a) formulation of design requirements and (b) prototyping (the most important aspects at an applied science level).
In both undertakings (formulating design requirements and prototyping), we argue that the procedural-deliberative VSD introduced by Cenci and Cawthorne [31], whose tenets underpin the participatory VSD of the SOD mHealth app, can be more adequate and fruitful than traditional substantive approaches to the good and value. As stated (cf. [31], p. 2651), and in contrast to (substantive) objective list theories, "the main ethical task does not rely on identifying a list of universal, ethical values but on avoiding ethical and scientific paternalism by guaranteeing the correctness of the social choice procedure behind the selection of plural and incommensurable values of ethical and social importance by extending participation and engagement to the decision-making". This version of VSD and the related "open framework most positive contribution" rely on "its procedural-deliberative tenets" that are "crucial to achieve ethical-democratic goals such as enhancing stakeholders' agency, positive freedom, self-determination as well as to boost transparency, legitimacy and accountability of both the ethical procedure and the chosen values and normative ideals" (cf. [31], p. 2651). Expounding further on the substantive versus procedural distinction in ethical theory and justice15 exceeds the objectives of this study, which is concerned instead with practical applications and the operationalization of the insightful assumptions of the procedural view espoused. The major practical advantages of a VSD incorporating a procedural ethics stance lie in enhancing both the explicability and the fairness of the AI-based technological innovation generated (i.e., the SOD app) and, as a result, boosting public trust in the capacity of AI to solve pressing social problems (i.e., the ability of the SOD app to assist vulnerable citizens with dementia in their daily life and to improve their safety and wellbeing).
In this vein, the peculiarity of a procedural-deliberative VSD (i.e., a participatory VSD) is that the conceptual and empirical phases are carried out simultaneously by engaging moral players (e.g., AI/technology developers, institutional promoters, end users, the public, or other relevant stakeholders) in creative, cooperative, and empowering dialogic processes (cf. [31], pp. 2654–5) aimed at selecting the foundational values informing the subsequent technological phase (see Fig. 2). The previously selected values are then embodied in AI and digital tools having both moral and practical significance, since they are built on and aligned to the inputs of end users and other potential recipients (i.e., the direct and indirect stakeholders' inputs).
Fundamentally, we maintain that the values and normative ideals underlying the creation of ethical AI and digital innovation should not be the result of abstract ethical reflections delivering a set of universal values that are then merely juxtaposed to the technical solutions adopted by AI experts external to the whole process. Instead, scientific practices mirroring democratic processes (e.g., deliberative workshops) and empirical investigations into the value preferences of a wide range of expert and non-expert stakeholders are believed to address both efficiency and vital ethical–social–democratic concerns more profitably. In the case of the SOD mHealth app, the espousal of the procedural VSD stance (instead of substantive ethics) has allowed pursuing scientific-technical and ethical–societal goals simultaneously; that is to say, pursuing safety and well-being achievements for the primary end users (CwD) and, more generally, key social objectives such as the social inclusion of disadvantaged individuals through technological progress and civic cooperation. The resulting co-design procedure is more ethical and democratic, since the values informing design practices are negotiated by way of interpersonal critical scrutiny and deliberation, while the needs of vulnerable, disadvantaged stakeholders (the worst off16) can be prioritized by design. Demonstrably, procedural values inferred from real deliberative scenarios, and not resulting from abstract moral reasoning and hypothetical deliberation, as is traditional in classical procedural ethics views (e.g., John Rawls' Theory of Justice [97]), are better suited to identify a "hierarchy of values" purposeful to "bridge the cooperative design gap" (see van de Poel [131]).
The procedural-deliberative stance behind the peculiar version of VSD defended here implies selecting design values by way of real deliberations among different stakeholder groups. The resultant ethical procedure can be not only fairer (due to the direct involvement of stakeholders), but it can also point to explicit values correlated to certain technical solutions, since AI experts are actively involved in the entire value elicitation process in order to assist participants with limited technical knowledge in taking well-informed decisions. Yet, to be sufficiently sound, this value elicitation procedure must be carried out by means of recognized empirical techniques (and by fulfilling their criteria of validity, reliability, and objectivity-impartiality). In keeping with these pragmatic procedural-deliberative tenets, in the SOD mHealth app, design values have been selected empirically by engaging local stakeholders via qualitative inquiry, and then such values have been embodied in an AI-based tool better aligned to the actual values, needs, and vested interests of end users and other potential recipients in the concrete context of its use (on "value alignment", see [41]). Such a pragmatic ethical procedure is not only more transparent and accountable, but the selected values are also legitimate, since they are really shared among participants. As a result, the AI and digital tool generated can be more explainable, trustworthy, and acceptable for all stakeholder groups involved in the design process (and likely beyond the participants in a qualitative micro-study). Thus, once implemented, the AI generated is expected to have a superior ability to enhance both individuals' well-being and the social good (and not only for the people immediately involved in the design process).
As acknowledged, extending participation to non-expert citizens (beyond promoters, designers, and ethicists) is a pending task and a major flaw, at an applied science level, of mainstream accounts in AI4SG supporting substantive ethics, which are highly questionable regarding their actual capacity to boost individual well-being or the social good in practice. Demonstrably, both goals (fostering well-being and the social good) become better attainable once less-standardized technical-digital products are generated by systematically engaging the largest possible number of stakeholder groups via (qualitative) participatory-deliberative methods. In the participatory VSD of the SOD app, the adoption of focus group-based methods is the empirical strategy used to achieve acceptable levels of transparency, accountability, and fairness in the eliciting procedures behind design values and, thus, to increase trust in the AI-based tool generated. Precisely, vital democratic desiderata have been satisfied by granting open debates among different stakeholder groups, in which conflicting values, needs, or vested interests (and related design implications) have been discussed and evaluated by concentrating on their underlying reasons and pragmatic consequences; mainly, to balance individual and social relevance and utility. In the hierarchy of values ultimately adopted in the technical design of the SOD app (Sects. 4.1 and 4.2), special prominence has been given to the opinions of participants belonging to the most vulnerable end-user group (the worst off); namely, CwD's aspiration to safety and wellbeing, but also their legitimate wish to safeguard their personal autonomy and individual freedom (as well as an acceptable level of privacy).
The major benefits of selecting design values by means of empirical methods mirroring democratic-deliberative processes lie in the fact that people's dialogic interaction takes place at every moment of the decision-making, while the accepted standards of evidence and objectivity of the empirical methods applied are satisfied as well.17 In other words, the VSD of the SOD app can be taken as a paradigmatic example of applied AI incorporating the citizen science ideal, upholding the values of liberal democracy and "interactive objectivity" [45, 46], since design values, and related design requirements, are obtained by scientific inquiry and design practices "ensuring processes of transparent critical discussion between (moral) agents" (see [13], p. 635). This social epistemological approach to objective science and social knowledge production is meant to counteract pragmatically the bias typically associated with the loss of value-freedom/neutrality in traditional approaches to applied science (see [86], pp. 1–13 and Ch. 14), and the well-known shortcomings of sociological studies of technology [79], without renouncing an ethical understanding or a moral foundation. Although an in-depth discussion of the epistemic value of deliberative democracy exceeds the scope of this study (on that, see [78]), we agree on the import of real democratic deliberations in selecting values for science and technological design and, similarly, on the relevance of liberal democratic values for the ethical design of AI and digital technologies (on that, see also Bozdag & van den Hoven [20]).
The normative view defended in this study (underlying a procedural-deliberative VSD) is not merely plausible but also becomes highly practicable, precisely by further presupposing methodical stakeholder engagement (i.e., a multi-stakeholder approach18) and close interdisciplinary teamwork. Notoriously, both aspects have (until now) been pending tasks in the ethical design of AI and technological-digital innovation, since in most "ethics-first" methods, stakeholders' involvement in establishing design values was absent or seldom successful (see [43]). Another clear advancement introduced by the VSD of the SOD mHealth app is, indeed, that a multidisciplinary expert team (i.e., philosophers, social scientists, AI/data scientists, data protection, and communication specialists) has been actively involved in its creation from the beginning and through all phases of VSD (more details in the next sections). This strategy has avoided the usual division of labor between ethicists, social scientists, technologists, and AI experts that is standard also in VSD (e.g., [136]), and thus many ethical, epistemic, and pragmatic biases have been anticipated. That is to say, both the lack of technical understanding of ethicists and social scientists and the scarce concern of AI/technical experts for the ethical problems and social consequences arising from the implementation and use of certain AI and technical-digital innovations in concrete settings have been avoided. In this way, the shortcomings of standard approaches to ethical AI and applied AI deriving from the lack of either ethical or technical understanding, often resulting either in mere a posteriori critical assessments (by ethicists) of existing AI and technologies on the basis of their uses and misuses, or in unethical technologies (not based on explicit ethical reflection and values), are circumvented.
This is achieved by applying genuine co-design practices in which ethical–social biases and misuses are prevented from the onset while preserving technical efficiency. The upshot is a functional, AI-based technology (the SOD app) that is highly sensitive to agents’ values and the context of use, since the ethical design procedure meaningfully incorporates the inputs of experts and non-experts, end users, and other potential recipients. Design values are obtained through accurate empirical investigations into the process of social value construction in concrete situations. Hence, the AI and digital innovation generated can be ethically and socially relevant and democratic, while the underlying technological design process remains rigorous but also open-ended, flexible, versatile, and largely re-applicable to other technical-digital products directed at different recipients or other settings (on that, see [31], pp. 2658–59).
To conclude, the participatory VSD of the SOD mHealth app, which is rooted in a procedural VSD entailing practical applications of participatory-deliberative methods, is the conceptual-pragmatic strategy used to cooperatively involve end users, the public, and different areas of expertise in the production of ethical AI and technological-digital innovation. Thanks to this alternative method, the gap between theory and practice, rightly depicted by recent critical appraisals as inherent to mainstream AI ethics (including the emergent fields of AI ethics by design and VSD [61, 89]) and so harmful for applied AI studies, has been re-addressed and resolved through genuine inter-/cross-disciplinary research and collaboration. Moreover, the citizen science perspective implies the application of empirical techniques to scrutinize the values, needs, and vested interests of different stakeholders in relation to the AI-based tool under construction. The next sections report the participatory VSD of the SOD app (all three phases) and further elucidate how the positive effects on explicability, (perceived) fairness, and public trust can realistically emerge: namely, from the enhanced democracy of the ethical design procedure implemented.
The VSD of the SOD mHealth app: the practice
The main phases of the participatory VSD of the SOD mHealth app (see the chart below) substantially rely on the inclusive and participative tenets of the normative view (i.e., a procedural-deliberative VSD) presented in the former section. Except for an initial conceptualization by the SDU/expert team (in collaboration with institutional promoters from the municipality of Nyborg), the VSD has been characterized by a co-design process substantially based on (focus group-based) deliberative workshops. The knowledge on values thus obtained has informed the development of a prototype of the app’s user interfaces and, to a lesser extent, has provided insights to refine the tracking algorithms. While the rescue protocol has been discussed and agreed with all groups of participants (see Appendix 1), the workshops have also been decisive in establishing which kind of information could be exchanged among the users (what information, how, and when) to legitimately trade off between personal privacy and the functionality of the app.
Design and test phases | Period | Tasks and strategies |
---|---|---|
VSD preliminary phase | Spring 2020 | The focus group design and a stakeholder analysis, aimed at establishing direct and indirect stakeholders, are carried out (also to elaborate a feasible recruitment strategy) |
VSD conceptual-empirical phase | August 2020 | Focus groups with stakeholders aim to: (a) disclose stakeholders’ values, needs, and expectations regarding the SOD tracking app; (b) figure out a suitable implementation strategy (based on stakeholders’ needs and values); (c) gain insights regarding the future development of contentious surveillance technology (to find out how to handle privacy issues and reduce controversy by increasing social utility) |
VSD technical phase | Fall 2020/2021 | The collected data have been used to develop an early draft/prototype of the user interfaces and the tracking algorithms and, likewise, to establish an agreed “rescue protocol” by prioritizing the main stakeholders’ values and needs (first iteration). Numerous user tests have been arranged to provide feedback on the prototype of the SOD app, mainly to refine the user interfaces and communication tools, including the precise text and wording displayed to end users (second iteration) |
Communication tests | April 2021 | Feedback from communication experts was used to improve the app interfaces: the displayed text, the position of buttons, and the choice of colors |
Implementation tests | June 2021 | To register end users’ impressions, an early prototype of the SOD app was shown to representatives of the three main stakeholder groups: (a) a CwD (woman, retired, 64 years old, not familiar with apps); (b) a next of kin (woman, 42 years old, familiar with apps); and (c) a potential volunteer (woman, 24 years old, student, technology savvy). The same tests were performed with institutional promoters from the municipality of Nyborg |
Installation tests | January 2022 | Three CwD, two promoters from the municipality of Nyborg, and a social worker (dementia coordinator) from the local dementia centre in Nyborg were invited to test the usability of the app’s user interfaces. The emphasis was on ease of use for people with cognitive problems |
Simulation tests | Ongoing | Selected informed participants from the main stakeholder groups are involved in “rescue missions” under controlled conditions |
Detection algorithm tests | Planned | Real-life data on the daily routines of CwD will be collected and used to tailor, select, fine-tune, and test alternative “wandering” detection algorithms. A restricted group of informed CwD will download and use the SOD app for a certain period (either with or without the automatic detection function on). The aim is to gather real data on people’s daily routines so that the original tracking algorithms, currently using synthetic data, can be adjusted accordingly (third iteration) |
Continuous assessment tests | Planned | Feedback is expected from real users who download the SOD app and have had no previous contact with the SOD project or the SDU team. This final step will help to amend bugs, further improve communication tools, and carry out the last functionality adjustments |
The conceptual-empirical phases of VSD
As anticipated, in the SOD project a multidisciplinary team has been involved from the onset in the participatory VSD of the SOD mHealth app: data scientists (7), social scientists/experimental philosophers (2), political scientists (1), data security experts (1), and communication experts (3). In contrast to usual VSD practices applying predetermined ethical frameworks, the first two phases of VSD (conceptual and empirical) have been carried out simultaneously so that the values informing technological design practices could be elicited empirically by involving local stakeholders. In the preliminary stage of VSD (Spring 2020), a stakeholder analysis was carried out, and six categories of direct and indirect stakeholders with different roles and degrees of involvement in the technological design were identified by the SDU/expert team, as follows: (a) institutional promoters (4); (b) CwD (6); (c) nurses, caregivers, and social workers, so-called dementia coordinators (DC) (4 + 2); (d) next of kin of CwD (6); (e) potential volunteers from local associations for dementia and elderly care (5 + 7); (f) GP doctors (2). Afterward (June 2020), preliminary consultations took place involving members of the SDU/expert team, institutional promoters of the SOD initiative from the municipality of Nyborg, and social workers/DC from the Tårnparken local dementia centre. In this same period, deliberative workshops engaging all the stakeholder groups above were planned, and a recruitment strategy was arranged by the SDU/expert team. The initial aim was not only to inform the technical development of the app but also to promote its implementation at the societal level (in the Danish municipality of Nyborg).
The consultation with medical experts, two GP doctors frequently working with dementia patients, took place online at the beginning of July 2020.19 The main direct and indirect stakeholders (by group) and their respective degrees of involvement in the VSD of the SOD app can be summarized as follows (see Table 1).
Table 1.

Main actors (direct and indirect stakeholders) | Methods applied and no. of participants | Role and contribution to the SOD app design | Stakeholders’ degree of involvement in VSD |
---|---|---|---|
Group A: promoters from the municipality of Nyborg, leaders of the local dementia centre, and the SDU expert team | 2 focus groups: Group 1 (10 June 2020): 4 promoters of the “smart city” initiative (at Nyborg kommune) + SDU expert team; Group 2 (19 June 2020): 2 promoters of the “smart cities” initiative + SDU expert team + leaders of Tårnparken’s dementia centre in Nyborg | Set up goals and initiatives related to the SOD project; schedule the development phases of the SOD app; draft a possible implementation strategy (to discuss with the other stakeholders) | Indirect (but powerful) stakeholders: moderate influence on the technological design and the establishment of a “rescue” protocol |
Group B: citizens with dementia (CwD) | 2 focus groups of 4 people each (3 CwD + 1 dementia coordinator as facilitator) | The most important stakeholders of the SOD tracking app; engaged to uncover their values, needs, and expectations regarding the app and its future developments (also regarding surveillance technologies more generally) | Direct stakeholders, end users of the SOD app: their inputs had a rather significant influence on the technological design phase |
Group C: staff at the local dementia centre (dementia coordinators, nurses, and caregivers) | 1 focus group interview with 4 staff members | Asked about patterns of dementia relevant to the design of the SOD app; asked for help in engaging CwD and their relatives (connected to the dementia centre) | Indirect (but very interested) stakeholders: moderate influence on the technological design phase |
Group D: next of kin (NOK) | 1 focus group with 6 people (3 men and 3 women) | Asked about and reported on CwD’s ability to use and take advantage of the SOD app; helped in designing the main functions of the app and envisioning future developments (also regarding surveillance technologies) | Direct stakeholders, end users of the SOD app: significant influence on the technological design phase and the establishment of a “rescue” protocol |
Group E: (potential) volunteers | 2 focus groups: Group 1: 5 people (3 men + 2 women); Group 2: 7 people (2 men + 5 women) | Asked about the citizenry’s willingness to volunteer with vulnerable social groups like CwD; asked about their availability to attend courses and training and to obtain certification to be accepted as volunteers in the SOD app/project | Direct stakeholders, end users of the SOD app, but with limited influence on the technological design (presupposed to be technically skilled) |
Group F: GP doctors | Expert consultation (online) with 2 GP doctors | Asked about patterns of dementia as a disease (loss of memory, wandering); gave advice on the SOD project/app communication strategy and implementation | External consultancy: limited influence on the technological design phase |

Total: 41 stakeholders (26 direct stakeholders, 13 indirect stakeholders, and 2 external consultants)
Recruitment and value-elicitation strategy
Due to the COVID-19 pandemic, both the lockdown and the rules restricting the number of people (5–10) allowed at in-person meetings, and to guarantee the safest possible environment, participants were recruited through the patients–relatives network of the local dementia centre. First, CwD with mild-to-moderate forms of dementia and their relatives were invited to participate in the planned workshops by their dementia coordinators (DC). When a citizen in Nyborg is diagnosed with dementia, he or she is offered an association with a DC, who provides aid, assistance, and supervision to the citizen and their next of kin. The involvement of the DC has been crucial for recruiting participants from this sensitive target group. DC have in-depth knowledge of local CwD, their stage of illness, and their actual ability to contribute to the technological design of the app, and their participation has increased trust in the SOD project and in the utility of the SOD app. Second, representatives of the DC, nurses, and caregivers from the staff of the local dementia centre participated on the basis of the work shifts scheduled on the days the workshops were held. Regarding potential volunteers, and again due to COVID-19 restrictions, the prospect of inviting the whole local community via newspapers or social media was ruled out. Yet, to ensure a comprehensive representation of different social groups (e.g., young people, seniors, the disabled, and people already volunteering with the elderly), an invitation was sent out by the municipality of Nyborg in official forums targeting these categories and the related associations. The participants in the volunteer workshops responded spontaneously to this newsletter.
Although at this point of the research they were not asked to commit to becoming volunteers in the SOD project, they agreed to provide feedback on the SOD app concept and to share their impressions and opinions both on the initial design ideas and on controversial surveillance technologies more generally. Specifically, the participants in the volunteer workshops comprised: (first group) two members of the local “youth council” (trans. from Danish), two volunteers at the local care centre “Svanedammens Venner”, and three persons often volunteering in activities for elderly people at the local “Venner Club”; and (second group) two members of the local “senior council” (trans. from Danish), two members of the local “disability council” (trans. from Danish), and two members of the local “Alzheimerforeningen” (Alzheimer Association) in the municipality of Nyborg.
Before every focus group discussion, an explanation (about 15–20 min) of the SOD project’s main goals and the technical workings of the SOD app was provided to all group participants by the SDU expert team: a data scientist managing and coordinating the various parts of the SOD project and a “knowledge facilitator” conducting the workshops in Danish. It was explained that the only prerequisites for downloading and using the app are the ability to use a smartphone and some familiarity with the use of apps (so a wide public of potential users is estimated). At that time, no prototype of the app was available to show participants. This was deliberate, since the main purpose of the focus groups was to uncover different stakeholders’ attitudes toward surveillance technologies, such as the SOD tracking app, and to elicit participants’ values, needs, and expectations to inform the subsequent technical phase of VSD (see Sect. 5). Even so, a set of initial design proposals, mainly concerning the help function, including the user interfaces, was discussed with participants. In view of developing not only the SOD mHealth app but also better AI-based surveillance technologies in the future, participants were invited to share their impressions of the main functions of the app, including the tracking device (based on algorithms). The prototype was developed by concentrating on participants’ inputs (first VSD iteration), and further feedback was obtained in the subsequent user tests with selected participants (second VSD iteration). Accordingly, SDU data scientists refined the initial design ideas and the related technological solutions, to adjust them as much as possible to the features, values, and needs of the main end users (CwD) and the other potential recipients. As shown in the chart above (this paper, Sect. 4), a third iteration of VSD is planned.
During the workshops, participants belonging to the main stakeholder group (CwD) were invited to become leading actors in the future steps of the SOD project and the app’s development. Indeed, further upgrades of the algorithms detecting wandering behaviors (see [6]) entail the use of collected, real data on people’s daily routines as a substitute for the synthetic data currently used (more details in Sect. 5). Participants will download the app onto their smartphones and will be tracked; the data collected on their daily routes will be stored on a server computer (at SDU) and used to customize the detection algorithms, making them more sensitive to the features of real agents and the specific context of use.
Focus groups (methodological issues)
The data collection informing the VSD of the SOD app was carried out on 18–20 August 2020 by means of qualitative (semi-structured) focus group interviews (see [10, 11, 81]), lasting between 60 and 90 minutes, held at the Tårnparken dementia centre in Nyborg. In keeping with the so-called interactive design popular in human–technology interaction (see [118], Ch. 2 and 7), a qualitative approach was chosen to frame the ethical design (the VSD of the SOD app) and to extract the information about values needed to make technical choices and develop technical solutions accordingly. A qualitative research design is often considered crucial to inform participatory design practices (see [119], Ch. 1, 5, 6, and 7) involving stakeholders, end users, and potential recipients of AI and technological-digital innovation. In fact, “information-rich” knowledge and, likewise, “process-related meanings” (see [34, 93]) are required to properly inform technologists and data scientists about the fundamental patterns of agents and contexts, including their plural values and specific needs. This applied strategy has been crucial to extract a coherent set of values meaningfully obtained through the dialogic interaction among participants in the various focus groups with different stakeholders; thus, once embodied, such values are expected to deliver a technological-digital product (the SOD app) truly aligned with end users and the concrete context of use.
Compared to individual interviews, focus group-based workshops are certainly more difficult to arrange because of the need to agree on essential practicalities (time, date, location, and so on). Nonetheless, this participatory-deliberative methodology was preferred for its well-known ability to create significant group dynamics and facilitate the collection of genuine social/group knowledge [10, 11, 73]. This approach was considered better suited to inform the subsequent technological phase of VSD in a genuine citizen science perspective. As seen in Sect. 3, the adoption of a procedural-deliberative VSD as the normative view implies the use of participatory-deliberative methods, which allow participants to directly challenge and trade off their values, needs, or vested interests in order to achieve authentic collective agreements on the aims and desiderata of the AI and digital technology under construction. Popular survey-based techniques exemplify a methodology with an alternative working rationale, in which individual views, including views on values, are merely sum-aggregated to obtain a collective standpoint20; that is, the hierarchy of values obtained by surveys is not actually discussed by group participants, who merely fill out a questionnaire in isolation. As acknowledged, respondents are more accurate in their answers when placed in a dialogic situation, since such scenarios permit an immediate, intersubjective validation of values and aims among respondents or between respondents and the researcher (see [77], pp. 242–243). Along the same lines, individual interviews were also considered inadequate for the main design tasks underlying the SOD app, precisely because of the absence of interpersonal confrontation and interaction among group participants when selecting design values and identifying the main goals of the SOD app in relation to the different needs and expectations of end users and other potential recipients.
More importantly, technological design practices demand meaning-rich discursive data, which can barely be obtained through the rigid structure of a survey. Also, surveys are of doubtful applicability in the case of cognitively impaired people, who might be unable to fill out a questionnaire: this is the main reason why large population studies targeting people with dementia regularly focus on relatives, nurses, or doctors on their behalf [42].21 In the specific case of the deliberative workshops informing the VSD of the SOD app, both the knowledge facilitator and the data science expert involved in all workshops could provide technical explanations and additional background knowledge on the functioning of the app and the implementation process to all participants, including challenging CwD. Thanks to this strategy, value conflicts and value dilemmas were instantly addressed and resolved, in order to envision which specific ethical principles or moral standards (correlated with specific technical solutions) could be acceptable to all stakeholder groups. By applying focus group-based deliberative workshops, there can be a genuine co-creation of values and thus co-design, co-development, and co-production of AI and technological-digital innovation based on truly social and democratic values. Therefore, the adoption of group-based deliberative techniques is pivotal to giving concrete and unambiguous indications to experts/data scientists on specific embodied technical-digital solutions adjusted to the (self-chosen) values of all the stakeholder groups involved. The result is a digital product like the SOD mHealth app that, once implemented, can really enhance the wellbeing and safety of the most vulnerable final users (CwD), whose values and needs have been prioritized by design.
Explicitly, the technical functions of the SOD app have been conceived with the main purpose of removing the obstacles that dementia imposes on vulnerable citizens (CwD) and improving their wellbeing and quality of life (and that of their families) through civic solidarity and cooperation among the residents of Nyborg.
In practice, and given the specific objectives of this study (the technological design of a digital product), a traditional qualitative inquiry, either concentrating on the demographics of participants (apart from very general features like gender or estimated age) or specifying the forms of dementia suffered by participants, was considered redundant. Owing to the close collaboration with the local dementia centre (and the municipality), we were confident about the relevance of the participants for our objectives, and, also to foster the participation of vulnerable CwD, we decided to keep the recruitment process as unintrusive as possible. Still, an informed consent form and an explicit authorization for data use reporting their essential demographic information were signed by all participants. The data on the values, needs, and expectations of the diverse stakeholders that inform the VSD of the SOD mHealth app have been extracted by means of a qualitative content analysis focusing on meanings (see [63, 111]). The group sessions were video and audio recorded, transcribed by a student assistant, and then coded manually by the two SDU team experts in social science and experimental philosophy. The raw data have been analyzed according to the main coding categories below to identify:
the values and needs of users, and thus the expected utility and social impact of the app (for different stakeholder groups);
the behavioral aspects of dementia (e.g., disorientation and loss of memory) relevant to the design of the app, and likewise all stakeholder groups' attitudes toward tracking devices and surveillance technologies targeting people with dementia;
the role of volunteers and degree of acceptance of their contribution by other stakeholders (mainly, CwD and relatives) with an emphasis on (a) their motivations for volunteering with CwD, (b) previous experience (if any) in volunteering with CwD, and (c) potential volunteers’ willingness to receive training to properly do so;
the suggestions of different stakeholder groups on how to deal with the controversial issues inherent to surveillance technologies in the SOD mHealth app and tracking devices in general; namely, how to make these technologies more sensitive to end users’ needs and values and how to solve the trade-off between privacy and safety by enhancing individual and social utility.
These broad coding categories were chosen to represent, and concentrate on, the information immediately relevant to the development of the main components of the app, such as the help component (see Sect. 2), including the user interfaces and the detection algorithms (more details in Sect. 5), and to obtain participants’ opinions regarding the main purpose of the SOD app (i.e., enhancing the safety and wellbeing of vulnerable individuals by strengthening social solidarity between citizens). Moreover, real-time written notes on participants’ suggestions for further improving the app were taken during the various workshops by the data scientist responsible for the SOD project, who introduced the original concept behind the SOD app during all the sessions (the suggestions not incorporated into the SOD app, mainly for budget constraints, are reported in the conclusions). It is important to note that although we started with a deductive, concept-driven coding approach (i.e., a predefined set of coding categories, see [17]), manual coding allowed us to be more flexible and to incorporate the extra information that emerged during the sessions (e.g., participants’ emotions, desires, and the like) which does not fall into the predefined categories above but is still relevant for the VSD of the SOD app.
Values (by group) underlying the SOD mHealth app
Due to space limitations, only the main empirical findings from the focus groups with stakeholders are reported here, focusing on their explicit impact on the main technical solutions adopted in the SOD mHealth app.
First of all, increasing safety and secure mobility, together with the personal wellbeing of CwD, are the values with the highest priority for all groups of participants. Considerable importance has also been given to relieving the burden that dementia places on relatives and close friends (who declared that they feel alone in this duty). These values are fulfilled in the SOD app concept by designing an AI-based tracking device targeting CwD that involves the local community in providing help as soon as they get lost. In the words of the promoters from the municipality of Nyborg, in the SOD mHealth app/project “the focus are CwD first (…) co-creating together with other citizens a meaningful life, value and health for people (…) in order to live a worthy aging”. As usual in citizen science projects, the main goal of “smart city” initiatives is to develop technological-digital solutions that can foster the local community’s active participation in activities and public services tailored to the special needs of vulnerable citizens; here, social inclusion and the empowerment of disadvantaged individuals, with a view to enhancing their wellbeing and quality of life and that of their families, are the aims/values strongly pursued.
Mainly, the DC and staff at the local dementia centre were questioned on the patterns of dementia as a disease and encouraged to share their opinion on the expected ability of CwD to use apps and technology in general (on the basis of their experience working with them). To provide precise information to data scientists about how to make the app more functional and usable, the same inquiry was directed to CwD and their next of kin (NOK) as well. In brief, although CwD seem not very familiar with technology and apps, all those participating in the workshops declared that they could count on the help of relatives and friends to download and activate the SOD mHealth app (when ready). Even if not very proficient with technology and apps, CwD are the main recipients and end users of the SOD app concept; thus, this category of stakeholders could not be excluded from the VSD. In keeping with the participatory rationale behind the SOD app/project, and to shape the functionalities of the embodied tool so as to circumvent, as much as possible, the cognitive and practical difficulties of people living with dementia (CwD), their participation has been considered fundamental from both an ethical and a technical-practical standpoint. As seen before, the co-design approach has been considered central to developing a prototype meaningfully expressing the values and special needs of this challenging category of users and to promoting the app’s usage on the basis of its practical utility (likely beyond the participants in this micro-study).
Consistently, promoting the actual usage of the SOD app, by explaining its functioning and benefits to all of its main recipients, has been seen as another fundamental research task within the SOD project. While the co-design approach adopted seemed pivotal for the good design of AI and technological-digital innovation targeting vulnerable individuals, the direct involvement of CwD from the onset of the project has likewise been very important for achieving technical efficiency (a vital scientific value) and thus for promoting the acceptance of the app at the societal level (by stressing its helpfulness). One of the main concerns of the SDU expert team was, in fact, whether these users would welcome such a technology or whether factors such as difficult usage, losses of privacy, and other kinds of AI-aversion reasons would dominate and lead to the rejection of the SOD app. Yet the positive reception the SOD app received in the workshops seemed mainly motivated by the wish of CwD to relieve the burden on their relatives and, equally, by the hope of preserving their freedom of movement (more details below). Visibly, the interaction between researchers and end users in the participatory design workshops has been decisive in communicating the positive properties of the tool, and it may well help outweigh the risk of rejection often implicit in similar surveillance technologies. The workshops have also striven to contribute to creating a positive culture around the SOD app (and AI in general) with a view to fostering its actual usage by participants (and beyond).
The CwD of the local dementia centre in Nyborg (including the CwD participating in these interviews) will have the privilege of being the first to use the app (once ready). Thus, the differences between the tracking device currently in use at the centre (a very basic surveillance tool) and the SOD app (an AI-based system able to automatically detect wandering and trigger an alert) were elucidated to the DC and nurses of the centre who assist CwD on a regular basis. With a view to further enhancing the usability (for end users) and efficacy of the app, and to indicate some possible future developments, the main recommendation coming from the DC, but common to all groups, was to have the tracking device incorporated into a smartwatch instead of a smartphone (due to budget constraints, this option was ruled out). Other suggestions for improving the SOD app coming from the DC group were: (1) to give CwD the possibility of sharing a caption/headline with their relatives before leaving home (e.g., “I am leaving home” or “I am going to the supermarket”); (2) to add a function asking CwD “are you lost?” and giving them the possibility to answer “yes” or “no”. While the second option has been implemented as the default behavior under the automatic detection function, the first is planned but not yet available. It will be implemented in the app as a voluntary automatic feature able to learn stay places (i.e., home, the office, the dementia centre, and the like) and, once activated, it will announce to relatives when these places are left. Due to obvious privacy concerns, this functionality must be activated deliberately (by CwD).
Along the same lines, while the DC and relatives would have preferred that the activation/deactivation functions of the app were not fully available to CwD (to prevent the tracking functions from being deactivated on purpose), in the SOD app priority has been given to the wish of CwD to maintain acceptable degrees of autonomy, self-determination, and control over their own choices (“being in charge”). That is, CwD are entitled/allowed to declare themselves inactive. Their approval is required to activate the tracking functions, and likewise, when the automatic detection alert is triggered, CwD are asked to clarify whether they are actually lost: a text message asks “are you lost?” and they must answer “yes” or “no” within a few moments. This happens before activating volunteer citizens (in Danish called “vejviser”, i.e., guides) and starting a “rescue mission”, which entails that the actual position/location of the CwD becomes visible on a map on the smartphones of both the relatives and the three nearest volunteers. This technical decision has been taken since CwD demonstrated little concern for their own privacy22 and a high level of self-awareness regarding their own condition. Hence, there is a high probability that they will keep the tracking mechanism activated when going out, since they are deeply conscious of the possibility of being, sooner or later, in need of help and assistance from a stranger (also in view of a worsening of their dementia over time).
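The escalation logic just described—asking the CwD to confirm a “lost” status within a short window before any relative or volunteer is involved—can be sketched in a few lines. This is a minimal illustration only: the function names (`handle_anomaly_alert`, `ask_user`, `escalate`) and the timeout value are our assumptions, not the SOD app’s actual API.

```python
# Hypothetical sketch of the confirmation step that precedes any rescue mission.
# Names and the timeout value are illustrative assumptions, not the SOD code.

def handle_anomaly_alert(ask_user, escalate, timeout_s=30):
    """Ask the CwD to confirm being lost before escalating.

    ask_user(prompt, timeout_s) -> "yes" | "no" | None (no answer in time)
    escalate() starts the rescue mission (position shared with relatives
    and the nearest volunteers).
    """
    answer = ask_user("Are you lost?", timeout_s)
    if answer == "no":
        return "dismissed"  # false alarm: nothing is shared with anyone
    # "yes", or no answer at all, are both treated as a possible emergency
    escalate()
    return "escalated"
```

Treating silence as a possible emergency (rather than as a dismissal) matches the safety-first priority agreed with stakeholders: a confused person may be unable to answer at all.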
“I have dementia, so I will benefit from this too in the long run (…). Now, I have no problem in going around but I am aware that my illness means that is the way it is headed. And then, I think I would be extremely happy to have it (the app), because I like going out. Otherwise, I will be trapped at home” (CwD, age 65–70, male, no former experience with getting lost).
To make CwD more conscious of the safety flaws arising when the device is inactive (if inactive, the user’s location is not received by the server computer at SDU), a reminder is sent to their devices in the form of a message at regular intervals. Additionally, to further encourage the maintenance of an active status, another function has been included allowing CwD to delete the locations visited on certain days. As a result, most of them agreed on the valuable contribution of tracking devices such as the SOD app in case of wandering behaviors and declared that they would use the app once it is ready to be downloaded. The CwD involved also expressed a strong willingness to participate in future steps of the SOD project aimed at refining the tracking algorithms underlying the app: to increase the precision of the app in detecting wandering behaviors and its ability to look after the safety of other CwD (full details in Sect. 5).
“Well, if I can help make life better for people with dementia, I would really like to be part of that” (CwD, age 65–70, female, former experience with getting lost).
It is important to remark that the positive attitudes of participants (especially CwD) regarding the SOD app clearly evolved during the workshops, as additional information was provided and their initial doubts were clarified and addressed in the group discussions. For example, when CwD were told that they are allowed to enter their personal data themselves and to delete the information registered on their daily routes at any moment (both stored by the server computer at SDU), their trust in the whole project and their willingness to contribute to its goals seem to have increased considerably. From the point of view of the data scientists (highly concerned with both privacy and functionality), a major challenge has been to deliver an AI-based tracking tool, substantially based on continuous data flows from smartphones to other devices, that can be efficient while preventing unwanted personal data leakages. Following the existing regulation on data protection in Denmark,23 the personal data of CwD are stored under their explicit consent (requested when downloading the app). Yet, on the advice of the DC and relatives, the amount of personal information that CwD can provide via the SOD app has been reduced to a minimum, since CwD are known to overestimate their skills (and this can be misleading for volunteers involved in a rescue mission). Neither relatives nor CwD are allowed to know the identity of the volunteers in advance (and vice versa) until the latter agree to participate in a rescue mission. In this case, the current location of the volunteers is transmitted to the server system (but not stored) and their personal profiles become fully available to the relatives (but only once a mission is accepted). Volunteers can also write personal information in their profile as free text (similarly to a social media app).24
The role of volunteers, while widely appreciated (by the DC, next of kin/NOK, and CwD), has been limited in the SOD app by two different measures: (1) when downloading the app, an access code is required to become active as a volunteer. The code is released by the municipality of Nyborg on the basis of former training (e.g., an introductory course) or an interview testing qualifications or skills for volunteering with vulnerable individuals like CwD. Also, (2) NOK are given total control over rescue missions (i.e., the privilege to start and close missions once a safe place is reached, or to interrupt missions if they realize that a CwD is not actually lost but simply visiting an unusual place). This policy has been adopted in response to the doubts of both families and the DC about the capacity of average citizens to deal properly with cognitively challenged people in situations like “wandering”, where they might well be confused and disoriented (or even violent). Potential volunteers appear to be strongly motivated by values such as social solidarity25 towards disadvantaged and vulnerable individuals. Even so, they proved to be concerned about how CwD could react when wandering and approached by a stranger, and in fact, they seemed very receptive to the possibility of receiving specialized training. Thus, the overall policy adopted in the SOD app can satisfy different stakeholders’ needs and worries (and likely encourage more people to download the app or to volunteer). To conclude, the danger of screening out too many potential volunteers has been carefully explored, and the possibility of involving volunteers gradually, by allowing them to take more responsibility over time (e.g., a sort of reward system), has been discussed. Even if these options seemed appreciated by both relatives and the DC, the proposals have been considered impracticable, and so, they have been discarded.
One reason is that promoting a subset of volunteers could discourage others from entering the system. Most importantly, volunteers are expected to help in every circumstance of risk: if there is only one volunteer close to the lost person and/or reacting to the request for help, he/she must go (no matter what). Thus, it has been considered detrimental to the safety of CwD (the main goal of the SOD app) to have volunteers with different degrees of responsibility and/or limited power of action.
Table 2 below outlines the full hierarchy of values (by group) extrapolated and how they have been embodied in the SOD mHealth app. The table does not include the specific values of the SDU team, which will be made explicit in the next section when addressing the technological phase of VSD.
Table 2.
Hierarchy of values (by group) inferred by the participatory VSD

| Stakeholder group | Values (elicited via focus groups) | (Aligned) technical solutions adopted |
|---|---|---|
| Group A — promoters from the municipality of Nyborg and SDU expert team | Increase safety of CwD; increase wellbeing and quality of life of vulnerable citizens by involving the whole local community; relieve the burden caused by dementia on affected citizens and their families; enhance agency, freedom, and self-determination of vulnerable citizens (CwD); empower vulnerable citizens (CwD) | Overall, develop an AI-based tracking tool (in the form of a smartphone app) to help CwD who get lost connect with relatives, also with the help of local volunteers; becoming active as a volunteer in the app requires an access code released by the municipality (after training or a skill test certifying the ability to volunteer with challenging individuals like CwD) |
| Group B — citizens with dementia (CwD) | Keep control over decisions and daily activities; keep a certain autonomy and self-determination; maintain a certain independence from external help; increase their own safety (and wellbeing); little concern for their own privacy; strong desire to relieve the burden on families and friends | CwD are given total control over activating/deactivating functions of the app (accessible to all users); the explicit consent of CwD is required to activate the tracking functions; once the automatic detection triggers an alert, CwD must confirm their “lost” status, and a video call with relatives is established before volunteers are activated (and the position of the CwD becomes visible on their smartphones); CwD are allowed to delete the data on their position stored by the app on given days; the profiles of CwD contain little personal information and can be deleted by them at any time (this option is available to all end users) |
| Group C — staff of the local dementia centre (dementia coordinators, nurses, and caregivers) | Increase safety of CwD; promote individual wellbeing of CwD; protect individual freedom and self-determination of CwD; minimize threats to the privacy of CwD; reduce the workload on the local dementia centre | Rescue functions are not directly connected to the dementia centre and its staff; to address the privacy issues of CwD, the dementia centre is kept as a possible “safe place” in rescue missions (when it is the location closest to the lost person) |
| Group D — next of kin (NOK) | Improve the usability of the SOD app for CwD; give primacy to the safety of CwD over their privacy; relieve the burden and responsibility on NOK for the daily care of CwD; allow freedom of movement of CwD while increasing the relatives’ perception of security (via the tracking function) | NOK are assigned the privilege of coordinating and closing rescue missions (when a lost CwD reaches a safe place); NOK can be informed by a text message when a CwD leaves a “stay place” (home, office, dementia centre) |
| Group E — volunteers (V) | Social solidarity; responsibility towards CwD; primacy of the safety and wellbeing of CwD over privacy concerns; desire to help foster the independence, personal freedom, and autonomy of CwD; desire to help relieve the burden on NOK; collaborative attitude towards lowering the risk of adverse reactions by CwD when rescued (by taking specific training for volunteering) | The profiles of lost CwD become visible to V only when a rescue mission is accepted (so as not to share the personal data of CwD unnecessarily); V can make phone calls to NOK during rescue missions; the location data of V are not stored by the app, and their personal profiles (written by them) can be deleted at any time |
The technical phase of the VSD of the SOD mHealth app
The technical information provided in this section relies substantially on the contributions by [5–7]. This content is reinterpreted here in the light of the values extrapolated from the conceptual–empirical phases of the participatory VSD previously described. This section aims to further elucidate how particular values have been prioritized—by design—to adopt technical-digital solutions that could satisfy in primis the needs of CwD; explicitly, to achieve enhancements in safety but also in personal autonomy and well-being (largely intended as the empowerment of vulnerable and disadvantaged individuals through the usage of technological-digital innovation), while being respectful of the opinions expressed by all stakeholder groups involved in the co-design process (and while following the regulation on sensitive data usage and storage and fulfilling other data privacy concerns).
To begin with, a fair amount of research on digital health, eHealth, and mHealth applications for dementia patients and their caregivers has recently been conducted (see [23, 72, 142]). Some of the most identifiable symptoms of dementia are forgetfulness and memory loss, and thus behaviors such as “wandering” and getting lost have been considered the most probable risks to people’s safety. Accordingly, quite a few mHealth apps developed in recent times take the form of tracking devices monitoring the location of people with dementia (see [3, 4, 62, 64, 116, 122, 123, 137]). Among existing mHealth apps and devices with a functionality for tracking CwD’s current location, three also include an alert system for caregivers and relatives (see [62, 123, 137]). In comparison to existing tools, the peculiarity of the SOD mHealth app lies in going beyond the mere monitoring of the location of the lost person/CwD: once a wandering behaviour is automatically detected, a rescue mission involving relatives and neighboring volunteers is triggered and thoroughly coordinated via its technical functionalities.
In taking the main technical decisions, and in developing technical solutions in keeping with the overall objectives of the SOD project, the efforts of the SDU/expert team concentrated on realizing the essential strategic purposes of an mHealth app, such as the surveillance ability,26 while mitigating as much as possible the well-known negative effects of tracking tools, which increase the safety of recipients by inevitably infringing their privacy.27 Major critical issues are avoided, precisely, by the inclusive and participative VSD practices engaging end users (direct stakeholders) and other potential recipients (indirect stakeholders) in the shaping of the (AI-based) functions of the app. This applied AI strategy can secure the fulfillment of vital ethical–social–democratic values such as transparency and accountability,28 since, palpably, these conditions are necessary for explicability and fairness and for boosting public trust in both design procedures and outputs. In this vein, the SOD app can be considered an advance with respect to the bias regarding foundational values related to (sensor-based assistive) technologies for dementia, another antecedent of the SOD app, which are often described in terms of their intrinsic deficits for the privacy and self-esteem of recipients—engendered by the scarce accountability of design procedures and the consequent limited trust in the emancipatory ability of the devices so generated (see [37]).
To actually embody the values mentioned above, the technological-digital infrastructure underlying the SOD app has been organized into a front-end system (the software running on the users’ smartphones to implement the user interfaces; it entails the gathering and transmission of personal data) and a back-end system (the computer hardware and the software running on a remote server, accessing the stored data as well as steering all operations and carrying out all computational tasks). On the one hand, the development of the front-end relies substantially on the application of participatory VSD methods, with the qualitative data on values extrapolated throughout its various phases, including later user tests, directly informing the structure of the user interfaces. On the other hand, the back-end, a computer server hosted at SDU wherein identity and tracking data are processed and stored and rescue missions are steered, is built on a microservices and serverless architecture widely acknowledged for facilitating the flexibility of implementation and the maintainability of the infrastructure, as well as the reliability and scalability of its performance (see [5, 99, 100]). This combination of scientific and ethical–social values is crucial within a VSD scheme, since the back-end is the central technical component in the legal framework for handling sensitive data (see footnotes 23 and 24). Here, the full names, birthdates, and telephone numbers of all users are stored. Nonetheless, location data are saved only for CwD: to allow the app to calculate the deviation from their usual routes and trigger an alert. Moreover, to address serious privacy concerns, when downloading the app all users (of all user groups) are asked for consent to the further processing of their personal data (and they are informed that their data can be deleted at any time).
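The per-role storage policy described above (location history persisted only for CwD and deletable by day; volunteer positions used transiently but never stored) can be illustrated with a minimal sketch. The class and field names here are our assumptions for illustration, not the SOD back-end’s actual data model.

```python
# Illustrative sketch of the per-role storage policy described in the text.
# Class, role, and field names are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class LocationStore:
    """Persist location points only for CwD; volunteers' and relatives'
    positions are used transiently (e.g., to find nearby helpers) but
    are never written to the history."""
    history: dict = field(default_factory=dict)  # user_id -> [(lat, lon, day)]

    def ingest(self, user_id, role, lat, lon, day):
        if role != "cwd":
            return  # volunteers'/relatives' locations are not persisted
        self.history.setdefault(user_id, []).append((lat, lon, day))

    def delete_day(self, user_id, day):
        # CwD may delete the locations visited on a given day at any time
        points = self.history.get(user_id, [])
        self.history[user_id] = [p for p in points if p[2] != day]
```

The design choice—filtering at the point of ingestion rather than deleting later—means non-CwD location data never reaches storage at all, which is the stronger privacy guarantee the text describes.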
As noted in the former section (see Table 2), the specific values identified by means of VSD methods (focus group-based deliberative workshops with relevant stakeholders) have been crucial to establish the precise features of (a) the components of the SOD mHealth app and (b) the communication means related to the front-end device, as follows:
Components: although the app contains three main components (i.e., a knowledge bank, the help component, and a recreational activities calendar; see Sect. 2), the main focus of the participatory VSD has been on the help component. The help component is likely the most important element of the SOD app, since it encapsulates its main functionalities. In a nutshell, the help component consists of an alarm that can be triggered by a CwD, by a relative, or by an AI engine able to automatically detect a wandering behavior. Depending on the type of user (relatives, CwD, or volunteers), this component is displayed differently. For relatives, this element gives the possibility of localizing their own CwD and starting a video call (e.g., if they are doubtful about their actual situation: “lost” or “not lost”). For CwD, this component allows triggering an alarm when they feel lost or disoriented. Finally, for volunteers, this component lets them become active and available to be contacted in case an alarm is triggered in their proximity. When the alarm is triggered, the rescue mission starts and progresses by following the protocol agreed with all stakeholders, which is steered by the software engine hosted on the back-end server (at SDU). According to the established protocol (see Appendix 1), the three nearest volunteers are activated while also keeping the relatives in the loop.
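Selecting the three nearest active volunteers, as the protocol prescribes, amounts to ranking candidates by great-circle distance from the lost person. The following sketch uses the haversine formula; the data layout and function names are our illustrative assumptions, not the SOD back-end’s code.

```python
# Sketch of selecting the three nearest active volunteers to a lost CwD.
# The haversine distance and the data layout are illustrative assumptions.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest_volunteers(lost_pos, volunteers, k=3):
    """Return the k active volunteers closest to the lost person's position.

    volunteers: list of (volunteer_id, (lat, lon), is_active)
    """
    active = [(vid, pos) for vid, pos, ok in volunteers if ok]
    ranked = sorted(active, key=lambda v: haversine_km(*lost_pos, *v[1]))
    return [vid for vid, _ in ranked[:k]]
```

Inactive volunteers (those who have declared themselves unavailable) are filtered out before ranking, mirroring the app’s rule that volunteers must be active to be contacted.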
Communication means: the impact of the participatory VSD on the user interfaces of the SOD app has been substantial. Everything from colors to font, including the precise wording of the information displayed, has been discussed with stakeholders; initially, the promoters from the municipality of Nyborg and communication experts agreed on a proposal, and then the same terminology has been used in the presentations at the user tests with the other stakeholder groups: to assess its communicative efficacy. Accordingly, the preliminary shape of the user interfaces underwent several changes because of the inputs collected in the various workshops carried out, including after the VSD conceptual–empirical phase, by re-engaging representatives of the main stakeholder groups (mainly CwD). The resulting look of the main screenshots of the SOD app, containing its main functions, is shown in Fig. 3.
With regard to the features of the tracking algorithms adopted in the SOD mHealth app, the design and testing of AI and related algorithms for anomaly detection based on spatio-temporal data is original and rather innovative [5, 7]. From a technical standpoint, the automatic detection function of the SOD mHealth app is performed on the back-end computer server by the AI-based engine for spatial–temporal anomaly detection, based on recorded location data specific to every CwD who has downloaded the app. Essentially, the detection engine works on past and real-time location data from every CwD, transmitted by the device, and relies on personalized models. The engine itself consists of two components: a periodic data processing pipeline and a real-time pipeline. While the main task of the periodic component is to extract the distinctive patterns in the historical movements of CwD from continuously incoming location data points, the real-time component compares the latest trajectories with the patterns of the most frequent routes undertaken by CwD (which are stored by the engine) (Fig. 4).
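The two-pipeline idea can be conveyed with a highly simplified sketch: a periodic component distils frequently visited places from historical location data, and a real-time component flags trajectories that stray from them. The grid discretisation, thresholds, and function names below are our illustrative assumptions; the actual SOD engine uses personalized models of a different, more sophisticated kind [5, 7].

```python
# Highly simplified sketch of the two-pipeline detection idea in the text.
# Grid discretisation and thresholds are illustrative assumptions only.

from collections import Counter

def to_cell(lat, lon, step=0.001):
    """Discretise a GPS point onto a coarse grid cell (~100 m at this step)."""
    return (round(lat / step), round(lon / step))

def frequent_cells(history, min_visits=3, step=0.001):
    """Periodic pipeline: keep grid cells visited at least min_visits times."""
    counts = Counter(to_cell(lat, lon, step) for lat, lon in history)
    return {cell for cell, n in counts.items() if n >= min_visits}

def is_anomalous(trajectory, known_cells, max_unknown_ratio=0.5, step=0.001):
    """Real-time pipeline: flag a trajectory if too many of its points fall
    outside the person's known, frequently visited cells."""
    if not trajectory:
        return False
    unknown = sum(1 for lat, lon in trajectory
                  if to_cell(lat, lon, step) not in known_cells)
    return unknown / len(trajectory) > max_unknown_ratio
```

The `max_unknown_ratio` parameter plays the role of the sensitivity threshold discussed below: set too low, the app pesters users with false “are you lost?” prompts; set too high, it misses genuine wandering episodes.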
At present, the “wandering” automatic detection engine has only been tested on synthetically generated data, and the results have been compared with alternative approaches in the literature [6]. Further, the whole architecture has undergone synthetic load tests to assess scalability and reliability [6]. Notoriously, synthetic data necessarily contain very few types of anomalous patterns; thus, due to the complexity entailed in accounting for all possible deviation cases, they are a biased basis for testing personalized approaches (essential in the rationale of the SOD app). Another important issue is the sensitivity of the detection algorithms and, consequently, the accuracy of the alarm function. In particular, it is important to determine the right threshold for anomalous patterns for the app: when to ask CwD about their status (“are you lost?”) and then trigger an alert to relatives and volunteers. Another important task is to establish how many false-positive detections a user can bear before becoming irritated and deactivating the app. Up to now, user tests in protected environments have been carried out at precise stages of the VSD to refine the prototype of the SOD app; namely, by showing the user interface functions to selected stakeholders, who have been asked about different issues (look, functions, how intuitive the working logic of the app was, and how its usability could be improved). Similarly, the SOD app is in the process of being launched for a restricted set of users: to closely monitor its performance in realistic settings. The next step will be the assessment of simulated rescue mission episodes and, subsequently, the arrangement of real-life rescue mission episodes. Both strategies will serve not only to ascertain the correct functioning of the software, but will also help the data scientists in further testing the procedure behind the detection algorithms elaborated at earlier stages.
In real-life rescue missions, the re-engagement and active participation of CwD, including those not previously involved in the initial VSD, is planned. Participants from the main stakeholder group (CwD) will be tracked, and their position data and daily routes will be stored by the app. The main purpose is to discover new patterns and indicators of disoriented states to be used for enriching the database underlying the tracking/detection algorithms in real contexts of use. These activities (or sessions of human–technology interaction) are aimed at refining the SOD app to boost the trust of potential users in its ability to detect (surveillance) when a CwD is actually lost, before warning relatives and volunteers (and infringing a person’s privacy).
The concerns above are closely related to the degree of acceptability of the SOD app on the basis not only of its embodied foundational values (surely remarkable) but also of its performance from a technical standpoint. They may be better addressed once the inclusive and participative VSD approach is extended to the entire design procedure, now targeting the tracking/detection algorithms. This plan is intended to respond to insistent pleas in applied AI for adopting design strategies with a dialogic rationale, such as “dialogic guidelines” [60], to help experts in the implementation of co-design procedures better suited to deliver truly “explainable” user interfaces, and similarly, to advance the accuracy/sensitivity of the algorithms generated in view of “trustworthiness” [59–61]. The gaps between the design and operation of algorithms and our understanding of their ethical implications (on that, see [88]) are expected to be minimized considerably when adopting the co-design methodology (i.e., the participatory VSD) outlined in this study.
To our knowledge, this will be the first time that a participatory VSD actively involving stakeholders, end users, and other potential recipients of AI and technological innovation is applied to the design of algorithms. The method outlined can generate genuinely human-centered algorithms (on that, see Bader & Kaiser [8]) that are more ethical and democratic but also technologically efficient. This requires, however, that the digital infrastructure and data protection support briefly depicted in this section be fully developed (as already planned).
Final remarks
As shown, the SOD mHealth app is the result of an inclusive and participative VSD approach (i.e., a procedural-deliberative VSD) in which a wide range of relevant stakeholders (i.e., CwD, relatives, volunteers, local authorities, dementia coordinators, healthcare professionals, AI/data science experts, and social scientists/philosophers) have constructively been involved from the beginning in co-design practices (including the usability tests).
This unusual participatory VSD approach has been crucial to identify values and priority rules suitable for solving, in practice, the typical value dilemmas and value conflicts entailed in making ethical, collective decisions in real circumstances; for unambiguously informing the technical development of the main components of the SOD app (i.e., the communication means and user interfaces in the front-end); and for establishing an agreed protocol for its usage (see Appendix 1) underlying “rescue missions” that primarily benefit the most vulnerable stakeholders (CwD). In the SOD mHealth app, priority is given to the safety of CwD while preserving acceptable levels of privacy, freedom, autonomy, and self-determination of these vulnerable individuals; indeed, important functions and components of the SOD app require their explicit consent to be activated. Important democratic desiderata like the transparency and accountability of design procedures are largely fulfilled in the various stages of the participatory VSD and the consecutive iterations carried out (whereby the legitimacy of both the values embraced and the digital tool generated is fostered accordingly).
Several suggestions for future developments of mHealth apps, extrapolated from the focus groups with stakeholders, that could not be realized in the SOD app (mainly due to budget constraints) are listed below:
Linking personal calendars with other applications (Google Calendar, Outlook Calendar, and Apple Calendar). This function could help the detection algorithm better calculate deviations, since it would make known the typical times of usual routes [e.g., a young (early onset) CwD who collects her daughter every day at school].
Smartwatch compatibility.
Voice interaction.
A match-making tool between CwD and volunteers interested in similar recreational activities.
A messaging system, for instance, a tool to exchange experiences among relatives or communicating with GP doctors and to introduce CwD to volunteers.
A function offering cognitive and physical training exercises suitable for the diagnosis and monitoring of individuals’ dementia status (this function appeals to GP doctors, but it was too technologically complex and, thus, too expensive to be realized on this occasion).
Another suggestion, mainly coming from relatives and the DC, is to introduce a function by which CwD can send a caption (headline) before leaving home to indicate where they are going (e.g., by choosing from a list of usual activities or places: “going to the bakery” or “going to the gym”), so that the app can detect immediately if/when a person is lost or going in another direction. In the SOD app, this option has been discarded to keep the app as unintrusive as possible. As seen before, the data processing tool incorporated already does something similar without being so intrusive. It learns and stores the usual routes of CwD, and when a route is totally new or significantly different, it triggers an alarm (“are you lost?”) to which people must reply within a few seconds (“yes” or “no”). Even so, a function informing the relatives that their kin are moving away from a stay place (home, office, dementia centre, etc.) is currently under construction.
To conclude, a procedural VSD directly engaging a wide range of stakeholders in the technological development of the SOD mHealth app via participatory-deliberative methods brought about several novelties and contributions to mHealth technologies for the management of dementia; likewise, it can shed new light on public attitudes regarding AI and contentious surveillance technologies (tracking devices). Further developments are expected once this inclusive, participative, and cooperative citizen science approach is fully extended to the design of algorithms (already planned in the SOD project). There are concrete indications that the currently unsettled ethical, social, and epistemic challenges with explicability, fairness, and public trust raised by AI and algorithmic decisions (see [35, Ch. 6–12; 59, 71, 103, 104], inter alia), especially in AI for health and healthcare [107, 108], can be better handled once the methodology devised in this study is applied. When the problems usually related to “explainable” and “trustworthy” AI and technical-digital innovation are re-addressed pragmatically, by increasing human agency and control [103] through more transparent and accountable democratic science practices, the public acceptability of the outputs is most likely higher as well. These intuitions and claims seem largely validated by the obtainment of an AI-based tool (the SOD mHealth app) that can be considered more ethical and democratic, since the underlying technological design has meaningfully been informed by the values of a wide range of relevant stakeholders, including CwD and their legitimate right to privacy, personal autonomy, and freedom, without undermining its technical efficiency; namely, the app’s surveillance ability and its capacity to detect wandering behaviors and enhance the safety of its vulnerable recipients.
Appendix 1
Although it is not strictly a part of the VSD technological phase, the protocol behind rescue missions managed by the SOD app is closely related to it, since it has been agreed with participants of all stakeholder groups during the various focus group-based workshops. The rescue protocol so agreed proceeds as follows: once an alarm is triggered, a video call is established between the CwD and their next of kin (NOK). If more than one relative is registered for a CwD, the one registered with the highest priority is chosen first. After a call to the CwD (to verify if he/she is actually lost), the NOK decides either to close the case or, instead, that the CwD needs help. In the latter case, the SOD app alerts the three nearest volunteers. When the notification that someone got lost is received on the device of a volunteer, an alarm sound is triggered to warn the user. Volunteers who receive a notification can see the location of the lost CwD on their smartphones and can accept or decline the request for help. After that, the NOK can see the state of the volunteers on their smartphones: whether they have accepted and taken up the mission, heading to the location, have not answered yet, or have declined. If a volunteer does not answer a help request within 2 min, the back-end registers the volunteer’s answer as negative and contacts the next available volunteer in the surrounding area. If one of the three volunteers declines, a fourth volunteer is engaged (in the same way as the first three). Volunteers who have accepted will see a map with the shortest path to the location of the lost CwD and, for the first time, the personal profile of the lost CwD to be rescued (a picture and the free text that the CwD have entered themselves). In this process, the primary NOK has the option to phone the activated volunteer(s).
If two volunteers have already accepted the rescue mission, the third volunteer is automatically notified by the back-end that two other volunteers have already responded positively and that help from a third volunteer is not needed. Similarly, the NOK can call the two volunteers to let them know that someone else is already on the case. To fulfill vital privacy concerns (not showing where CwD live), when volunteers meet the lost CwD, they help him/her to reach the closest safe place (e.g., the dementia centre, a police station, or the like), whose position is also shown on their maps. When they reach the safe place (or earlier, if so agreed with the NOK), the volunteers notify the NOK, who is in charge of ending the mission. If the primary NOK is not responding or is no longer active, the mission is taken over by the back-end, which immediately notifies the volunteer who carried out the mission that, in this case, he/she can close the mission (once the safe place is reached). In the unfortunate (and remote) case that no one replies (neither NOK nor volunteers), the lost CwD receives a text message on his/her smartphone saying that he/she must call 112. A simplified sequence diagram of a rescue operation/mission is shown in Fig. 5.
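The dispatch rules described above can be condensed into a small sketch. The code below is purely illustrative and not the actual SOD back-end: the function and parameter names are our assumptions, and the concurrent exchange of replies is approximated sequentially. It captures the core rules agreed in the workshops: declines and 2-minute timeouts both count as negative answers and pull in the next nearest volunteer, and dispatching stops once two volunteers have accepted.

```python
from enum import Enum


class Reply(Enum):
    ACCEPTED = "accepted"
    DECLINED = "declined"
    TIMEOUT = "timeout"   # no answer within 2 min: registered as negative


def dispatch_rescue(volunteers_by_distance, get_reply, needed=2, initial_batch=3):
    """Simplified (hypothetical) volunteer-dispatch logic for a rescue mission.

    volunteers_by_distance: volunteer ids sorted nearest-first.
    get_reply: callback returning a Reply for a notified volunteer.
    Returns the volunteers who accepted (at most `needed`).
    """
    accepted = []
    pending = list(volunteers_by_distance)
    # Alert the three nearest volunteers first.
    notified = pending[:initial_batch]
    pending = pending[initial_batch:]
    while notified and len(accepted) < needed:
        volunteer = notified.pop(0)
        reply = get_reply(volunteer)
        if reply is Reply.ACCEPTED:
            accepted.append(volunteer)
        elif pending:
            # Decline or timeout: engage the next nearest volunteer,
            # just as a fourth volunteer replaces a declining third one.
            notified.append(pending.pop(0))
    return accepted
```

In the real system these exchanges are event-driven and concurrent (replies arrive asynchronously, and the back-end also handles the NOK fallback and mission closure), but the stopping condition and the replacement rule are the same as sketched here.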
Declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Ethical approval
The manuscript complies with the ethical standards for research involving human participants stated in the journal guidelines, including the informed consent of participants.
Footnotes
In this paper, the mHealth/eHealth concepts are used interchangeably, as the definition by the World Health Organization ([141], p. 6) suggests.
“Smart cities” is the name of the program by the Danish municipality of Nyborg (to which the SOD app belongs) that aims to boost the use of AI and technology in daily life.
The SOD mHealth app is part of Nyborg municipality’s 2020–25 strategy on the management of dementia and the provision of care to affected citizens. It is available at: https://acrobat.adobe.com/link/review?uri=urn:aaid:scds:US:bc338554-4d33-3a0d-b38e-3eb35225c354 (in Danish).
See the ECSA report (citizen_science_-_v1_final.pdf) and the 10 Principles of Citizen Science. Retrieved on 26th February 2023.
The direct involvement of people living with dementia is often discouraged in scientific research (preceding policy making). Being based on surveys, applied studies regularly ask healthy people or nurses and doctors how they would feel, think, or act if sick with dementia (see [15, 42, 75]). Some remarkable exceptions to these trends are found in [22, 124, 139].
“QoL-instrument” (quality of life instrument) is the name given to the tools by Maathuis et al. [84].
TrygFonden is a Danish non-profit private foundation supporting nationwide and regional projects that increase security in Denmark. The foundation works with five overarching goals: (1) Safer communities, (2) Strengthened membership relationship, (3) Solid insurance company, (4) Robust earnings, and (5) Strong cohesion. TrygFonden’s strategy and action plan aims at high levels of distribution, dissemination, and knowledge sharing by contributing to research and a good dialogue with external actors to develop joint solutions. See website: https://www.tryghed.dk/.
The SOD app further develops the concept of TrygFonden’s Hjerteløber app (heart runner app). When a person suffers a heart attack, the app sends an alarm that activates the nearest volunteers, who fetch the nearest public automated external defibrillator (AED) and bring it to the affected person in need of help.
Respectively, the municipality of Nyborg (Southern Denmark region) is the main promoter, while Ovdal ApS (a private IT services company) is the developer/manufacturer of the user interface. In addition, the team at the University of Southern Denmark (SDU) has coordinated the different parts of the project, implemented the AI back-end, and carried out the technological design applying participatory VSD methods.
Although the functions of the app vary slightly and different things will be available to different stakeholders, the information in the knowledge bank is the same for everybody and available to all. The plan is to connect the knowledge bank of the SOD app via a forum to “Nationalt Videnscenter for Demens” (National Knowledge Centre for Dementia), videnscenterfordemens.dk. Last retrieved 12th October 2022.
The SOD app entails a matching component that, although developed, is currently kept idle. Activities and information about the project can be found on the SOD project website (at the time of the interviews, it was under construction). See https://sod.sdu.dk.
A critical introduction to VSD and VSD methods in [53].
A survey of VSD methods in human–computer interaction in [54].
On so-called “objective-list theories”, see [101].
Addressing the substantive vis-à-vis procedural distinction in ethical theory and justice exceeds the aims of this paper. For a classical philosophical paper discussing the tenets of substantive vis-à-vis procedural ethics, see [33]. More recently, and due to the existence of both a substantive and a procedural version, the literature on the “Capability Approach” has reproposed the main terms of the debate in relation to technological development and technological design for WB or VSD (see [76, 92, 121]). This literature entails the procedural version of VSD defended by Cenci and Cawthorne [31] that is adopted to frame the SOD project and the app design. On the importance of substantive, universal values, see [90]. Conversely, a “non-idealistic” procedural theory of justice based on “capabilities” is defended by Sen [106], also in opposition to classical “ideal” procedural theories of justice based on hypothetical value judgements and deliberations such as [97].
On how to design technologies fulfilling the values of democracy and justice and so intended as empowering the worst-off, see [96].
See Sect. 4 for a discussion of the logic of focus group-based inquiry and of the reasons for adopting focus group-based methods (e.g., participatory design, interactive design, participatory planning, and the like) to obtain a VSD truly inspired by democratic values.
The multi-stakeholder approach initially promoted to achieve the EU 2030 sustainable goals (europa.eu) is also a pillar of the EU priorities (2019–24) for a human-centered, sustainable, and more prosperous digital future, such as the mission “A Europe fit for the digital age”. This plan is aimed at empowering people in view of fulfilling basic digital rights (people at the centre, solidarity and inclusion, free choice, democratic participation, empowerment of individuals, etc.). This study is an example of (ethical) applied AI clearly oriented towards these goals.
The information obtained from GP doctors refers to issues related to dementia as a disease and to how to publicize the app, once ready (through local GP doctors). The data collected were therefore not relevant for VSD and the technological development of the SOD app, and so the content is not fully reported (only the basics).
Further discussing the (relevant) implications of these ongoing debates for adopting certain empirical methods instead of others when selecting values for science and technology or design studies exceeds the scope of this empirical paper. Yet, the problems that different ethical views raise for aggregating value—in view of social values—are well known in the philosophical literature. On how different contemporary ethical views like utilitarianism, egalitarianism, and prioritarianism aggregate value, see [70]. On the “separateness of persons” argument formulated to amend the ethical shortcomings of utilitarian ethical theories—entailing summative approaches to the good and value—see [97]. For an overview of the ethical arguments for rejecting sum-aggregative stances in ethical theory, see [69]. On the commensurability and incommensurability of values and their implications for comparison and aggregation in evaluative practices and decision-making, see Chang [32].
The enduring exclusion of people living with dementia from research activities or policymaking directed at them [12, 16, 109] is a main problem of existing research in dementia studies. A remarkable exception is the Co-researcher INvolvement and Engagement in Dementia model (COINED) [124]. In this paper, we adopt this ideal and reinterpret it to adapt it to the fields of technological design for wellbeing and AI ethics by design.
These data are important, since they slightly contradict what is reported by large population studies on the public acceptance of surveillance technologies, in which the public attitude towards privacy, surveillance, and security is not so clear [37, 58]. This vagueness also applies to citizens’ perceptions of tracking apps observed during the COVID-19 pandemic in different socio-cultural–political contexts [74].
For details regarding regulation, see the website of the Danish data protection agency (datatilsynet.dk) and the Executive Order no. 1148 of 9 December 2011 (in Danish) regulating requirements for storing or accessing sensitive data and the express consent of end users of terminal equipment (retsinformation.dk). Last retrieved on 12 October 2022.
Full details on the treatment of personal data for both volunteers and CwD are available at: https://sod.sdu.dk/docs/terms/pmd/.
On how self-tracking devices in healthcare can reshape individual autonomy, social solidarity, and the perception of self, see [110].
On the interplay among surveillance and eHealth and mHealth technologies and their main societal challenges, see [2, 82]. The impact of surveillance technologies on healthcare and the welfare state is explored by [112].
On critical ethical-social issues related to the trade-off between surveillance and privacy, see also [1, 2]. On the value of privacy in science and technological innovation, see [113, 114].
The values of transparency and accountability are often seen as instruments to increase trust in AI, but also efficiency, and thus they have received considerable attention in recent studies on human–machine interaction and AI-based clinical decisions (see [24, 80, 107, 108, 117]).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Alessandra Cenci, Email: ale_cenci021@hotmail.com.
Susanne Jakobsen Ilskov, Email: jakobs1susanne@gmail.com.
Nicklas Sindlev Andersen, Email: sindlev@imada.sdu.dk.
Marco Chiarandini, Email: marco@imada.sdu.dk.
References
- 1.Adams S, Niezen M. Digital ‘solutions’ to unhealthy lifestyle ‘problems’: the construction of social and personal risks in the development of eCoaches. Health Risk Soc. 2016;17(7–8):530–546. doi: 10.1080/13698575.2015.1136409. [DOI] [Google Scholar]
- 2.Adams S, Purtova N, Leenes R, editors. Under Observation: The Interplay Between eHealth and Surveillance. Berlin: Springer; 2017. [Google Scholar]
- 3.Alharbi S, et al. Analyzing and implementing a mobile reminder system for Alzheimer’s patients. Int. Res. J. Eng. Technol. 2019;6(444–454):6. [Google Scholar]
- 4.Aljehani, S.S., et al.: Icare: applying iot technology for monitoring Alzheimer’s patients. In: 2018 1st International Conference on Computer Applications Information Security (ICCAIS), pp. 1–6 (2018)
- 5.Andersen, N., Chiarandini, M., Mauro, J.: Wandering and getting lost: The architecture of an app activating local communities on dementia issues. In: 3rd IEEE/ACM International Workshop on Software Engineering for Healthcare, SEH@ICSE 2021, Madrid, Spain, June 3, 2021, pp. 36–43 (2021a)
- 6.Andersen, N., Chiarandini, M., Jänicke, S., Tampakis, P., Zimek, A.: Detecting wandering behaviour of people with dementia. In: 2021 International Conference on Data Mining, ICDM 2021—Workshops, Auckland, New Zealand, December 7–10, 2021, pp. 727–733 (2021b)
- 7.Andersen, N., Chiarandini, M.: Together About Dementia. Accepted DemoTrack Paper, IJCAI-ECAI 2022, Vienna (2022)
- 8.Bader V, Kaiser S. Algorithmic decision-making? The user inter-face and its role for human involvement in decisions supported by artificial intelligence. Organization. 2019;26(5):655–672. doi: 10.1177/1350508419855714. [DOI] [Google Scholar]
- 9.Bakiner O. What do academics say about artificial intelligence ethics? An overview of the scholarship. AI Ethics. 2022 doi: 10.1007/s43681-022-00182-4. [DOI] [Google Scholar]
- 10.Barbour R. Qualitative Research Kit: Doing Focus Groups. London: SAGE Publications Ltd.; 2007. [Google Scholar]
- 11.Bloor M, Frankland J, Thomas M, Robson K. Focus Groups in Social Research. Thousand Oaks, CA: Sage; 2001. [Google Scholar]
- 12.Bartlett J, et al. Gender, citizenship and dementia care: a scoping review of studies to inform policy and future research. Health Soc. Care Community. 2018;26(1):14–26. doi: 10.1111/hsc.12340. [DOI] [PubMed] [Google Scholar]
- 13.Bedessem B, Ruphy S. Citizen science and scientific objectivity: mapping out epistemic risks and benefits. Perspect. Sci. 2020;28(5):630–654. doi: 10.1162/posc_a_00353. [DOI] [Google Scholar]
- 14.Berry DM, Fagerjord A. Digital Humanities. Cambridge: Polity Press; 2017. [Google Scholar]
- 15.Berron D, et al. Feasibility of mobile app-based assessment of memory functions: Insights from a citizen science study. Alzheimer’s Dement. 2020;16(Suppl 11):e039149. doi: 10.1002/alz.039149. [DOI] [Google Scholar]
- 16.Bethell S, et al. Patient engagement in research related to dementia: a scoping review. Dementia. 2018;17(8):944–975. doi: 10.1177/1471301218789292. [DOI] [PubMed] [Google Scholar]
- 17.Bingham, A.J., Witkowsky, P.: Deductive and inductive approaches to qualitative data analysis. In: Vanover, C., Mihas, P., & Saldaña, J. (eds.) Analyzing and Interpreting Qualitative Data: After the Interview, pp. 133–146. SAGE Publications
- 18.Borning A, Friedman B, Davis J, Lin P. ECSCW 2005. Dordrecht: Springer; 2005. Informing public deliberation: value sensitive design of indicators for a large-scale urban simulation; pp. 449–468. [Google Scholar]
- 19.Boyd, K., Rule, A., Tabard, A., Hollan, J.: Sharing, human values, and computer activity tracking. In: Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, pp. 233–236. ACM (2016)
- 20.Bozdag E, van den Hoven J. Breaking the filter bubble: democracy and design. Ethics Inf. Technol. 2015;17(4):249–265. doi: 10.1007/s10676-015-9380-y. [DOI] [Google Scholar]
- 21.Brey, P., Dainow, B.: Ethics by design and ethics of use in AI and robotics. The SIENNA project—Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact. Accessed 12 October, 2022. (2021). https://www.sienna-project.eu/digitalAssets/915/c_915554-l_1-k_sienna-ethics-by-design-and-ethics-of-use.pdf
- 22.Bryden C. Nothing About Us Without Us! 20 Years of Dementia Advocacy. London: Jessica Kingsley Publishers; 2015. [Google Scholar]
- 23.Brown A, O’Connor S. Mobile health applications for people with dementia: a systematic review and synthesis of qualitative studies. Inform. Health Soc. Care. 2020;45(4):343–359. doi: 10.1080/17538157.2020.1728536. [DOI] [PubMed] [Google Scholar]
- 24.Buomsoo K, Jinsoo P, Jihae S. Transparency and accountability in AI decision support: explaining and visualizing convolutional neural networks for text information. Decis. Support Syst. 2020 doi: 10.1016/j.dss.2020.113302. [DOI] [Google Scholar]
- 25.Burr C, Taddeo MR, Floridi L. The ethics of digital wellbeing: a thematic review. Sci. Eng. Ethics. 2020;26:2313–2343. doi: 10.1007/s11948-020-00175-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Burmeister OK. The development of assistive dementia technology that accounts for the values of those affected by its use. Ethics Inf. Technol. 2016;18:185–198. doi: 10.1007/s10676-016-9404-2. [DOI] [Google Scholar]
- 27.Catwell L, Sheikh A. Evaluating eHealth interventions: the need for continuous systemic evaluation. PLoS Med. 2009;6(8):e1000126. doi: 10.1371/journal.pmed.1000126. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Correljé A, Cuppen E, Dignum M, Pesch U, Taebi B. Responsible innovation in energy projects: values in the design of technologies, institutions and stakeholder interactions. In: Koops BJ, Oosterlaken I, Romijn H, Swierstra T, van den Hoven J, editors. Responsible Innovation 2. Cham: Springer; 2015. pp. 183–200. [Google Scholar]
- 29.Capasso M, Umbrello S. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants. Med. Health Care Philos. 2022;25:11–22. doi: 10.1007/s11019-021-10062-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Cawthorne D., Cenci A.: Value-sensitive design of a humanitarian cargo drone. In: Proceeding of the International Conference on Unmanned Aircraft Systems (ICUAS) Atlanta, USA, June 11–14 (2019)
- 31.Cenci A, Cawthorne D. Refining value sensitive design: a (capability-based) procedural ethics approach to technological design for well-being. Sci. Eng. Ethics. 2020 doi: 10.1007/s11948-020-00223-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Chang, R.: Value Incomparability and Incommensurability. In: Hirose, I., Olson J. (eds.) The Oxford Handbook of Value. Oxford University Press, Oxford. (2015). 10.1093/oxfordhb/9780199959303.013.0015
- 33.Cohen J. Pluralism and proceduralism. Chicago-Kent Law. Review. 1994;69:589–618. [Google Scholar]
- 34.Creswell JW. Qualitative Inquiry & Research Design. Choosing Among Five Approaches. Thousand Oaks: Sage; 2013. [Google Scholar]
- 35.Coeckelbergh M. AI ethics. MIT Press; 2020. [Google Scholar]
- 36.Cummings ML. Integrating ethics in design through the value-sensitive design approach. Sci. Eng. Ethics. 2006;12(4):701–715. doi: 10.1007/s11948-006-0065-0. [DOI] [PubMed] [Google Scholar]
- 37.Dahl, Y., Holbø, K.: Value biases of sensor-based assistive technology: case study of a GP tracking system used in dementia care. In: Proceedings of the Designing Interactive Systems Conference, pp. 572–581. ACM (2012)
- 38.Degli Esposti, S.: A roadmap for developing acceptable surveillance-based security measures. In: Conference Proceedings of 9th Security Research Conference “Future Security” Berlin, September 16–18, 2014, pp 71–80 (2014)
- 39.Denning, T., Borning, A., Friedman, B., Gill, B.T., Kohno, T., Maisel, W.H.: Patients, pacemakers, and implantable defibrillators: human values and security for wireless implantable medical devices. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 917–926. ACM (2010)
- 40.Dignum M, Correljé A, Cuppen E, Pesch U, Taebi B. Contested technologies and design for values: the case of shale gas. Sci. Eng. Ethics. 2016;22(4):1171–1191. doi: 10.1007/s11948-015-9685-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Dignum, V.: The Glass Box Approach: Verifying Contextual Adherence to Values. In “AI Safety 2019”. In: Proceedings of 28th IJCAI-19, China. (2019). http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160949
- 42.Di Lorito C, et al. Adding to the knowledge on Patient and Public Involvement: reflections from an experience of co-research with carers of people with dementia. Health Expect. 2020;23:690–705. doi: 10.1111/hex.13049. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Donia J, Shaw J. Ethics and values in design: a structured review and theoretical critique. Sci. Eng. Ethics. 2021;27:57. doi: 10.1007/s11948-021-00329-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Donker T, Petrie K, Proudfoot J, Clarke J, Birch MR, Christensen H. Smartphones for smarter delivery of mental health programs: a systematic review. J. Med. Internet Res. 2013;15(11):e247. doi: 10.2196/jmir.2791. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Douglas H. The irreducible complexity of objectivity. Synthese. 2004;138:453–473. doi: 10.1023/B:SYNT.0000016451.18182.91. [DOI] [Google Scholar]
- 46.Douglas H. Science, Policy, and the Value Free Ideal. Pittsburgh: University of Pittsburgh Press; 2009. [Google Scholar]
- 47.Epstein, D. A., Borning, A., Fogarty, J.: Fine-grained sharing of sensed physical activity: A value sensitive approach. In: Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 489–498. ACM (2013)
- 48.EU Commission: White Paper on Artificial Intelligence (2020).
- 49.EU Commission: Ethical Guidelines for Trustworthy AI. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines.1.html. Last retrieved 27 February 2023 (2019)
- 50.Floridi L, Cowls J, Beltrametti M, et al. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind. Mach. 2018;28:689–707. doi: 10.1007/s11023-018-9482-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Floridi L, Cowls J, King TC, Taddeo MR. How to design AI for social good: seven essential factors. Sci. Eng. Ethics. 2020;2020(26):1771–1796. doi: 10.1007/s11948-020-00213-513. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Friedman B. Value sensitive design. Interactions. 1996;3(6):16–23. doi: 10.1145/242485.242493. [DOI] [Google Scholar]
- 53.Friedman B, Hendry DG. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge: MIT Press; 2019. [Google Scholar]
- 54.Friedman B, Hendry DG, Borning A. A survey of value sensitive design methods. Found. Trends Hum. Comput. Interact. 2017;11(2):63–125. doi: 10.1561/1100000015. [DOI] [Google Scholar]
- 55.Friedman B, Kahn P, Borning A. Value Sensitive Design: Theory and Methods. Washington: University of Washington; 2002. [Google Scholar]
- 56.Friedman, B., Kahn, P., Borning, A.: Human-computer interaction in management information systems: Foundations, chapter value sensitive design and information systems, pp. 348–372. Armonk, New York (2006)
- 57.Friedman B, Kahn PH, Borning A, Huldtgren A. Value sensitive design and information systems. In: Doorn N, Schuurbiers D, van de Poel I, Gorman M, editors. Early Engagement and New Technologies: Opening up the Laboratory. Dordrecht: Springer; 2013. pp. 55–95. [Google Scholar]
- 58.Friedewald M, Burgess P, Čas J, Bellanova R, Peissl W, editors. Surveillance, Privacy and Security: Citizens’ Perspectives. London: Routledge; 2016. [Google Scholar]
- 59.Gerdes, A.: The quest for explainable AI and the role of trust. In: Griffiths, P., Kabir, M.N. (eds.) ECIAIR 2019—Proceedings of European Conference on the Impact of Artificial Intelligence and Robotics. Academic Conferences and Publishing International, Oxford, pp. 465–468 (2019)
- 60.Gerdes, A.: Dialogical guidelines aided by knowledge acquisition: enhancing the design of explainable interfaces and algorithmic accuracy. In: Arai, K., Kapoor, S., Bhatia, R. (eds) Proceedings of the Future Technologies Conference (FTC) 2020, Volume 1 Virtual event, pp. 243–57. Springer International Publishing (2021)
- 61.Gerdes A. A participatory data-centric approach to AI Ethics by Design. Appl. Artif. Intell. 2022;36(1):2009222. doi: 10.1080/08839514.2021.2009222. [DOI] [Google Scholar]
- 62.Ghanem, A.S., Alkhal, H.A.: A mobile cloud-based system for Alzheimer’s disease. In: 2018 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), pp. 1–5 (2018)
- 63.Graneheim UH, Lindgren B-M, Lundman B. Methodological challenges in qualitative content analysis: a discussion paper. Nurse Educ. Today. 2017;56:29–34. doi: 10.1016/j.nedt.2017.06.002. [DOI] [PubMed] [Google Scholar]
- 64.Gupta, G. et al.: Mobile health applications and android toolkit for Alzheimer patients, caregivers and doctors. Biol. Forum. 199–205 (2019)
- 65.Haklay, M., et al.: ECSA's Characteristics of Citizen Science. ECSA's Characteristics of Citizen Science, Zenodo (2020)
- 66.Haklay M, et al. Contours of citizen science: a vignette study. R. Soc. Open Sci. 2021 doi: 10.1098/rsos.202108. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.Hassan AZ, Khan A. A survey on wandering behaviour management systems for individuals with dementia. Int. J. Adv. Comput. Sci. Appl. 2019;10(2):531–545. [Google Scholar]
- 68.Hecker S, Haklay M, Bowser A, Makuch Z, Vogel J, Bonn A. Citizen Science: Innovation in Open Science, Society and Policy. London: UCL Press; 2018. [Google Scholar]
- 69.Hirose, I.: Skepticism about value aggregation. In: Hirose, I., Olson, J. (eds.) The Oxford Handbook of Value. Oxford University Press, Oxford. 10.1093/oxfordhb/9780199959303.013.0017 (2015)
- 70.Holtug, N.: Theories of value aggregation: utilitarianism, egalitarianism, prioritarianism. In: Hirose, I., Olson, J. (eds.) The Oxford Handbook of Value. Oxford University Press, Oxford (2015). 10.1093/oxfordhb/9780199959303.013.0015
- 71.Kearns M, Roth A. The ethical algorithm. The science of socially aware algorithms design. Oxford University Press; 2020. [Google Scholar]
- 72.Kim E, et al. Scoping review: the empowerment of Alzheimer’s disease caregivers with M-health applications. NPJ Digit. Med. 2021;4(1):131. doi: 10.1038/s41746-021-00506-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Klausen SH. Group knowledge: a real-world approach. Synthese. 2015;192(3):813–839. doi: 10.1007/s11229-014-0589-9. [DOI] [Google Scholar]
- 74.Kostka G, Habich-Sobiegalla S. In times of crisis: public perceptions toward COVID-19 contact tracing apps in China, Germany, and the United States. New Media Soc. 2022 doi: 10.1177/14614448221083285. [DOI] [Google Scholar]
- 75.Kowe, et al. Stakeholder involvement in dementia research: a qualitative approach with healthy senior citizens and providers of dementia care in Germany. Health Soc. Care Community. 2020 doi: 10.1111/hsc.13238. [DOI] [PubMed] [Google Scholar]
- 76.Jacobs N, Huldtgren A. Why value sensitive design needs ethical commitments. Ethics Inf. Technol. 2021 doi: 10.1007/s10676-018-9467-3. [DOI] [Google Scholar]
- 77.Kvale S. Interviews: An Introduction to Qualitative Research Interviewing. Thousand Oaks: Sage; 1996. [Google Scholar]
- 78.Landemore H. Beyond the fact of disagreement? The epistemic turn in deliberative democracy. Soc. Epistemol. 2017;31(3):277–295. doi: 10.1080/02691728.2017.1317868. [DOI] [Google Scholar]
- 79.Latour, B. Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press (1987)
- 80.León G, Chiou E, Wilkins A. Accountability increases resource sharing: effects of accountability on human and AI system performance. Int. J. Hum. Comput. Interact. 2021;37(5):434–444. doi: 10.1080/10447318.2020.1824695. [DOI] [Google Scholar]
- 81.Liamputtong P. Focus Group Methodology: Principles and Practice. London: Sage Publications Ltd; 2011. [Google Scholar]
- 82.Lupton D. M-health and health promotion: the digital cyborg and surveillance society. Soc. Theory Health. 2012;10(3):229–244. doi: 10.1057/sth.2012.6. [DOI] [Google Scholar]
- 83.Lupton D. Quantifying the body: monitoring and measuring health in the age of mHealth technologies. Crit. Public Health. 2013;23(4):393–403. doi: 10.1080/09581596.2013.794931. [DOI] [Google Scholar]
- 84.Maathuis, et al. Exploring human values in the design of a web-Based QoL-instrument for people with mental health problems: a value sensitive design approach. Sci. Eng. Ethics. 2020;26:871–898. doi: 10.1007/s11948-019-00142-y. [DOI] [PubMed] [Google Scholar]
- 85.Manders-Huits N. What values in design? The challenge of incorporating moral values into design. Sci. Eng. Ethics. 2010;17(2):271–287. doi: 10.1007/s11948-010-9198-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 86.Machamer P, Wolters G, editors. Science, Values and Objectivity. Pittsburgh: Pittsburgh University Press; 2004. [Google Scholar]
- 87.Miller, J.K., Friedman, B., Jancke, G., Gill, B.: Value tensions in design: The value sensitive design, development, and appropriation of a corporation’s groupware system. In: Proceedings of the 2007 international ACM conference on supporting group work, pp. 281–290. ACM (2007)
- 88.Mittelstadt BD, Allo P, Taddeo MR, Wachter S, Floridi L. The ethics of algorithms: mapping the debate. Big Data Soc. 2016;3(2):1–21. doi: 10.1177/2053951716679679. [DOI] [Google Scholar]
- 89.Morley J, Floridi L, Kinsey L, Elhalal A. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics. 2020;26:2141–2168. doi: 10.1007/s11948-019-00165-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 90.Nussbaum, M.: In defense of universal values. In: Cecilia, W., Judith, S. & Elliott, T. (eds) Social Development, Social Inequalities, and Social Justice. Lawrence Erlbaum (2008)
- 91.Oosterlaken I. Applying value sensitive design (VSD) to Wind turbines and wind parks: an exploration. Sci. Eng. Ethics. 2014;21(2):359–379. doi: 10.1007/s11948-014-9536-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 92.Oosterlaken I. Human capabilities in design for values. In: Van den Hoven J, Vermaas P, Van de Poel I, editors. Handbook of ethics and values in technological design: sources, theory, values and application domains. Dordrecht: Springer; 2015. pp. 221–250. [Google Scholar]
- 93.Patton, M.Q.: Qualitative Research & Evaluation Methods, 4th ed. Sage, Thousand Oaks (2015)
- 94.Pavone V, Ball K, Degli Esposti S, Dibb S, Santiago-Gómez E. Beyond the security paradox: ten criteria for a socially informed security policy. Public Underst. Sci. 2018;27(6):638–654. doi: 10.1177/0963662517702321. [DOI] [PubMed] [Google Scholar]
- 95.Peters U. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. AI Ethics. 2022 doi: 10.1007/s43681-022-00217-w. [DOI] [Google Scholar]
- 96.Pols A, Spahn A. Design for the values of democracy and justice. In: Van den Hoven J, Vermaas P, Van de Poel I, editors. Handbook of Ethics and Values in Technological Design: Sources, Theory, Values and Application Domains. Dordrecht: Springer; 2015. pp. 335–363. [Google Scholar]
- 97.Rawls J. The Theory of Justice. Cambridge: Harvard University Press; 1971. [Google Scholar]
- 98.Reinhardt K. Trust and trustworthiness in AI ethics. AI Ethics. 2022 doi: 10.1007/s43681-022-00200-5. [DOI] [Google Scholar]
- 99.Richardson, C.: Pattern: Microservice architecture. https://microservices.io/patterns/microservices.html, 2021. Accessed 24 May 2022 (2021a)
- 100.Richardson, C.: Pattern: Serverless deployment. https://microservices.io/patterns/deployment/serverless-deployment.html. Accessed: 24 May 2022 (2021b)
- 101.Rice C. Defending the objective list theory of well-being. Ratio. 2013;26(2):196–221. doi: 10.1111/rati.12007. [DOI] [Google Scholar]
- 102.Robinson LD, Cawthray JL, West SE, Bonn A, Ansine J. Ten principles of citizen science. In: Citizen Science: Innovation in Open Science, Society and Policy. London: UCL Press; 2018. pp. 27–40. [Google Scholar]
- 103.Russel, S.: Human Compatible AI and the Problem of Control. Viking Press (2019)
- 104.Russell S., Norvig P.: Artificial Intelligence: A Modern Approach, 4th edn. Prentice Hall (2020)
- 105.Santoni de Sio F, Van den Hoven J. Meaningful human control over autonomous systems: a philosophical account. Front. Robot. AI. 2018;5:15. doi: 10.3389/frobt.2018.00015. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 106.Sen A. The Idea of Justice. London: Allen Lane; 2010. [Google Scholar]
- 107.Shaban-Nejad, A., Michalowski, M., Buckeridge D.L. (eds) Explainable AI in Healthcare and Medicine. Stud. Comput. Intell. 10.1007/978-3-030-53352-6_1 (2021).
- 108.Shaban-Nejad A, Michalowski M, Brownstein JS, Buckeridge DL. Explainable AI: towards fairness, accountability, transparency and trust in healthcare. IEEE J. Biomed. Health Inform. 2021;25(7):2374–2375. doi: 10.1109/JBHI.2021.3088832. [DOI] [Google Scholar]
- 108.Shannon K, et al. Dementia-friendly community initiatives: an integrative review. J. Clin. Nurs. 2019;28:2035–2045. doi: 10.1111/jocn.14746. [DOI] [PubMed] [Google Scholar]
- 110.Sharon T. Self-tracking for health and the quantified self: Re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philos. Technol. 2017;30(1):93–121. doi: 10.1007/s13347-016-0215-5. [DOI] [Google Scholar]
- 111.Schreier M. Qualitative Content Analysis in Practice. London: Sage Publications; 2012. [Google Scholar]
- 112.Sorell T, Draper H. Telecare, surveillance, and the welfare state. Am. J. Bioeth. 2012;12(9):36–44. doi: 10.1080/15265161.2012.699137. [DOI] [PubMed] [Google Scholar]
- 113.Steijn, W.M., Schouten, A.P., Vedder, A.H.: Why concern regarding privacy differs: the influence of age and (non-)participation on Facebook. Cyberpsychol. J. Psychosoc. Res. Cyberspace (2016)
- 114.Steijn WM, Vedder A. Privacy under construction: a developmental perspective on privacy perception. Sci. Technol. Hum. Values. 2015;40(4):615–637. doi: 10.1177/0162243915571167. [DOI] [Google Scholar]
- 115.Schikhof Y, Mulder I, Choenni S. Who will watch (over) me? Humane monitoring in dementia care. Int. J. Hum. Comput. Stud. 2010;68(6):410–422. doi: 10.1016/j.ijhcs.2010.02.002. [DOI] [Google Scholar]
- 116.Siddiq, K. et al.: Cared: non-pharmacological assistance for dementia patients. EAI Endorsed Trans. Pervasive Health Technol. 4(16) (2018).
- 117.Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI Soc. 2021;36:535–545. doi: 10.1007/s00146-020-01019-6. [DOI] [Google Scholar]
- 118.Sharp, H., Preece, J., Rogers, Y.: Interaction Design, 5th edn. Wiley (2019)
- 119.Simonsen J, Robertson T. Routledge international handbook of participatory design. Routledge. 2012 doi: 10.4324/9780203108543. [DOI] [Google Scholar]
- 120.Steen M. Virtues in participatory design: cooperation, curiosity, creativity, empowerment and reflexivity. Sci. Eng. Ethics. 2013;19:945–962. doi: 10.1007/s11948-012-9380-9. [DOI] [PubMed] [Google Scholar]
- 121.Steen M. Organizing design-for-wellbeing projects: using the capability approach. Des. Issues. 2016;32(4):4–15. doi: 10.1162/DESI_a_00412. [DOI] [Google Scholar]
- 122.Stutzel, M.C., et al.: SMAI: mobile system for elderly monitoring. In: 2016 IEEE International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–8 (2016)
- 123.Surendran D, Rohinia M. BLE bluetooth beacon-based solution to monitor egress of Alzheimer’s disease sufferers from indoors. Proc. Comput. Sci. 2019;165:591–597. doi: 10.1016/j.procs.2020.01.053. [DOI] [Google Scholar]
- 124.Swarbrick, C., Open Doors, Scottish Dementia Working Group, EDUCATE, Davis, K., Keady, J.: Visioning change: co-producing a model of involvement and engagement in research (Innovative Practice). Dement. Int. J. Soc. Res. Pract. 18(7–8), 3165–317 (2016). [DOI] [PubMed]
- 125.Taddeo M, Floridi L. How AI can be a force for good. Science. 2018;361(6404):751–752. doi: 10.1126/science.aat5991. [DOI] [PubMed] [Google Scholar]
- 126.Timmermans J, Zhao Y, van den Hoven J. Ethics and nanopharmacy: value sensitive design of new drugs. NanoEthics. 2011;5(3):269–283. doi: 10.1007/s11569-011-0135-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 127.Umbrello S. Atomically precise manufacturing and responsible innovation: a value sensitive design approach to explorative nanophilosophy. Int. J. Technoeth. 2019;10(2):1–21. doi: 10.4018/IJT.2019070101. [DOI] [Google Scholar]
- 128.Umbrello, S., De Bellis, A.F.: A value-sensitive design approach to intelligent agents. In: Yampolskiy, R.V. (ed.) Artificial Intelligence Safety and Security. CRC Press, Boca Raton (2018)
- 129.Umbrello S, van de Poel I. Mapping value sensitive design onto AI for social good principles. AI Ethics. 2021;1(3):283–296. doi: 10.1007/s43681-021-00038-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 130.Van de Poel, I.: Can we design for well-being? In: Brey, P., Briggle, A., Spence, E. (eds.) The good life in a technological age. Routledge, London (2012)
- 131.Van de Poel I. Translating values into design requirements. In: Michelfelder D, McCarthy N, Goldberg D, editors. Philosophy and Engineering: Reflections on Practice, Principles and Process. Amsterdam: Springer; 2013. pp. 253–266. [Google Scholar]
- 132.Van de Poel I, Royakkers L. Ethics, Technology, and Engineering: An Introduction. London: Wiley-Blackwell Publishing; 2011. [Google Scholar]
- 133.Van den Hoven J. Moral methodology and information technology. In: Himma KE, Tavani HT, editors. The Handbook of Information and Computer Ethics. New Jersey: Wiley; 2008. pp. 49–68. [Google Scholar]
- 134.Van den Hoven J, Lokhorst GJ, Van de Poel I. Engineering and the problem of moral overload. Sci. Eng. Ethics. 2012;18:143–155. doi: 10.1007/s11948-011-9277-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 135.Van Wynsberghe A. Designing robots for care: care centered value-sensitive design. Sci. Eng. Ethics. 2013;19(2):407–433. doi: 10.1007/s11948-011-9343-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 136.Van Wynsberghe A, Robbins S. Ethicist as designer: a pragmatic approach to ethics in the lab. Sci. Eng. Ethics. 2014;20(4):947–961. doi: 10.1007/s11948-013-9498-4. [DOI] [PubMed] [Google Scholar]
- 137.Vergara, J.A., et al.: A pervasive and ubiquitous mobile health application for tracking people with disabilities. In: 2015 10th Computing Colombian Conference (10CCC), pp. 206–213 (2015)
- 138.Vohland, K., et al.: The Science of Citizen Science. Springer, Cham. 10.1007/978-3-030-58278-4_1 (2021)
- 139.Whitman L. People Living with Dementia Speak Out. London: Jessica Kingsley; 2016. [Google Scholar]
- 140.Winkler, T., Spiekermann, S.: Twenty years of value sensitive design: a review of methodological practices in VSD projects. Ethics Inf. Technol. 1–5 (2021)
- 141.World Health Organization: mHealth: New Horizons for Health Through Mobile Technologies: Based on the Findings of the Second Global Survey on eHealth. Global Observatory for eHealth Series, vol. 3. World Health Organization, Geneva (2011)
- 142.Yousaf K, et al. A comprehensive study of mobile-health based assistive technology for the healthcare of dementia and Alzheimer’s disease (ad) Health Care Manag. Sci. 2020;23:287–309. doi: 10.1007/s10729-019-09486-0. [DOI] [PubMed] [Google Scholar]