Technology in Society. 2021 Sep 17;67:101748. doi: 10.1016/j.techsoc.2021.101748

Applying contextual integrity to digital contact tracing and automated triage for hospitals during COVID-19

Marijn Martens 1, Ralf De Wolf 1, Karel Vandendriessche 1, Tom Evens 1, Lieven De Marez 1
PMCID: PMC8448401  PMID: 34566203

Abstract

To control and minimise the spread of COVID-19, various technological solutions have been proposed. In this research, we focus on digital contact tracing and automated triage for hospitals. We conducted an online survey in Flanders (N = 1708) to investigate the perceived appropriateness of these systems based on the Contextual Integrity framework, as developed by Nissenbaum [1]. For digital contact tracing, significant differences were found between the appropriateness of using various types of data for different goals. Precise individual location data (i.e. GPS) was considered to be least appropriate and much less appropriate than proximity data (i.e. Bluetooth) or coarser location data (i.e. GSM). Goals for digital contact tracing with a high individual impact were considered to be less appropriate than goals with a low individual or societal impact. In addition, the data showed that respondents would find the usage of digital contact tracing to be less appropriate after the pandemic, underlining the temporality of this technological solution. For automated triage, the results indicated that gender is perceived to be significantly less appropriate than the other types of data, including age, to determine the priority of treatment.

Keywords: Survey, Contextual integrity, Privacy, Contact tracing, Automated triage, COVID-19

1. Introduction

Since the end of 2019, and throughout 2020 and 2021, COVID-19 has held the world in its grip, resulting in what Poom et al. ([2]; p. 1) described as the ‘biggest disruption to individual mobilities in modern times’. Indeed, this pandemic has shaped modes of transportation, the way we communicate and even how we socialise with one another. Various containment measures have been put forward to tackle this crisis. These include, but are not limited to, contact tracing, quarantines, ensuring social distance and even curfews. All these measures have been put into effect to minimise infections and excess mortality. Because of economic and societal pressures, governments have also stimulated the investigation and development of technological solutions [3], such as automated decision-making systems (ADM). These systems are defined as ‘procedures in which decisions are—partially or completely—delegated to automatically executed decision-making models to perform an action’ ([4]; p. 9). Digital contact tracing and digital automated triage are two prominent examples of ADM.

Academics have quickly come to argue that the deployment of these tools, which draw in personal information, challenges the privacy of individuals and accountability mechanisms [[5], [6], [7]]. Moreover, legal frameworks that are used to regulate these tools are scarce, inadequate, often unclear and complex [[8], [9], [10]]. Governments and academics alike have been developing ethical guidelines that comply with regulation to further embed the implementation of these ADM systems [11,12].

Considering the current discussions about ADM and privacy, we find it noticeable how much emphasis technical experts and legal scholars have been placing on privacy as control or privacy as secrecy [13]. For example, the focus has been on providing options to control information flows (e.g. opt-in mechanisms in the Exposure Notification Systems of Apple and Google) and/or on ensuring secrecy (e.g. by anonymising and decentralising personal data). Echoing Nissenbaum [1,14], we argue that secrecy and control are important elements of privacy, but we are also mindful that privacy is subject to context-dependent norms. Nissenbaum [1] put forward a contextual approach to privacy (i.e. the Contextual Integrity framework) and argued for privacy to be treated in terms of ‘appropriateness’ rather than control or secrecy alone.

It is currently unclear how the public perceives the appropriateness of ADM tools relative to the data they use and their goals. Moreover, how willing citizens are to rely on such systems remains an open question. Indeed, besides the voices of legal and technical experts, we argue that the voices of citizens themselves should be included. Using a survey study (N = 1708), we investigated people's attitudes towards and concerns about two prominent ADM tools: digital contact tracing and automated triage for hospitals. We put forward the following research question: ‘To what extent and under what conditions does the public think it appropriate to use contact tracing applications and automated triage in the context of limiting the spread of COVID-19?’ Before delving into our empirical work, we first situate, substantiate and operationalise our focus on privacy norms and the Contextual Integrity framework.

2. Theoretical considerations

2.1. Surveillance and a datafied society

Considering the many ways that citizens are unknowingly and unwillingly being tracked online (e.g. Cambridge Analytica), the profiling of minority groups (e.g. Muslim Uyghurs who are monitored by facial recognition technologies in China) and even the sheer amount of personal data voluntarily being shared worldwide through social media, one could believe that every part of our everyday life is logged and stored. Van Dijck [15] referred to the process of datafication as ‘life mining’, whereby surveillance is continuously performed by scraping bits and pieces of (meta)data for goals that are not necessarily defined.

The relation between surveillance, privacy and the ways that data are collected and used has not gone unnoticed. To understand contemporary surveillance, many refer to the panopticon, drawing on Michel Foucault's ‘Surveiller et Punir’ [16,17]. Relying on Bentham's panopticon as a metaphor, Foucault [17] explained how modern society and its institutions operate. He argued that the panopticon disposes of an individual's subjectivity, reducing him or her to an object in a one-sided power relationship with those watching. According to Foucault ([17]; p. 201), ‘(…) the major effect of the panopticon [would be] to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power’. He argued that through a constant (feeling of) surveillance, people internalise societal norms and values. The panopticon automatises and deindividualises power, making it invisible and thus difficult for people to criticise.

In academia, scholars now refer to the notion of dataveillance, defined as the ‘systematic monitoring of people's actions or communications through the application of information technology’ ([18]; p. 500). Surveillance has become more complex because of contemporary technologies and data streams. Couldry and Mejias [19] even argued that contemporary life is characterised by data colonialism. They explained how personal data is perceived as a freely available resource that can be mined by the social quantification sector: ‘By installing automated surveillance into the space of the self, we risk losing the very thing that constitutes us as selves at all, that is, the open-ended space where we continuously transform over time.’ Indeed, data is not something that is readily available, like any other resource that can be mined; it is extracted from people and consists of their identity and behaviour [20]. The very essence of the self needs to be reclaimed, according to Couldry and Mejias [19], for which a decolonisation process is deemed necessary.

Fortunately, in the EU, the General Data Protection Regulation (GDPR) provides privacy and data protection rights to all citizens. These include, among others, the right to be informed of data collection and processing, the right to access one's own data and the rectification and deletion of certain personal data (articles 12–22, GDPR). On a technical level, the privacy standards are high and well developed. For example, the Decentralised Privacy-Preserving Proximity Tracing (DP3T) protocol, developed for limiting the spread of COVID-19, proposes a decentralised design that does not require the collection and processing of most personal data.

But what is still lacking, we argue, is a broader framework that includes the voices of citizens in this process. To overcome the construction of ‘docile bodies’ (cf. Foucault), but also to avoid discussions that label any technology that draws in personal information as privacy invasive, it is necessary to understand citizens' expectations of privacy, or privacy norms, if you will.

2.2. Privacy as ‘secrecy’, ‘control’ and ‘appropriateness’

Privacy, control and access restriction are often mentioned in the same breath. If not control, then secrecy and anonymity are worthy alternatives. A mere focus on control, however, ‘neglects individualism, larger power asymmetries, and responsibilisation processes’ [21]. Indeed, a ‘privacy as control’ discourse presents privacy as an individual responsibility. ‘Privacy as secrecy’ by definition excludes any technology that draws in personal information because, in this regard, sharing personal information is equated with giving up one's privacy.

Nissenbaum [1,14] referred to ‘appropriateness’, rather than ‘control’ or ‘secrecy’, in her Contextual Integrity framework to detect privacy violations and map privacy expectations. In her framework, she linked the appropriateness of information flows to whether they conform to contextual informational norms. Inappropriate flows of information disturb our sense of privacy. These information flows consist of three parameters: (1) actors, who are those involved in collecting, storing and processing information, such as an application developer or governmental agency; (2) information types, which relate to the type of data that are processed; and (3) transmission principles, which refer to the reason and method of transmitting data. Each of these parameters plays an important role in the evaluation of contextual appropriateness. Importantly, these informational norms should not be considered fixed and static; rather, they are time sensitive and vary across points in time and between cultures, locations and societies [1]. The Contextual Integrity framework has been used across numerous contexts, ranging from data flows in smartphone usage [22,23] to education [24,25], healthcare [26] and even recruitment [27].
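To make these parameters concrete, the minimal Python sketch below encodes an information flow by the three parameters named above and looks it up against a toy table of contextual norms. This is our illustration only, not part of Nissenbaum's formalism or our survey instrument; all actor names and norm entries are invented.

```python
from dataclasses import dataclass

# An information flow described by the three parameters named above.
@dataclass(frozen=True)
class InformationFlow:
    actor: str                   # who collects/stores/processes the data
    information_type: str        # what kind of data is processed
    transmission_principle: str  # why/how the data are transmitted

# Toy table of informational norms: the same data can be appropriate
# for one goal and inappropriate for another within the same context.
norms = {
    ("health agency", "Bluetooth proximity", "estimate further spread"): "appropriate",
    ("health agency", "GPS location", "limit access to locations"): "inappropriate",
}

flow = InformationFlow("health agency", "GPS location", "limit access to locations")
print(norms.get((flow.actor, flow.information_type, flow.transmission_principle),
                "no norm established yet"))
```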

2.3. Focus of study: automated decision making

In this study, we investigate the contextual integrity of ADM systems that have been suggested and/or used to limit the spread of COVID-19. First, we introduce two cases of ADM in the context of COVID-19 that, to a certain extent, can be defined as surveillance tools, i.e. digital contact tracing and automated triage for hospitals. After introducing these tools, we further operationalise contextual integrity and formulate different hypotheses on the informational norms within and between each context.

2.3.1. Digital contact tracing

Contact tracing is not new; it has long been used to limit the spread of contagious diseases (e.g. tuberculosis, Ebola, HIV) by mapping their spread and containing them through treatment or behavioural change (e.g. isolation) [28]. Because of how widespread the ongoing COVID-19 pandemic has been, there is heightened interest in digital contact tracing, which is supposedly more efficient.

Digital contact tracing uses data available on a smartphone to log the social contacts of an individual. This can be done using precise (e.g. GPS) or coarser (e.g. GSM) location data. However, much enthusiasm has centred on protocols relying on proximity data (e.g. Bluetooth), such as the decentralised DP3T protocol [29].

Applications following the DP3T framework are also called exposure notification systems because they do not actually trace social contacts. Rather, they locally store the ephemeral IDs of nearby devices and notify the user when one of those IDs matches the ID of an infected person. Apple and Google co-developed an API for such exposure notification systems. In theory, this enables cross-border contact tracing between all apps based on this API and speeds up the development of systems following the DP3T protocol [30]. However, the risk of false positives and false negatives remains.
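As a minimal sketch of the decentralised matching logic described above: we assume random tokens in place of DP3T's cryptographically derived rotating identifiers, and the function and variable names are our own invention, not the protocol's.

```python
import secrets

def new_ephemeral_id() -> bytes:
    """Stand-in for DP3T's key-derived rotating ephemeral IDs."""
    return secrets.token_bytes(16)

# Each phone broadcasts its own ephemeral IDs over Bluetooth and locally
# stores the IDs it hears from nearby devices.
heard_ids = set()

contact_id = new_ephemeral_id()  # an ID broadcast by a nearby phone
heard_ids.add(contact_id)

# When a user tests positive, only their recently broadcast IDs are published.
published_infected_ids = {contact_id}

# Matching happens on-device, so no central party learns the social graph.
if heard_ids & published_infected_ids:
    print("Exposure notification: you were near someone who tested positive.")
```

The choice to match locally is what distinguishes exposure notification from tracing proper: the server only ever sees the IDs of users who chose to report a positive test.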

2.3.1.1. Contextual integrity in digital contact tracing

When looking at digital contact tracing in light of contextual integrity, the perceived appropriateness of data flows depends on the actors that have access to the data, such as the government or the companies developing the application (e.g. Google, Apple or a local company), the kind of smartphone data used, and how and to what ends the data are processed. We mainly focus on the privacy norms that develop around data use for a particular goal. For digital contact tracing, different kinds of smartphone data can be processed and used: location data (GPS, GSM) or proximity data (Bluetooth). The applications could be used for different types of goals, such as goals with a low individual impact (e.g. advising quarantine), goals with a high individual impact (e.g. limiting access to specific locations) and societal goals (e.g. estimating further spread).

Previous research has confirmed that how comfortable respondents are with sharing their data for a specific goal depends on the type of smartphone data collected. They are, for example, more comfortable sharing health-related smartphone data (e.g. sleep logging, physical activity) than personal data (e.g. location, social activity) for mental health assessments [22]. In the study of Lin et al. [31], people perceived the use of GPS data as comfortable when used by an internal app (e.g. a navigation app) or by social networking sites, but not when used for ads. Coarse location information (GSM data) followed the same logic, and its use was, unsurprisingly, considered more comfortable than the use of more precise GPS data [31]. Shilton and Martin [23] found contact information (i.e. a phone's contact list) to negatively influence privacy expectations in the context of targeted advertising and smartphone tracking. In a similar fashion, Lin et al. [31] concluded that people are much more comfortable having contact information used internally (e.g. by a text message app) or by social network sites than for advertising purposes.

Indeed, for most people, certain smartphone data are only appropriate for use in specific situations and with well-defined goals. We therefore hypothesise that the appropriateness of digital contact tracing will differ depending on the different types of data (H1a) that are used and the app's various goals (H2a).

H1a

There is a significant difference between the appropriateness of using GPS, GSM and Bluetooth data in the context of digital contact tracing.

H2a

There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals in the context of digital contact tracing.

2.3.2. Automated triage for hospitals

During their first COVID-19 peaks, many countries considered various technological solutions to mitigate the pressure on their strained healthcare systems and minimise infections in high-risk zones. One of the possible measures was automated triage [11,32,33]. Automated triage systems could be implemented to manage the strain on hospital equipment and intensive care resources. These systems typically include exclusion criteria, mortality assessment and re-evaluation requirements [11]. Evidently, many ethical dilemmas emerged: what ethical values should guide the selection of patients? Should we focus on saving lives, saving life years, using random selection, saving the sickest first or saving the youngest first [34,35]?

DDC19 is an example of such a triage system that helps general practitioners collect data and assess the risk of following up with patients during the pandemic [36]. This specific system uses a self-assessment questionnaire to categorise patients. However, different triage systems could use different kinds of information for separate goals; thus, people could evaluate the appropriateness of these systems differently.
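To illustrate how a questionnaire-based triage system might categorise patients, the sketch below applies a purely hypothetical scoring rule; the fields, weights and thresholds are our own assumptions and do not reflect DDC19's actual logic.

```python
# Hypothetical rule-based categorisation from a self-assessment questionnaire.
def triage_category(answers: dict) -> str:
    score = 0
    score += 2 if answers.get("shortness_of_breath") else 0
    score += 1 if answers.get("fever") else 0
    score += 1 if answers.get("age", 0) >= 65 else 0
    score += 1 if answers.get("comorbidity") else 0
    if score >= 3:
        return "urgent follow-up"
    if score >= 1:
        return "remote follow-up"
    return "self-care advice"

print(triage_category({"fever": True, "age": 70}))  # -> remote follow-up
```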

2.3.2.1. Contextual integrity in automated triage for hospitals

When applying the Contextual Integrity framework to automated triage, it could be argued that how appropriate patients perceive these systems to be depends on the data that are collected, who has access to them (i.e. doctors, hospitals, government, etc.) and the method and purpose of the data collection and processing. In line with the hypotheses formulated for digital contact tracing, we focus on the types of information that are collected and the intended goals of collecting the information. In automated triage for hospitals, patient data (socio-demographics, medical questionnaires and medical history) and population data (ICU occupancy) could be used for goals with a low individual impact (e.g. suggesting going to hospital), a high individual impact (e.g. deciding whether someone is allowed inside a hospital) and a societal impact (e.g. estimating future ICU occupancy).

Martin and Nissenbaum [37] investigated the use of different types of information (e.g. location, religion) in healthcare and other contexts. They found the type of information, together with the envisioned goal, to be significant in determining privacy expectations. Reasonably, sharing health information with a doctor to diagnose or treat a patient mostly met privacy expectations. In contrast, for most respondents, sharing information about politics or friends did not meet privacy expectations for this goal. Zimmer et al. [7] found that it was considered appropriate to share health and physical activity data from a Fitbit device with friends, the manufacturer and a healthcare provider, but not with an insurance company or employer. Moreover, sharing some types of information was more acceptable than sharing others; personal identifiers, for example, were considered inappropriate and would be of concern if shared outside the expected context. Furthermore, Véliz [38] argued that ‘patients should not be forced into giving up more personal information than what is strictly necessary to receive an adequate treatment’ (p. 1). Data minimisation is considered a necessity because of the sensitivity of medical data. Specifically in triage systems, there has also been a longstanding controversy about whether it should be considered appropriate to take into account specific data linked to life expectancy (such as age or comorbidities) when selecting patients [11].

For automated triage, we argue that individuals will only consider it appropriate to use specific data for well-defined goals. We therefore hypothesise that the appropriateness of automated triage for hospitals will differ depending on the types of data that are used (H1b) and the goals of the system (H2b).

H1b

There is a significant difference between the appropriateness of using socio-demographic data (age and gender), a medical questionnaire, medical history and ICU occupancy in the context of automated triage.

H2b

There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals in the context of automated triage for hospitals.

2.3.3. Temporal differences in contextual integrity – normative transformation

In the Contextual Integrity framework, Nissenbaum [1] also explained that the norms of information flows can evolve over time. Shilton and Martin [23] found that consumers' perceptions of privacy changed significantly over the period of a few months. Other longitudinal research has found that respondents change their privacy-seeking behaviour on Facebook over time [39], possibly indicating a change in their norms for information flows. Indeed, these ‘norms are constantly in the process of becoming’ ([40]; p. 210).

Moreover, the COVID-19 pandemic itself could affect what type of data people want to share for a specific goal [41]. Utz et al. [42] found that people would be more willing to use a COVID-19 app in the hope that restrictions would be lifted. From the scepticism towards surveillance they measured, they deduced that most of their participants would not support the app's extended use after the pandemic; however, they did not measure these changes directly. It is therefore uncertain how citizens' norms of information flows will change once the pandemic is over. In the context of digital contact tracing that has been specifically developed for the COVID-19 pandemic, we expect a change in perceived appropriateness. We therefore hypothesise that people will evaluate the appropriateness of using different types of data for digital contact tracing (H3a) and of using digital contact tracing for different types of goals (H3b) differently during the COVID-19 pandemic compared to after it.

H3a

There is a significant difference between the appropriateness of the use of GPS, GSM and Bluetooth data during the COVID-19 crisis compared to after the COVID-19 crisis in the context of digital contact tracing.

H3b

There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals during the COVID-19 crisis compared to after the COVID-19 crisis in the context of digital contact tracing.

2.4. Digital contact tracing versus automated triage

The context of digital contact tracing differs widely from that of automated triage for hospitals. The two contexts differ in terms of the type of information they use, who has access to the data and how the data are stored and processed. Yet they share the same overarching goal: to minimise the spread of COVID-19. Both can have specific goals with a low individual impact (suggesting quarantine or going to hospital), a high individual impact (limiting access to specific locations or the ICU) and a societal impact (estimating further spread or ICU occupancy).

Following the reasoning of contextual integrity [1,14], the appropriateness of specific data flows depends on the goals, the type of data used, the actors involved and the transmission principles. As only the goals are comparable across both contexts, and all the other contextual elements differ, we hypothesise that there will be a significant difference between the appropriateness of the same type of goals in the context of digital contact tracing compared to automated triage for hospitals during the COVID-19 pandemic.

H4

There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals in the context of digital contact tracing compared to automated triage.

3. Method

3.1. Sample

To answer the research question, an online questionnaire was launched in April 2020 in collaboration with a news agency in Flanders. The survey ran for two weeks and was distributed through the website and social media platforms of the news agency and sent out to readers in a digital newsletter. A total of 2521 people began the survey; afterwards, a validation check was performed so that only respondents who finished the survey, correctly answered the control question and completed it within a credible timeframe were included. The socio-demographics of the invalidated responses did not significantly differ from the validated ones. Of the initial 2521 responses, 1708 valid responses were retained after cleaning and validation. The sample consisted of 40.1% women and 59.9% men. The participants were between 18 and 81 years old (M = 42.81, SD = 13.70). Our sample was highly educated overall, with 71.5% having at least a bachelor's degree, which could be due to self-selection. For the analysis, the sample was weighted to be representative of the population of Flanders in terms of age and gender, with a maximum weight of 2.13 and a minimum weight of 0.73.
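As an illustration of such weighting, the sketch below computes simple cell weights on gender and age group; the population shares are placeholders, not the actual Flemish figures used in the study.

```python
import pandas as pd

sample = pd.DataFrame({
    "gender": ["m", "m", "f", "f", "m", "f"],
    "age_group": ["18-44", "45+", "18-44", "45+", "18-44", "45+"],
})

# Placeholder target shares per (gender, age group) cell; they sum to 1.
population_share = {
    ("m", "18-44"): 0.22, ("m", "45+"): 0.27,
    ("f", "18-44"): 0.22, ("f", "45+"): 0.29,
}

# Weight = population share of a cell divided by its share in the sample.
sample_share = sample.groupby(["gender", "age_group"]).size() / len(sample)
sample["weight"] = [
    population_share[(g, a)] / sample_share[(g, a)]
    for g, a in zip(sample["gender"], sample["age_group"])
]
print(sample)
```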

3.2. Measures

Perceived appropriateness was operationalised by asking how acceptable respondents thought it would be to use specific data in digital contact tracing and automated triage, and how acceptable they thought it would be to use each tool for a specific goal. Respondents rated items on a 5-point Likert scale, with answer options ranging from (1) totally disagree to (5) totally agree (e.g. ‘To what extent do you agree with the use of GPS in the context of digital contact tracing?’).

We included the types of information that, in reality, could be used in these specific contexts. For contact tracing, these were precise personal location information (GPS), coarse personal location information (GSM) and proximity information (Bluetooth) [33,43]. For automated triage, the data included socio-demographic data (age and gender), a medical questionnaire, medical history and ICU occupancy [11]. In terms of goals, we differentiated between (1) goals with a low individual impact (suggesting quarantine//suggesting going to hospital), (2) goals with a high individual impact (limiting access to specific locations//limiting access to hospital) and (3) societal goals (estimating further spread//estimating further occupancy of the ICU).

To account for normative transformation, we asked whether respondents would answer the same questions differently if the COVID-19 crisis were over. If so, we asked them to answer the questions again as if the COVID-19 crisis were over; for those who indicated they would not, their original answers were copied. We also measured socio-demographics (age, gender, education level). The operationalisation of all our measures is included in Appendix A; the means and SD of the variables are found in Table 1.
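The recode just described amounts to the following sketch, with hypothetical column names: respondents who said they would answer the same keep their original score, while the others get their re-asked answer.

```python
import pandas as pd

df = pd.DataFrame({
    "gps": [4, 2, 5],                   # appropriateness during COVID-19
    "would_change": [False, True, False],
    "gps_post_asked": [None, 1, None],  # only asked if would_change is True
})

# Keep the re-asked answer where the respondent said they would change;
# otherwise copy the original answer.
df["gps_post"] = df["gps_post_asked"].where(df["would_change"], df["gps"])
print(df)
```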

Table 1.

Overview of variables, means and SD. PC = post-COVID-19.

Context Variable (appropriateness of the use of [Data] for [Goals]) Mean SD

Digital contact tracing during COVID-19
 Data: GPS 2.99 1.55
 Data: GSM 3.38 1.41
 Data: Bluetooth 3.16 1.50
 Goal (low individual impact): suggestion to do a 14-day home quarantine 3.42 1.40
 Goal (high individual impact): limiting access to a location 2.57 1.47
 Goal (societal): making estimations for the spread of COVID-19 3.77 1.37

Digital contact tracing after COVID-19
 Data: GPS (PC) 2.54 1.50
 Data: GSM (PC) 2.98 1.42
 Data: Bluetooth (PC) 2.65 1.47
 Goal (low individual impact): suggestion to do a 14-day home quarantine (PC) 3.06 1.42
 Goal (high individual impact): limiting access to a location (PC) 2.21 1.36
 Goal (societal): making estimations for the spread of COVID-19 (PC) 3.51 1.43

Automatic triage for hospitals during COVID-19
 Data: Gender 2.72 1.48
 Data: Age 3.35 1.29
 Data: Medical questionnaire 3.37 1.29
 Data: Medical history 3.36 1.28
 Data: ICU occupancy 3.31 1.24
 Goal (low individual impact): the suggestion to go to hospital 3.52 1.15
 Goal (high individual impact): deciding whether someone is allowed in the hospital 2.79 1.28
 Goal (societal): making estimations for further ICU occupancy 3.51 1.17

3.3. Analysis

SPSS software was used to perform the analyses. Paired-samples t-tests were performed to investigate differences between the appropriateness of the different types of data and goals, to compare appropriateness during and after the COVID-19 crisis, and to compare categories of data and goals across the different contexts. To evaluate the effect size of the statistical differences, we used Cohen d; values of 0.2, 0.5 and 0.8 were considered small, medium and large effects, respectively [44].
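As an illustration of this procedure, the sketch below runs a paired-samples t-test on simulated Likert ratings and computes one common paired-data variant of Cohen d (mean difference divided by the standard deviation of the differences); the article does not state which variant SPSS reported, so treat that choice as an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 5-point Likert ratings for two paired conditions, e.g. GSM vs GPS.
gsm = rng.integers(1, 6, size=1708).astype(float)
gps = np.clip(gsm - rng.integers(0, 2, size=1708), 1, 5).astype(float)

t, p = stats.ttest_rel(gsm, gps)  # paired-samples t-test

diff = gsm - gps
d = diff.mean() / diff.std(ddof=1)  # one common Cohen d for paired data

print(f"t({len(diff) - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```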

4. Results

4.1. Contextual integrity contact tracing

4.1.1. Appropriateness of data in digital contact tracing

We found a significant difference between the appropriateness of the different categories of data in the context of contact tracing. GSM data (M = 3.38, SD = 1.406) was considered the most appropriate data to share, and it differed significantly from the least appropriate GPS data (M = 2.99, SD = 1.552; t(1707) = 17.196, p < .001, d = 0.26), with a small to medium effect size. When looking at the appropriateness of Bluetooth data (M = 3.16, SD = 1.497), we found that Bluetooth was considered significantly less appropriate than GSM data (t(1707) = 9.703, p < .001, d = 0.15) but significantly more appropriate than GPS data (t(1707) = 8.095, p < .001, d = 0.11). However, the effect size in both cases was rather small. These significant differences underline that the appropriateness of sharing location and contact information with a digital contact tracing application depends on the type of information that is shared (see Table 2). These findings are in line with our hypothesis (H1a).

Table 2.

Appropriateness of the use of data in digital contact tracing - ***p < .001.

Perceived appropriateness of the use of … Mean SD Comparing Δ mean Paired t-test
t-value Df Sig (two tailed) Cohen d
GPS 2.99 1.55 GPS & GSM 0.396*** 17.196 1707 <.001 0.26
GSM 3.38 1.41 GPS & BT 0.17*** 8.095 1707 <.001 0.11
Bluetooth (BT) 3.16 1.50 BT & GSM 0.227*** 9.703 1707 <.001 0.15

4.1.2. Appropriateness of goals in digital contact tracing

Just as with the appropriateness of the data, there were significant differences between the appropriateness of the different goals for which digital contact tracing can be used. The goal of denying access to a location (M = 2.57, SD = 1.465) was considered the least appropriate, significantly less so than the suggestion of doing a 14-day home quarantine (M = 3.42, SD = 1.388; t(1707) = 30.711, p < .001, d = 0.59), with a medium to large effect size. Estimating the further spread of COVID-19 (M = 3.77, SD = 1.367) was considered by far the most appropriate goal: significantly more appropriate than limiting access to a location (t(1707) = 39.652, p < .001, d = 1.62), with a large effect size, and significantly more appropriate than the suggestion of doing a 14-day home quarantine (t(1707) = 16.764, p < .001, d = 0.25), with a small to medium effect size (see Table 3). This confirms our hypothesis (H2a) that the appropriateness differs between goals with a low individual impact, a high individual impact and societal goals.

Table 3.

Appropriateness of goals in digital contact tracing ***p < .001.

Perceived appropriateness to … Mean SD Comparing Δ mean Paired t-test
t-value Df Sig (two tailed) Cohen d
limit access 2.57 1.47 (1) & (2) 0.854*** 30.711 1707 <.001 0.59
advise quarantine 3.42 1.40 (2) & (3) 0.353*** 16.764 1707 <.001 0.25
estimate further spread 3.77 1.37 (3) & (1) 1.206*** 39.652 1707 <.001 1.62

4.1.3. Temporal element in digital contact tracing

Of the respondents, 30.5% indicated that they would view the use of specific data for digital contact tracing differently after COVID-19; for specific goals, this was 32.8%. There was a significant difference between the perceived appropriateness of the data and the goals during and after the COVID-19 crisis. This was true for all categories of data and all types of goals (see Table 4, Table 5), with effect sizes ranging between small and medium. All data collection and goals were perceived to be less appropriate after the COVID-19 crisis. In addition to the type of data and goals, the temporal element thus shaped the perceived appropriateness of digital contact tracing, supporting hypotheses H3a and H3b.

Table 4.

Appropriateness of the use of data in digital contact tracing post-corona - PC = Post-COVID-19, ***p < .001.

Perceived appropriateness of the use of … Mean SD Δ mean Paired t-test
t-value Df Sig (two tailed) Cohen d
GPS 2.99 1.55 0.45*** 19.322 1707 <.001 0.30
GPSPC 2.54 1.50
Bluetooth 3.16 1.50 0.51*** 20.878 1707 <.001 0.24
BluetoothPC 2.65 1.47
GSM 3.38 1.41 0.41*** 18.742 1707 <.001 0.28
GSMPC 2.98 1.42
Table 5.

Appropriateness of goals in digital contact tracing post-corona - PC = Post-COVID-19, ***p < .001.

Perceived appropriateness to … Mean SD Δ mean Paired t-test
t-value Df Sig (two tailed) Cohen d
limit access 2.57 1.47 0.36*** 17.105 1707 <.001 0.25
limit accessPC 2.21 1.36
Advise quarantine 3.42 1.39 0.36*** 17.539 1707 <.001 0.26
Advise quarantinePC 3.06 1.42
Estimate further Spread 3.77 1.37 0.26*** 14.062 1707 <.001 0.19
Estimate further SpreadPC 3.51 1.43

4.2. Contextual integrity in automated triage

4.2.1. Appropriateness of data in automated triage

In the context of automated triage systems, the differences between the appropriateness of sharing different personal data were much less apparent. When comparing the appropriateness of sharing personal medical information such as personal information from a questionnaire (M = 3.37, SD = 1.29), medical history (M = 3.36, SD = 1.28) or age (M = 3.35, SD = 1.29), no significant differences were found, except for gender (M = 2.72, SD = 1.48). Gender was considered significantly less appropriate for use in an automated triage system than any of the other personal data, with a small to medium effect size (see Table 6).

Table 6.

Appropriateness of the use of data compared to gender in automated triage for hospitals - ***p < .001.

Perceived appropriateness of the use of … Mean SD Δ mean with Gender Paired t-test
t-value Df Sig (two tailed) Cohen d
Gender 2.72 1.48
Age 3.35 1.29 0.63*** 23.713 1707 <.001 0.45
Medical questionnaire 3.37 1.29 0.65*** 20.543 1707 <.001 0.47
Medical history 3.36 1.28 0.64*** 21.221 1707 <.001 0.46

The differences between the appropriateness of using ICU occupancy (M = 3.31, SD = 1.24) and the other personal medical information were statistically significant for the medical questionnaire and medical history but not for age, and in all cases the effect sizes were very low (Cohen d ≤ 0.05) and can be considered negligible. The exception was gender (t(1707) = 17.953, p < .001), for which the effect size was small to medium (see Table 7). Gender was thus considered a more sensitive type of data that was less appropriate for use in such systems; the other types of data were considered more appropriate. This only partially confirms hypothesis H1b.

Table 7.

Appropriateness of the use of data compared to ICU occupancy in automated triage for hospitals - *p < .1, **p < .05, ***p < .001.

Perceived appropriateness of the use of … Mean SD Δ mean with ICU occupancy Paired t-test
t-value Df Sig (two tailed) Cohen d
ICU occupancy 3.31 1.24
Gender 2.72 1.48 .586*** 17.953 1707 <.001 .43
Age 3.35 1.29 0.05* 1.834 1707 .067 0.04
Medical questionnaire 3.37 1.29 0.06** 2.302 1707 .021 0.05
Medical history 3.36 1.28 0.05** 1.998 1707 .046 0.04

4.2.2. Appropriateness of goals in automated triage

Using automated triage to suggest that people go to hospital (M = 3.52, SD = 1.15) was considered much more appropriate than using it to decide whether someone should be allowed into hospital (M = 2.79, SD = 1.28; t(1707) = 27.565, p < .001, d = 0.60), with a medium to large effect size. Making estimations of future ICU occupancy (M = 3.51, SD = 1.17) was also considered significantly more appropriate than deciding whether to allow someone into hospital (t(1707) = 26.616, p < .001, d = 0.59), with a comparable effect size. There was no significant difference between the appropriateness of making an individual suggestion and making an estimation of ICU occupancy (see Table 8). The goal with a low individual impact and the societal goal were thus considered much more appropriate than the goal with a high individual impact. This is in line with hypothesis H2b.

Table 8.

Appropriateness of goals in automated triage for hospitals - (1): The suggestion to go to the hospital, (2) deciding whether someone is allowed in hospital, (3) making estimations for future ICU occupancy, ***p < .001.

Perceived appropriateness of … Mean SD Comparing Δ mean Paired t-test
t-value Df Sig (two tailed) Cohen d
(1) 3.52 1.15 (1) & (2) 0.731*** 27.565 1707 <.001 0.60
(2) 2.79 1.28 (2) & (3) 0.726*** 26.616 1707 <.001 0.59
(3) 3.51 1.17 (3) & (1) 0.006 0.249 1707 0.804

4.3. Digital contact tracing vs automated triage for hospitals

When comparing the context of digital contact tracing with that of automated triage, small but significant differences were found when looking at the goals in closer detail (see Table 9). For the societal goal (i.e. estimating the further spread of COVID-19 and ICU occupancy; t(1707) = 8.011, p < .001, d = 0.20), there was a significant difference with a small effect size, with the societal goal considered more appropriate in digital contact tracing than in automated triage for hospitals. For the goals with a high individual impact (i.e. limiting access to specific locations and limiting access to a hospital; t(1707) = 6.216, p < .001, d = 0.16), there was also a significant difference, with the effect size approaching the threshold for a small effect; here, however, automated triage was perceived to be more appropriate than digital contact tracing. For the goal with a low individual impact (suggesting quarantine and suggesting going to hospital), the difference was significant (t(1707) = 3.385, p < .001, d = 0.08), but the effect size was negligibly small. These results partially confirm hypothesis H4.

Table 9.

Appropriateness of goals in digital contact tracing compared to goals in automated triage for hospitals ***p < .001.

Perceived appropriateness of … Mean SD Δ mean Paired t-test
t-value Df Sig (two tailed) Cohen d
A low individual impact (digital contact tracing) 3.42 1.39 .101*** 3.385 1707 <.001 .08
A low individual impact (automated triage) 3.52 1.15
A high individual impact (digital contact tracing) 2.57 1.47 .224*** 6.216 1707 <.001 .16
A high individual impact (Automated triage) 2.79 1.28
A societal impact (digital contact tracing) 3.77 1.37 .257*** 8.011 1707 <.001 .20
A societal impact (automated triage) 3.51 1.17

4.4. Overview of hypotheses and outcomes

(see Table 10).

Table 10.

Overview of hypotheses and outcomes.

Context Hypothesis Outcome
Digital contact tracing H1a: There is a significant difference between the appropriateness of using GPS, GSM and Bluetooth data in the context of digital contact tracing. Supported
H2a: There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals in the context of digital contact tracing. Supported
H3a: There is a significant difference between the appropriateness of the use of GPS, GSM and Bluetooth data during the COVID-19 crisis compared to after the COVID-19 crisis in the context of digital contact tracing. Supported
H3b: There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals during the COVID-19 crisis compared to after the COVID-19 crisis in the context of digital contact tracing. Supported
Automated triage for hospitals H1b: There is a significant difference between the appropriateness of using socio-demographic data (age and gender), a medical questionnaire, medical history and ICU occupancy in the context of automated triage. Partially supported
H2b: There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals in the context of automated triage for hospitals. Supported
Digital contact tracing vs automated triage for hospitals H4: There is a significant difference between the appropriateness of goals with a low individual impact, a high individual impact and societal goals in the context of digital contact tracing compared to automated triage. Partially supported

5. Conclusion and discussion

Governments all over the world have been employing various measures to minimise the spread of COVID-19. To a certain extent, some of these measures can be defined as surveillance tools, as they automate the logging of social contacts (digital contact tracing) or control flows of people (automated triage for hospitals). The appropriateness of these measures is most often evaluated by legal and technical experts. Arguably, they decide the informational norms that citizens should abide by and which measures are considered appropriate. In this study, we have argued that it is necessary to amplify the voices of citizens impacted by these technologies and understand their expectations of privacy, or privacy norms. This requires a move beyond the more traditional approach of privacy as secrecy or control to a focus on contextual integrity (cf. [1]).

In this research, we looked at how citizens evaluated the appropriateness of possible information flows in digital contact tracing and automated triage for hospitals. We focused on the different kinds of data and various possible goals in the context of these tools. When zooming in on specific cases, we noticed some significant differences in terms of the appropriateness of the types of data collected and the types of goals for which these data would be collected, processed and used.

For digital contact tracing, we found a significant difference between the appropriateness of the different kinds of data that could be used in this application. Unsurprisingly, the more personal and intrusive the data, the less appropriate it was considered to be. Very precise GPS data, for example, was evaluated as less appropriate than less precise GSM data, in line with the findings of Lin et al. [31]. Looking at the intended goals for digital contact tracing, there was also a significant difference between goals with an individual and a societal impact. Goals with an individual impact were considered significantly less appropriate than goals with a societal impact, and the goal with a high individual impact was considered significantly less appropriate than the goal with a low individual impact. These findings are in line with the results of Nicholas et al. [22] and Shilton and Martin [23].

Similar results were found for the goals of automated triage in hospitals. People differentiated between the various goals that are appropriate to pursue with automated triage. Deciding whether to allow someone into a hospital was considered by far the least appropriate goal. However, these triage systems are mostly intended to minimise the load on the healthcare system and are thus used to manage flows of people to hospitals and, by extension, the ICU. These insights confirm the research of Martin and Nissenbaum [37] and expose the delicacy of such systems.

In terms of the data used for automated triage, it was surprising that only gender was considered significantly less appropriate than the other information types. The lower perceived appropriateness of using gender in these systems could be related to the higher mortality and severity of the disease among men [45]. Age, however, was not considered less appropriate, partially contradicting the controversy described by Joebges and Biller-Andorno [11] about using data linked to life expectancy in triage systems. This could, however, be due to the low representation of people aged 65+ in our sample (11%); people may have primarily evaluated the appropriateness of using these different kinds of data with themselves and their own health in mind.

Overall, the results also indicated a weak but significant difference when comparing similar goals between digital contact tracing and automated triage in hospitals during COVID-19. The appropriateness of the data flows thus depends not only on the goal for which the data are shared but possibly also on the actors and transmission principles [1].

Moreover, we found evidence of a normative transformation regarding digital contact tracing. Our results show that people expect the use of specific data and the pursuit of specific goals in digital contact tracing to be less appropriate after the pandemic. These insights are in line with the expectations voiced in the study of Utz et al. [42], who expected participants not to support the extended use of COVID-19 apps after the pandemic because of their surveillance scepticism. This could mean that, even with high privacy concerns, people are willing to share more personal data in specific cases, but also that people are only temporarily allowing more data to be used in these extraordinary times. However, Zimmer et al. ([7]; p. 1) did voice the fear that in a global crisis like this, ‘there is a risk that temporary measures established during a crisis become permanent and reduce citizens' privacy’.

5.1. Limitations, strengths and weaknesses

We consider the contextual approach to be a major strength of our research. However, as privacy is culturally and contextually dependent, it is difficult to generalise our results beyond their empirical context (i.e. Flanders). It should be noted that this research was conducted in a Western European country in a time when a first surge of the COVID-19 pandemic was just starting to slow.

In our study, we focused on types of information and goals in two different contexts (digital contact tracing and automated triage), thus covering contexts with different actors and transmission principles. However, because of limitations on the length of our survey, we could not include direct questions concerning the appropriateness of actors and other transmission principles.

Overall, our approach facilitates a granular understanding of the appropriateness of ADM. In our study design, we measured how appropriate citizens evaluate specific elements of the context to be. Our approach is thus supplementary to vignette studies that elicit respondents' views towards a series of scenarios (cf. [37]).

5.2. Recommendations and suggestions for future research

The insights of our study suggest that transparent communication on the underlying ideas and goals is necessary when digital contact tracing and automated triage systems are rolled out with the expectation that citizens share their personal data. People are clearly not indifferent to what data are used and for what goals. However, they are willing to share personal data if they consider the specific context appropriate in terms of data and goals.

In future research, different dimensions of contextual integrity and privacy norms with regard to ADM remain to be studied. To enable comparative analyses, future research could put more emphasis on the temporal dimension of ADM and question the appropriateness of automated triage after COVID-19; this way, the normative evolution of the appropriateness of digital contact tracing could be compared with that of automated triage. As automated triage is embedded in the healthcare system, which already collects, processes and uses most of the data involved, it would be interesting to see if and how its perceived appropriateness would change after COVID-19. Other comparative research might also include multiple different cultural contexts.

Moreover, the individual whose data are shared is an essential actor in the Contextual Integrity framework. The differences between people (e.g. socio-demographic, personality and context-relevant characteristics) could thus also be highlighted in future research. In the context of contact-tracing apps, this could, for example, include how concerned an individual is about COVID-19.

Finally, future research endeavours could implement a hybrid methodology, combining in-depth insights into the factors of contextual integrity through a survey (i.e. how appropriate respondents evaluate each element: actor, data, transmission principle) with comparisons of different contexts using vignettes (cf. [23,42]). This would allow for deeper and broader insight.

Author statement

Marijn Martens: Conceptualization, Methodology, Investigation, Formal analysis, Writing – original draft, Writing – review & editing, Funding acquisition, Ralf De Wolf: Conceptualization, Methodology, Writing – review & editing, Funding acquisition, Supervision, Project administration, Karel Vandendriessche: Conceptualization, Methodology, Investigation, Writing – review & editing, Tom Evens: Writing – review & editing, Supervision, Project administration, Lieven De Marez: Resources, Supervision, Project administration, Funding acquisition.

Funding

This work was supported by the FWO (Grant nr. 11F6819N). The funding body had no involvement in the study design, the collection, analysis and interpretation of data, the writing of the report or the decision to submit the article for publication.

Acknowledgements

Each author has contributed to the work and agrees to the submission of this manuscript. The manuscript is not currently being considered for publication by any other print or electronic journal.

Appendices.

Appendix A – Operationalization of constructs

Construct Survey item Answer options

1. Age – In what year were you born? [open question]

2. Gender – What is the gender on your passport? a. Male; b. Female

3. Education degree – What is your highest degree of education? a. No degree; b. Lower primary education; c. Lower secondary education; d. Higher secondary education; e. Bachelor; f. Master and PhD

4. Digital contact tracing – Data: To what extent do you agree with the use of […] for digital contact tracing? a. GPS data; b. GSM data; c. Bluetooth data. 5-point Likert scale: totally disagree – totally agree

5. Digital contact tracing Pre vs Post – Data: Would you answer the previous question differently after the COVID-19 crisis? a. I would answer the same; b. I would answer differently

6. Digital contact tracing post-corona – Data (only shown if “I would answer differently” on 5): How would you answer this question after the COVID-19 crisis? To what extent do you agree with the use of […] for digital contact tracing? a. GPS data; b. GSM data; c. Bluetooth data. 5-point Likert scale: totally disagree – totally agree

7. Digital contact tracing – Goals: To what extent do you agree with the use of digital contact tracing for […]? a. Suggesting to do a 14-day home quarantine; b. Limiting access to a location (e.g. a bar, cinema, bus …); c. Making estimations on the spread of COVID-19. 5-point Likert scale: totally disagree – totally agree

8. Digital contact tracing Pre vs Post – Goals: Would you answer the previous question differently after the COVID-19 crisis? a. I would answer the same; b. I would answer differently

9. Digital contact tracing post-corona – Goals (only shown if “I would answer differently” on 8): How would you answer this question after the COVID-19 crisis? To what extent do you agree with the use of digital contact tracing for […]? a. Suggesting to do a 14-day home quarantine; b. Limiting access to a location (e.g. a bar, cinema, bus …); c. Making estimations on the spread of a virus. 5-point Likert scale: totally disagree – totally agree

10. Automatic triage for hospitals – Data: To what extent do you agree with the use of […] for automatic triage in hospitals? a. Medical questionnaire; b. Age; c. Gender; d. Medical history; e. Occupancy of the ICU. 5-point Likert scale: totally disagree – totally agree

11. Automatic triage for hospitals – Goals: To what extent do you agree with the use of automatic triage for hospitals for […]? a. Suggesting to go to the hospital; b. Deciding whether someone is allowed in the hospital; c. Making estimations for further ICU-occupancy. 5-point Likert scale: totally disagree – totally agree

References

  • 1.Nissenbaum H. Privacy as contextual integrity. Wash. Law Rev. 2004;79:41. [Google Scholar]
  • 2.Poom A., Järv O., Zook M., Toivonen T. COVID-19 is spatial: ensuring that mobile Big Data is used for social good. Big Data & Society. 2020;7(2):1–7. doi: 10.1177/2053951720952088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Zimmerling A., Chen X. Innovation and possible long-term impact driven by COVID-19: manufacturing, personal protective equipment and digital technologies. Technol. Soc. 2021;65:101541. doi: 10.1016/j.techsoc.2021.101541. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Algorithm Watch . 2019. Taking Stock of Automated Decision-Making in the EU (Automating Society, P. 143). Algorithm Watch and Bertelsmann Stiftung.https://algorithmwatch.org/wp-content/uploads/2019/02/Automating_Society_Report_2019.pdf [Google Scholar]
  • 5.Cho H., Ippolito D., Yu Y.W. 2020. Contact Tracing Mobile Apps for COVID-19: Privacy Considerations and Related Trade-Offs.http://arxiv.org/abs/2003.11511 ArXiv:2003.11511 [Cs] [Google Scholar]
  • 6.König P.D., Wenzelburger G. The legitimacy gap of algorithmic decision-making in the public sector: why it arises and how to address it. Technol. Soc. 2021;67:101688. doi: 10.1016/j.techsoc.2021.101688. [DOI] [Google Scholar]
  • 7.Zimmer M., Kumar P., Vitak J., Liao Y., Chamberlain Kritikos K. ‘There's nothing really they can do with this information’: unpacking how users manage privacy boundaries for personal fitness information. Inf. Commun. Soc. 2020;23(7):1020–1037. doi: 10.1080/1369118X.2018.1543442. [DOI] [Google Scholar]
  • 8.Clark L.A., Clark W.J., Jones D.L. Innovation policy vacuum: navigating unmarked paths. Technol. Soc. 2011;33(3–4):253–264. doi: 10.1016/j.techsoc.2011.09.004. [DOI] [Google Scholar]
  • 9.Gasser U., Ienca M., Scheibner J., Sleigh J., Vayena E. 2020. Digital Tools against COVID-19: Framing the Ethical Challenges and How to Address Them.http://arxiv.org/abs/2004.10236 ArXiv:2004.10236 [Cs] [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Sætra H.S. Privacy as an aggregate public good. Technol. Soc. 2020;63:101422. doi: 10.1016/j.techsoc.2020.101422. [DOI] [Google Scholar]
  • 11.Joebges S., Biller-Andorno N. Ethics guidelines on COVID-19 triage—an emerging international consensus. Crit. Care. 2020;24(1):201. doi: 10.1186/s13054-020-02927-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Morley J., Cowls J., Taddeo M., Floridi L. 2020. Ethical Guidelines for SARS-CoV-2 Digital Tracking and Tracing Systems. Available at SSRN 3582550. [DOI] [PubMed] [Google Scholar]
  • 13.Scantamburlo T., Cortés A., Dewitte P., Van der Eycken D., De Wolf R., Martens M. Health and Technology; 2021. Covid-19 and Tracing Methodologies: A Lesson for the Future Society. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Nissenbaum H. Stanford University Press; 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life. [Google Scholar]
  • 15.Van Dijck J. Datafication, dataism and dataveillance: big Data between scientific paradigm and ideology. Surveill. Soc. 2014;12(2):197–208. doi: 10.24908/ss.v12i2.4776. [DOI] [Google Scholar]
  • 16.Foucault M. Surveiller et punir. Population (Paris) 1975;1:192–211. [Google Scholar]
  • 17.Foucault M. Discipline and Punish: The Birth of the Prison (A. Sheridan, Trans.). Vintage; New York: 1995. (Original work published 1975). [Google Scholar]
  • 18.Clarke R. Information technology and dataveillance. Commun. ACM. 1988;31(5):498–512. doi: 10.1145/42411.42413. [DOI] [Google Scholar]
  • 19.Couldry N., Mejias U.A. Data colonialism: rethinking big data's relation to the contemporary subject. Televis. N. Media. 2019;20(4):336–349. doi: 10.1177/1527476418796632. [DOI] [Google Scholar]
  • 20.Sadowski J. When data is capital: datafication, accumulation, and extraction. Big Data & Society. 2019;6(1):1–12. doi: 10.1177/2053951718820549. [DOI] [Google Scholar]
  • 21.De Wolf R., Joye S. Control responsibility: the discursive construction of privacy, teens, and Facebook in flemish newspapers. Int. J. Commun. 2019;13:1–20. [Google Scholar]
  • 22.Nicholas J., Shilton K., Schueller S.M., Gray E.L., Kwasny M.J., Mohr D.C. The role of data type and recipient in individuals' perspectives on sharing passively collected smartphone data for mental health: cross-sectional questionnaire study. JMIR MHealth and UHealth. 2019;7(4) doi: 10.2196/12578. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Shilton K., Martin K.E. Mobile privacy expectations in context. The 41st research Conference on communication, Information and internet policy. TPRC. 2013;41 doi: 10.2139/ssrn.2238707. [DOI] [Google Scholar]
  • 24.Birnhack M., Perry-Hazan L. School surveillance in context: high school students' perspectives on CCTV, privacy, and security. Youth Soc. 2020;52(7):1312–1330. [Google Scholar]
  • 25.Jones K.M.L., Asher A., Goben A., Perry M.R., Salo D., Briney K.A., Robertshaw M.B. “We’re being tracked at all times”: student perspectives of their privacy in relation to learning analytics in higher education. Journal of the Association for Information Science and Technology. 2020;71(9):1044–1059. doi: 10.1002/asi.24358. [DOI] [Google Scholar]
  • 26.Winter J.S., Davidson E. Big data governance of personal health information and challenges to contextual integrity. Inf. Soc. 2019;35(1):36–51. doi: 10.1080/01972243.2018.1542648. [DOI] [Google Scholar]
  • 27.Backman C., Hedenus A. Online privacy in job recruitment processes? Boundary work among cybervetting recruiters. New Technol. Work. Employ. 2019;34(2):157–173. doi: 10.1111/ntwe.12140. [DOI] [Google Scholar]
  • 28.Eames K.T.D., Keeling M.J. Contact tracing and disease control. Proc. Roy. Soc. Lond. B Biol. Sci. 2003;270(1533):2565–2571. doi: 10.1098/rspb.2003.2554. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Troncoso C., Payer M., Hubaux J.-P., Salathé M., Larus J., Bugnion E., Lueks W., Stadler T., Pyrgelis A., Antonioli D., Barman L., Chatel S., Paterson K., Čapkun S., Basin D., Beutel J., Jackson D., Roeschlin M., Leu P., Pereira J. 2020. Decentralized Privacy-Preserving Proximity Tracing.http://arxiv.org/abs/2005.12273 ArXiv:2005.12273 [Cs] [Google Scholar]
  • 30.Ahmed N., Michelin R.A., Xue W., Ruj S., Malaney R., Kanhere S.S., Seneviratne A., Hu W., Janicke H., Jha S.K. A survey of COVID-19 contact tracing apps. IEEE Access. 2020;8:134577–134601. doi: 10.1109/ACCESS.2020.3010226. [DOI] [Google Scholar]
  • 31.Lin J., Liu B., Sadeh N., Hong J.I. 2014. Modeling Users' Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings; pp. 199–212. [Google Scholar]
  • 32.Reeves J.J., Hollandsworth H.M., Torriani F.J., Taplitz R., Abeles S., Tai-Seale M., Millen M., Clay B.J., Longhurst C.A. Rapid response to COVID-19: health informatics support for outbreak management in an academic health system. J. Am. Med. Inf. Assoc. 2020;27(6):853–859. doi: 10.1093/jamia/ocaa037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Ting D.S.W., Carin L., Dzau V., Wong T.Y. Digital technology and COVID-19. Nat. Med. 2020;26(4):459–461. doi: 10.1038/s41591-020-0824-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Emanuel E.J., Persad G., Upshur R., Thome B., Parker M., Glickman A., Zhang C., Boyle C., Smith M., Phillips J.P. Fair allocation of scarce medical resources in the time of covid-19. N. Engl. J. Med. 2020;382(21):2049–2055. doi: 10.1056/NEJMsb2005114. [DOI] [PubMed] [Google Scholar]
  • 35.White D.B., Lo B. A framework for rationing ventilators and critical care beds during the COVID-19 pandemic. J. Am. Med. Assoc. 2020;323(18):1–2. doi: 10.1001/jama.2020.5046. [DOI] [PubMed] [Google Scholar]
  • 36.Liu Y., Wang Z., Ren J., Tian Y., Zhou M., Zhou T., Ye K., Zhao Y., Qiu Y., Li J. A COVID-19 risk assessment decision support system for general practitioners: design and development study. J. Med. Internet Res. 2020;22(6) doi: 10.2196/19786. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Martin K., Nissenbaum H. Measuring privacy: an empirical test using context to expose confounding variables. Colum. Sci. & Tech. L. Rev. 2016;18:176. [Google Scholar]
  • 38.Véliz C. Not the doctor's business: privacy, personal responsibility and data rights in medical settings. Bioethics. 2020;34(7) doi: 10.1111/bioe.12711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Stutzman F., Gross R., Acquisti A. Silent listeners: the evolution of privacy and disclosure on Facebook. Journal of Privacy and Confidentiality. 2013;4(2) doi: 10.29012/jpc.v4i2.620. [DOI] [Google Scholar]
  • 40.Shvartzshnaider Y., Tong S., Wies T., Kift P., Nissenbaum H., Subramanian L., Mittal P. AAAI Publications; 2016. Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms; pp. 209–218. [Google Scholar]
  • 41.Nabity-Grover T., Cheung C.M.K., Thatcher J.B. Inside out and outside in: how the COVID-19 pandemic affects self-disclosure on social media. Int. J. Inf. Manag. 2020;55:1–5. doi: 10.1016/j.ijinfomgt.2020.102188. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Utz C., Becker S., Schnitzler T., Farke F.M., Herbert F., Schaewitz L., Degeling M., Dürmuth M. 2020. Apps against the Spread: Privacy Implications and User Acceptance of COVID-19-Related Smartphone Apps on Three Continents.http://arxiv.org/abs/2010.14245 ArXiv:2010.14245 [Cs] [Google Scholar]
  • 43.Algorithm Watch . 2020. Automated Decision-Making Systems in the COVID-19 Pandemic: A European Perspective (Automating Society Report 2020, P. 34). Algorithm Watch.https://algorithmwatch.org/wp-content/uploads/2020/08/ADM-systems-in-the-Covid-19-pandemic-Report-by-AW-BSt-Sept-2020.pdf [Google Scholar]
  • 44.Rice M.E., Harris G.T. Comparing effect sizes in follow-up studies: ROC area, Cohen's d, and r. Law Hum. Behav. 2005;29(5):615–620. doi: 10.1007/s10979-005-6832-7. [DOI] [PubMed] [Google Scholar]
  • 45.Jin J.-M., Bai P., He W., Wu F., Liu X.-F., Han D.-M., Liu S., Yang J.-K. Gender differences in patients with COVID-19: focus on severity and mortality. Frontiers in Public Health. 2020;8:152. doi: 10.3389/fpubh.2020.00152. [DOI] [PMC free article] [PubMed] [Google Scholar]
