Business & Information Systems Engineering. 2022 Feb 18;64(2):247–264. doi: 10.1007/s12599-021-00734-8

Values and Ethics in Information Systems

A State-of-the-Art Analysis and Avenues for Future Research

Sarah Spiekermann 1, Hanna Krasnova 2, Oliver Hinz 3, Annika Baumann 4, Alexander Benlian 5, Henner Gimpel 6, Irina Heimbach 7, Antonia Köster 2, Alexander Maedche 8, Björn Niehaves 9, Marten Risius 10, Manuel Trenz 11
PMCID: PMC8855649

Motivation

Sarah Spiekermann, Hanna Krasnova and Oliver Hinz

In late 2019, about a dozen BISE chairs from the German-speaking community met around ICIS to discuss the ethical challenges arising from the current construction, deployment, and marketing of Information Systems (IS). It turned out that many were and are concerned about the negative implications of IS while at the same time being convinced that digitization also supports society for the better. The questions at hand are what the BISE community is contributing in terms of solutions to the societal challenges caused by IS, how it should handle politically and socially ambiguous developments (e.g., when teaching students), and what kind of relevant research questions should be addressed. In the aftermath of the initial get-together, an online retreat took place in the late summer of 2020, during which all colleagues presented their current research projects. It became clear that BISE scholars have a very strong interest and track record in this area, and consequently, the plan was born to publish this discussion paper as well as a BISE Special Issue dedicated to the issues of “Technology for Humanity” (Spiekermann-Hoff et al. 2021).

In the following, 12 colleagues interested in this community effort have contributed their reflections and viewpoints on fostering technology in humanity’s interest. Hence, this discussion paper is a collection of individual views and contributions. Starting from the design perspective, Alexander Maedche reminds us that one of the core interests of IS is to improve the well-being of users, and describes how he and his team are using machine learning techniques to support the adaptiveness of IS. He notes, however, that at a higher level of abstraction, well-being is a broad concept. Hence, “when designing IS for well-being it is not straightforward to define the actual design goal and measure specific well-being outcomes.” The question of design goals is the one that many scholars in the field of ethical and social computing may seek to answer from the standpoint of human values. Values are conceptions of the desirable and principles of the ought-to-be that can and should be identified in the early phases of system requirements analysis (as well as business model development). In her contribution, Sarah Spiekermann argues that these values can be the “design goals” sought for humanity. Hence, IS innovators should strive to foster positive values through solutions beyond technical quality (e.g., reliability or security) and the achievement of economic goals. Examples are the values of health, trust, and transparency that some BISE colleagues work on and present here. Friendship, dignity, knowledge, and freedom are other high intrinsic values that are worth protecting. However, they are currently undermined by some instances of IS which instead provide a breeding ground for hate speech and fake news, which fuel envy, limit human autonomy, and expose users to surveillance capitalism.

Building on the idea of value-based system design advanced by Alexander Maedche and Sarah Spiekermann, the following contributions describe the values that the authors deem important in their work and on which they have already published extensively. In particular, health (Alexander Benlian and Henner Gimpel), trust (Annika Baumann and Björn Niehaves), and transparency (Irina Heimbach, Oliver Hinz, and Marten Risius) are discussed. These individual papers define the problem space of each of these values, give hints to relevant literature sources, and outline research questions that they believe are worth tackling.

In the next step, four contributions address the grand value-related challenges of an IT-enabled society: Alexander Benlian and Henner Gimpel outline how the “gig economy” can lead to social challenges and value destruction in digitally transformed work environments. Manuel Trenz presents the challenges surrounding surveillance capitalism. He argues that IS researchers should be at the forefront of guiding and monitoring the development of ethical personal data markets, informing regulatory bodies and facilitating an informed, consent-based release and use of personal data for the social good. Antonia Köster and Marten Risius describe what happens when data is used for voter manipulation and targeting. They further describe the processes that empower online extremism. Finally, Annika Baumann, Irina Heimbach, and Hanna Krasnova end this discussion paper by reminding us that we are seeing an evolutionarily influential transition of human beings into “digitized individuals.” Despite an array of positive implications, this transition also implies changes in individual behavior and perceptions about oneself, others, and the world at large, which can be unintended and potentially detrimental. Beyond personal harm, adversarial micro-changes at an individual level may accumulate and ultimately “collectively contribute to major issues affecting society at large.”

Designing Information Systems for Well-being

Alexander Maedche

“Ensuring healthy lives and promoting well-being for all at all ages” is the third United Nations Sustainable Development Goal. Health is not only defined here by the absence of illness or diseases but also considers physical, psychological, and social factors linked to well-being. Well-being is a complex, multi-dimensional construct and is grounded in different schools of thought: First, the subjective well-being perspective follows a hedonic approach and emphasizes happiness, positive emotions, and the absence of negative emotions, as well as life satisfaction (Diener 1984; Diener et al. 1999; Kahneman et al. 1999). Second, the eudaimonic perspective on well-being draws on Aristotle’s definition of happiness as being in accordance with virtue. Thus, eudaimonic well-being focuses on optimal psychological functioning through experience, development, and having a meaningful life (Ryff and Keyes 1995; Ryan and Deci 2001). Third, these two core perspectives can be complemented by a social dimension of well-being that emphasizes such aspects as social acceptance, contribution, and integration (Keyes 1998).

With the rapid digitalization of all areas of life and work, designing IS for well-being has become increasingly important. However, in this context, IS should be seen as a double-edged sword: they can have positive as well as negative impacts on individual well-being. For example, online games or streaming services aim at triggering positive emotions and user experiences (UX), potentially contributing to hedonic well-being. Furthermore, these services enable new forms of social connectedness that may contribute to social well-being. Modern IS in the workplace follow the same or similar principles. They enable the virtualization of work independent of time and space, personal development, and globally connected employee networks. Thus, one may argue that IS are a key facilitator of well-being in the workplace and at home. However, the underlying business model of digital service providers for private life consumption is often advertisement-based and therefore focuses on maximizing user attention, use, and time on site. Reflecting on this development, scholars have called for attention to be treated as a scarce commodity (Davenport and Beck 2001). Similarly, virtualized workplaces erase previous boundaries between work and private life and enable 24/7 availability of the workforce. Furthermore, multi-tasking and overuse of IS in private and work life can lead to a loss of autonomy and control, to stress, or even to an addiction. IS, then, can have negative impacts on well-being.

Against this background, designing for well-being has received increasing attention in research in the last decade. Beyond accessibility, usability, and UX, well-being oriented design has established itself as an important criterion of a “good design” (Calvo and Peters 2014) in the Human–Computer Interaction (HCI) field. Following the positive psychology paradigm, research streams such as “positive technology” or “positive computing” have encouraged the investigation of technology designs for well-being. In parallel, the commercial market of well-being technology devices in different forms (apps, wearables, etc.) is growing rapidly. Well-being features–e.g., managing time spent, notification blockers–are increasingly added as core capabilities of IS used in the workplace and at home.

Designing IS for well-being can follow two complementary strategies: First, well-being can be increased through behavior changes of users by means of digital intervention designs. Self-tracking can help in understanding current behavior and the corresponding well-being states. On this basis, positive psychology interventions that have proved themselves able to positively influence well-being (Bolier et al. 2013) can be realized in the form of digital interventions. Second, IS can adapt to prevent negative outcomes on well-being during use. User-adaptive IS are a class of IS where the interaction with users is based on monitoring, analyzing, and responding to user activity in real-time and over longer periods of time. The underlying idea is that huge amounts of data about the users themselves, their tasks and contexts, are collected using different types of sensor technology. User activity is captured by sensors, e.g., in the form of electrocardiography (ECG) signals which are collected through wearable technology or eye-movement signals captured by eye-tracking technology. The collected data is then processed using machine learning techniques in order to automatically detect the affective-cognitive states of users; individualized user-centered IS adaptations can be designed on this basis. One example is intelligent notification management through dynamic notification adaptations, which may be triggered based on the analysis of user, task and context data collected by sensors. In the recently completed research project “Kern”, funded by the German Ministry for Work and Social Affairs, we investigated the design of flow-adaptive notification systems for the workplace. In a first step, the flow was predicted based on ECG signals in combination with self-reported subjective data using supervised machine learning. Subsequently, the flow classifier was leveraged to design a flow-adaptive notification system to protect employees from incoming messages during flow states in real time. The field experiment with 30 employees using the system in a (home-)office environment has delivered promising results (see Rissler et al. 2020).
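As a minimal sketch of how such a flow-adaptive notification system could be wired up, the example below trains a supervised classifier on placeholder ECG-derived features labeled with self-reported flow and then gates non-urgent notifications on the predicted flow probability. The feature set, threshold, and notification interface are hypothetical and do not reproduce the Kern project's actual implementation.

```python
# Sketch of a flow-adaptive notification filter (illustrative, not the Kern system).
# Assumes pre-extracted ECG/HRV features per time window and self-reported flow labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: one row per time window (e.g., mean heart rate, RMSSD, SDNN),
# labeled 1 = flow, 0 = no flow, from experience-sampling self-reports.
X = np.random.rand(500, 3)
y = np.random.randint(0, 2, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def should_deliver(notification, current_features, threshold=0.7):
    """Hold back non-urgent notifications while the user is likely in a flow state."""
    p_flow = clf.predict_proba(current_features.reshape(1, -1))[0, 1]
    if notification.get("urgent") or p_flow < threshold:
        return True   # deliver immediately
    return False      # defer until the predicted flow episode ends

print(should_deliver({"urgent": False}, X_test[0]))
```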

To conclude, it is important to emphasize that when designing IS for well-being, it is far from a straightforward task to define the actual design goal and to measure specific well-being outcomes. In light of this, it is first of all important to clearly conceptualize and break down the broad well-being concept into more specific constructs in order to clarify the nomological network. In addition, one has to be clear about whether the goal is to change user behavior or to adapt the IS to the existing behavior. Finally, in order to successfully design IS for well-being, it is necessary to involve all relevant stakeholders, ranging from users, designers and developers, to companies that provide and/or use technology, as well as governance actors in society. With users’ well-being as a central priority, the existing business models of digital service providers need to be challenged and new legal boundaries enforcing specific designs should be considered. Moreover, since the design of user-adaptive IS requires access to privacy-sensitive data that may conflict with other human values, designing for health and well-being needs to become the subject of a broader public debate on societal values and their prioritization. The journey towards designing IS for well-being in work and private spheres has just started–and we still have a long way to go.

Value-based Engineering for Human Well-being

Sarah Spiekermann

An important way to work towards human and social well-being in system design is to construct systems in a more ethical way. Ethical system design can draw its inspiration from the Aristotelian approach to ethics. This classic perspective emphasizes the importance of human values and virtues worth striving for in order to reach “eudaimonia”, which might be described as a state of self-actualization or well-being (see the contribution of Alexander Maedche, “Designing Information Systems for Well-being,” above). In his Nicomachean Ethics, Aristotle (2000) focused on human virtues he deemed important, such as courage, kindness, justice, and many others–all values of human conduct that are undermined by current IS. Value-based Engineering aims to avoid these adverse effects on virtues. It is about anticipating, assessing, and formulating system requirements that go beyond efficiency, profit and speed, as well as beyond those non-functional value requirements that have already earned their place in traditional system design, such as usability, dependability or security.

In the past five years, values and virtues have been put forward in a myriad of listings by companies and global institutions (Jobin et al. 2019), as well as by legislators. An example is the ALTAI list of the EU Commission’s High Level Expert Group on artificial intelligence (HLEG of the EU Commission 2020). Values called for in such listings include transparency, fairness, non-maleficence, responsibility, privacy, human autonomy, trustworthiness, sustainability, dignity, and solidarity. However, using such preconfigured value listings to build an ethical system is not sufficient. In fact, a lot of valid criticism has been voiced concerning the straightforward application of these lists in practice. This is because ethics is essentially contextual, and there is a risk of applying the logic of such lists to problems that do not fit them. More importantly, value listings do not tell engineers how to effectively embed and respect values in the technical system design. “The truly difficult part of ethics—actually translating normative theories, concepts and values into good practices … is kicked down the road like the proverbial can. Developers are left to translate principles and specify essentially contested concepts as they see fit, without a clear roadmap for unified implementation” (Mittelstadt 2019, p. 503).

Some scholars in the field called “machine ethics” (Anderson and Anderson 2011) have taken up this challenge and made attempts to bring ethics closer to system-level design by developing ethical algorithms. These algorithms typically follow a simple weighing of harmful and beneficial decision consequences (an approach called Utilitarianism), or they follow a duty ethical approach where specific human principles are optimized (e.g., fairness). The work on ethical algorithms culminated in MIT’s “Moral Machine Experiment” to inform the evasive actions of autonomous cars (Awad et al. 2018) with the help of “trolley economics.” A shortfall of Machine Ethics (including the Moral Machine Experiment) is that the vast majority of its proposed algorithms are based only on utilitarianism or on duty ethics (Tolmeijer et al. 2020). In contrast, Virtue Ethics, which is one of the most timely and influential streams of moral philosophy, seems to be completely ignored when ethical algorithms are conceived (Tolmeijer et al. 2020). This is a pity considering its recognized importance for technology design (Vallor 2016). Virtue ethics aims to foster the value of human conduct. Its goal is to strengthen humans. Instead of aspiring to maximum algorithmic autonomy, virtue ethical algorithms would probably follow a different design paradigm, one that relies more on human interaction and that strives to improve the human decision maker instead of taking decision autonomy away from him or her. For this reason, it is regrettable that so little research is devoted to this form of potential Machine Ethics.
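To make the contrast between these two algorithmic styles concrete, the toy sketch below compares a utilitarian weighing of consequences with a duty-ethical, constraint-based rule. The options, weights, and the encoded duty are invented purely for illustration and do not reproduce any published machine-ethics system.

```python
# Toy illustration of the two dominant machine-ethics styles discussed above:
# a utilitarian weighing of consequences versus a duty-ethical (constraint-based)
# rule. All options, weights, and the "duty" flag are hypothetical.

options = [
    {"name": "A", "benefit": 8.0, "harm": 3.0, "violates_duty": False},
    {"name": "B", "benefit": 9.0, "harm": 2.0, "violates_duty": True},  # e.g., unfair to one group
]

def utilitarian_choice(options):
    # Pick the option with the greatest net expected benefit, regardless of duties.
    return max(options, key=lambda o: o["benefit"] - o["harm"])

def duty_based_choice(options):
    # First filter out options that violate the encoded duty (e.g., a fairness
    # principle); only then optimize among the remaining, permitted options.
    permitted = [o for o in options if not o["violates_duty"]] or options
    return max(permitted, key=lambda o: o["benefit"] - o["harm"])

print(utilitarian_choice(options)["name"])  # "B": highest net benefit wins
print(duty_based_choice(options)["name"])   # "A": the duty constraint excludes B
```

Note that neither style captures the virtue-ethical perspective discussed here, which would shift the design goal from optimizing machine decisions towards strengthening the human decision maker.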

Machine Ethics and the intense public debate around MIT’s Moral Machine Experiment have also taken attention away from what I would argue are much more relevant challenges for a more ethical IS world. These challenges include, among others, system-of-system control issues, data quality issues, sustainability issues, human control issues, as well as the ignorance of a system’s long-term second-order value effects on stakeholders. Some of these grander challenges of ethical system design are anticipated by scholars working in value-sensitive design (Friedman and Kahn 2003) or participatory design (Frauenberger et al. 2015); however, the problem is that these works often get bogged down in the identification of very specific problems for which their authors find very specific technical solutions, but lack a generally applicable methodology to address value challenges across contexts.

Here, I believe, an important research opportunity opens up for the IS community, which has been historically strong in method design and modeling. One might say that a proper system development life cycle (SDLC) model is missing for ethical and value-based engineering. The only rigorous approach currently available to fill this gap is the IEEE 7000™ standard (IEEE 2021), which is at the heart of what has been called Value-based Engineering. The standard provides engineers with a clear system design and development framework, or in other words, an ethical SDLC (Spiekermann 2021). It uses various ethical theories to elicit relevant values and subsequently prioritizes these with the help of corporate or industry value listings. It then derives a new artifact called “ethical value requirement” (EVR), which is translated into system requirements with the help of risk assessment.
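As a purely illustrative sketch of the traceability chain just described (value → ethical value requirement → system requirement), the outline below shows how such artifacts could be linked in supporting tooling. The field names and example content are simplified assumptions and are not taken from the IEEE 7000™ standard itself.

```python
# Illustrative-only data structures for tracing a prioritized value to an
# ethical value requirement (EVR) and on to concrete system requirements.
# Names and fields are simplified and hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemRequirement:
    identifier: str
    description: str

@dataclass
class EthicalValueRequirement:
    value: str                       # elicited and prioritized value, e.g., "privacy"
    risk: str                        # the risk to that value which the EVR mitigates
    requirements: List[SystemRequirement] = field(default_factory=list)

evr = EthicalValueRequirement(
    value="privacy",
    risk="location traces could be re-identified by third parties",
    requirements=[
        SystemRequirement("SR-01", "Store location data only in aggregated form"),
        SystemRequirement("SR-02", "Delete raw location traces after 24 hours"),
    ],
)
print(evr.value, "->", [r.identifier for r in evr.requirements])
```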

Whether Value-based Engineering with IEEE 7000™ will be taken up on a large scale remains to be seen. Early trials, however, show that if companies really want to build and operate their IS in an ethical way they will need to consider their “value proposition,” which means not only changing the technology they build but also their business models (see the contribution of Alexander Maedche on “Designing Information Systems for Well-being” above). True value creation is not a matter of technology design alone but also of strategy, corporate culture, and companies’ willingness to forgo some profit for the sake of community, integrity, and accountability.

Selected Values of Outstanding Importance for IS Research

Health and Well-being

Henner Gimpel and Alexander Benlian

Health and well-being are intrinsically and instrumentally valuable (Frankena 1973; Ryan et al. 2008) and are closely intertwined. The World Health Organization suggests that “health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity” (WHO 1948, preamble). Philosophers have criticized this definition for being too all-encompassing (e.g., Callahan 1973). Nevertheless, health is not only statistical normality but also a normative ideal (Nordenfelt 1993). It is a prerequisite for flourishing and living a fulfilling life. For this reason, it is no surprise that “good health and well-being” is one of the United Nations’ Sustainable Development Goals.

There is ample evidence for IS both promoting and weakening health and well-being. Let us consider the dark side first: a side effect of digitalization is the impairment of psychological and physical health (Gimpel and Schmied 2019). Interruptions by information and communication technologies (ICTs), techno-overload, blurred boundaries between the workplace and the private domain and other digital stressors often result in exhaustion, cognitive and emotional irritation, and physical illness (Chen and Karahanna 2018; Benlian 2020; Califf et al. 2020). Pirkkalainen and Salo (2016) reviewed two decades of research on this dark side of ICT use. Among the four phenomena they identified, three impaired health and well-being: technostress, IT addiction, and IT anxiety. These phenomena of ICT use may have detrimental influences on individuals, for example, in the form of loneliness (Matook et al. 2015), burnout (Srivastava et al. 2015), or diseases of the musculoskeletal or cardiovascular system (Gimpel et al. 2019).

On the bright side, ICTs also seem to promote certain aspects of health and well-being. Healthcare is a shining example of how digitalization can achieve higher efficiency and effectiveness. Examples at the individual level are the support of patient self-management by m-health apps (Gimpel et al. 2021) and health education and disease prevention (Kirchhof et al. 2018). The interaction of patients and providers via patient portals improves health outcomes (Bao et al. 2020). At the organizational level, effective use of ICT affords improved efficiency and effectiveness in healthcare processes (Burton-Jones and Volkoff 2017; Gimpel and Schröder 2021). At the societal level, ICT supports public health as, for example, witnessed in the COVID-19 pandemic, where ICT aided the containment of infections via physical distancing, working from home, and contact tracing (Adam et al. 2020; Trang et al. 2020), as well as the analysis, modeling, and prediction of the pandemic, and the management of vaccination campaigns (Klein et al. 2021). Chen et al. (2019) conducted a bibliometric study of health IS research from 1990 to 2017. They identified major research themes, such as “Clinical Health IS,” “Administrative Health IS,” and “Consumer Health IS,” that are covered in many research papers. Beyond the realm of Health IS, the premise remains that individual assistance systems and other ICTs can support users’ eudaimonic well-being by helping them in their pursuit of virtues and excellences (e.g., via provision of product information and context information for ethical consumer decisions), by supporting continuous reflection on goals and actions (e.g., via self-tracking of behavior and goal achievement), by encouraging self-affirming attitudes and self-knowledge (e.g., via online self-help communities for patients with rare diseases), and by promoting the exercise of reason and free will (e.g., via provision of health information to allow for a more informed and balanced discussion with healthcare professionals). However, for each of these potential positive effects, there are counterexamples. Thus, to what extent this claim is true certainly deserves more research attention (see also the contribution of Annika Baumann, Irina Heimbach and Hanna Krasnova on “Digitization of the Individual” below).

While we have many case examples of the beneficial effects of ICT on health and well-being in specific contexts, we lack a unifying and overarching theoretical perspective on these effects. Thus, we should continue behavioral and design-oriented work on situated observations or instantiations and substantive theories. Simultaneously, we should work towards more abstract mid-range or potentially even grand theories of how ICT may promote health and well-being. Regarding the dark side of digitalization, more research is needed to identify and conceptualize the risks and side effects of digitalization. Furthermore, we should leverage our competencies in design-oriented work to envision preventive measures that might mitigate or nullify these adverse effects (see the contributions of Alexander Maedche and Sarah Spiekermann above).

Trust in Automation

Annika Baumann and Björn Niehaves

In recent decades, our lives have undergone a tremendous transformation, with automation increasingly permeating professional and private contexts. At the heart of automation are algorithms that represent “a sequence of unambiguous instructions for solving a problem, that is, for obtaining a required output for any legitimate input in a finite amount of time” (Levitin 2003, p. 3). Algorithms provide the basis for machine learning and artificial intelligence, whose underlying instructions are either learned from input data or explicitly programmed. Algorithms work across multiple areas of our lives, ranging from personalized feeds on social media (Lazer 2015) to potentially riding in autonomous cars in the near future (Choi and Ji 2015). With users increasingly relying on automation in private and professional settings, trust constitutes a critical component (Glikson and Woolley 2020), as it is one of the primary drivers of technology adoption and of an individual’s willingness to autonomously follow suggested actions (Benbasat and Wang 2005; McKnight et al. 2011; Freude et al. 2019).

Two conceptualizations of trust are currently prevalent in the context of user interaction with technological artifacts. The first conceptualization aligns trust with the more human-like trust dimensions such as integrity, competence, and benevolence (Benbasat and Wang 2005). A second perspective incorporates technological particularities using more system-like dimensions such as reliability, functionality, and helpfulness (McKnight et al. 2011). Importantly, how trust shapes the boundaries of human-automation interaction seems to depend on several factors, including characteristics of the human, the underlying automation itself, and the surrounding environment where the interaction takes place (Schaefer et al. 2016). Thus, the socially constructed meaning of terms associated with automation influences individuals’ expectations of technological characteristics, potentially resulting in cognitive biases and erroneous assumptions regarding the system (Felmingham et al. 2021). Consequently, vital pre-conditions for a successful collaboration between humans and technology, like trust, are already shaped before an interaction occurs. Nevertheless, since trust has a dynamic element (McKnight et al. 1998), it changes with the experiences users make when interacting with automation. Overall, trust between humans and technology appears to be a multi-faceted, time-sensitive phenomenon that needs further investigation, with specific consideration of the nature of its initial development and its course over time.

State-of-the-art research discusses both negative and positive implications of automation. On the bright side, research highlights the economic potential of automation and its associated chances of success (Pasquale 2015). For example, it has been shown that algorithms can provide more accurate predictions than humans in various contexts (Cheng et al. 2016; Kleinberg et al. 2017). Thus, automation can offer a fertile ground for economic gains across industries. Furthermore, the algorithm-enabled large-scale analysis of data seems to support the tackling of global challenges such as climate change (Rolnick et al. 2019). At the same time, the dark side of automation and algorithmic decision-making has been increasingly in the spotlight of scholarly attention (O’Neil 2016; Eubanks 2018). For example, automation has been shown to create biases towards specific entities (e.g., Lambrecht and Tucker 2019; see also the contribution of Irina Heimbach, Oliver Hinz and Marten Risius on “Algorithmic Bias, Fairness and Transparency” below), and to facilitate extremist views through the algorithm-induced creation of echo chambers on social media platforms (e.g., Kitchens et al. 2020; see also the contribution of Antonia Köster and Marten Risius on “Online Misinformation and Extremism” below).

While research into how individuals, organizations, and society interact with automation is gaining traction, several research gaps remain. As algorithmic automation increasingly establishes itself as a new norm, future studies need to shed more light on the underlying mechanisms that are at play when users are interacting with it. As user perceptions play out between the poles of algorithm aversion (Dietvorst et al. 2015; Jussupow et al. 2020) and algorithm appreciation (Logg et al. 2019), obtaining a more in-depth understanding of the factors influencing user attitudes towards algorithms appears especially critical. For example, just like their human counterparts, algorithms are imperfect; that is, they may and do err, as no system reaches a level of complete perfection (Martin 2019). These mistakes, however, may severely diminish trust towards automation, leading to changes in individual attitudes and perceptions in the short and long term (e.g., Dietvorst et al. 2015; Prahl and Van Swol 2017). Hence, further investigation into how trust can be repaired after such instances of failure constitutes another promising avenue for future research.

Algorithmic Bias, Fairness and Transparency

Irina Heimbach, Oliver Hinz and Marten Risius

Against the background that artificial intelligence-based predictions are often said to be faster, cheaper, more reliable, and more scalable than predictions made by humans (Mei et al. 2020), artificial intelligence technologies have found their way into businesses in virtually all industries (McAfee et al. 2012), influencing and transforming many of the societal decisions that we make today (Cowgill 2018). However, there is also the risk that decision-making supported or automated by algorithms may unintentionally and unexpectedly shape societal outcomes for the worse (see Rahwan et al. (2019) for a discussion). The issues of bias, fairness, and transparency relate to the core of IS research.

Such biases can be caused by four problems: First, the data used for training can be biased. Second, the model of the algorithm itself may be a cause of discrimination. Third, the form in which the algorithm presents information can lead to unfair decisions. Finally, the user interacting with the system may arrive at a biased or misinformed decision. Policymakers try to address these potential problems by prescribing high degrees of transparency and explainability.
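To illustrate the first of these sources, the synthetic sketch below shows how a model trained on historically biased labels reproduces that bias in its own decisions. The scenario, variable names, and the deliberately exaggerated data are hypothetical.

```python
# Toy illustration of biased training data: historical decisions favored group 0
# regardless of the attribute that should matter, and a model trained on those
# labels reproduces the disparity. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                 # 0 / 1 protected-group membership
skill = rng.normal(0, 1, n)                   # the attribute that *should* drive decisions
# Historical labels favored group 0 independently of skill:
label = ((skill + 1.5 * (group == 0)) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted positive rate for group {g}: {rate:.2f}")
```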

Researchers and practitioners point to an increasing amount of evidence that indicates how the broad use of algorithms can lead to an inferior treatment of already disadvantaged parts of society, thereby contributing to even more societal tensions, a phenomenon frequently referred to as algorithmic discrimination (Sweeney 2013; Ensign et al. 2017; Lambrecht and Tucker 2019; Obermeyer et al. 2019). Reported examples are autonomous recruitment systems with a gender bias (Mann and O’Neil 2016) or jurisdictional decision support systems suffering from a racial bias (Polonski 2018). Biased or discriminatory decision-making resulting from defective algorithms or data is a prototypical example for research following an imperative technical approach (Sarker et al. 2019). This line of research considers technology as the major antecedent to social outcomes and human decision-making. At the same time, IS researchers should acknowledge that biased data is also the result of real-world discrimination. It reflects how humans design organizational processes. Biases in algorithms may (unknowingly) be introduced through the developers’ background and upbringing. This view conceptualizes bias and fairness issues as a result of the interplay between socio-technical components and, hence, is prototypical for IS Research (Sarker et al. 2019).

Regulators and researchers have identified transparency as a key to avoiding bias and ensuring fair algorithmic decision-making. However, even if we were able to openly obtain access to relevant algorithms and data, there would still be natural barriers to transparency that need to be overcome. First, there is the issue of how to even assess the degree to which algorithm-based decisions are biased. Related to this is the question of what corrective actions to undertake (e.g., which observations to exclude or include) to rectify the biased data. And lastly, we need to find ways to disentangle these black-box algorithms and make them explainable or at least interpretable (Kim and Routledge 2018). By overcoming these transparency issues, IS researchers can contribute to a better society and help resolve issues of bias and discrimination.
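As one concrete illustration of how the degree of bias in algorithm-based decisions could be assessed, the sketch below computes two common group-fairness indicators: the demographic-parity difference and the disparate-impact ratio. The decision data and the 0.8 rule of thumb are illustrative assumptions; real audits draw on a much richer set of metrics and on domain knowledge.

```python
# Minimal sketch of quantifying bias in binary decisions across two groups.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = favorable outcome
group     = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = decisions[group == "a"].mean()
rate_b = decisions[group == "b"].mean()

parity_difference = rate_a - rate_b    # 0 means equal selection rates
disparate_impact  = rate_b / rate_a    # values below ~0.8 are often taken as a warning sign

print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}")
print(f"demographic parity difference: {parity_difference:.2f}")
print(f"disparate impact ratio: {disparate_impact:.2f}")
```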

The interplay-oriented perspective between socio-technical components should also consider the societal implications of the increased exposure to algorithms (Sarker et al. 2019). As algorithms become increasingly ubiquitous, research needs to consider the organizational implications of personally distorted attitudes towards algorithms, such as automation bias, algorithm aversion, and the fear of technology paternalism. By addressing these issues, IS scholars can offer a substantial contribution to the betterment of society (Majchrzak and Markus 2012).

The current state of research on algorithmic transparency, fairness, and bias can, in general, be characterized by two streams of work. The first stream embraces discussion papers of a prescriptive and conceptual nature (e.g., Burrell 2016; Carlson 2017; Hosseini et al. 2018; Felzmann et al. 2019) with a special focus on developing fair, transparent, and explainable/interpretable algorithms (Rudin 2019; Rai 2020). The second stream consists of empirical studies that aim to go beyond the anecdotal evidence of algorithmic bias and discrimination (Kleinberg et al. 2017; Lambrecht and Tucker 2019) and investigate the general role of algorithms and data characteristics in trust building and individuals’ attitudes towards algorithmic management (Kizilcec 2016; Lee 2018; see also the contribution by Annika Baumann and Björn Niehaves on “Trust in Automation” above). A challenge is that previous research is scattered across various disciplines and tends to focus on specific aspects of the problem while neglecting the more holistic IS view that algorithms are part of a socio-technical system which connects tasks, humans, technology, and various levels of decision-making contexts.

IS research as a cross-sectional discipline with a long tradition of looking at IT as a sociotechnical system has a great opportunity–and the capability–to make substantial contributions to future research. First, IS theorists paired with researchers from other disciplines can elaborate on a unified and concise understanding and measurement of the concepts of algorithmic transparency and fairness. Second, IS engineers can develop system and data requirements as well as validation tests for fair and transparent algorithms. Third, behavioral IS researchers can empirically test how algorithmic characteristics (perceived transparency and fairness) affect decision-making behavior, or how they reveal human and organization-related rather than technology-centric issues that lead to potentially undesired outcomes like bias and discrimination.

Selected Challenges Addressable by IS Research

Digital Work, Digital Labor Markets, and Gig Economy

Alexander Benlian and Henner Gimpel

Digital, platform-mediated labor markets (e.g., Uber, Airbnb, Amazon Mechanical Turk) have permeated many economic sectors by now, provoking debate about the implications of this form of “gig” work organization. Most accounts emphasize the problematic effects on gig workers and ask questions about algorithmically controlled labor processes and the increasing precarity in such digital labor markets.

Are digital labor markets akin to digital cages? Scholars following such a starkly dystopian perspective ominously ask what happens when the boss is an algorithm that uses panopticon-like powers to continuously monitor and sanction workers (Curchod et al. 2020; Möhlmann et al. 2021). Algorithms encode managerial decisions and workplace rules into the digital tools that workers must use to complete their tasks. In this way, workers’ autonomy to resist, elude, or challenge the rules that platform providers establish as conditions of participation is severely constrained. In addition, platforms individualize and alienate their labor force, depriving workers of interpersonal contact spaces that have traditionally made it possible for workers to challenge managerial authority (Kellogg et al. 2020).

Are digital labor markets catalysts of precarity? According to this view, platforms are a manifestation of a much broader trend that has enabled firms to externalize risks which they had previously been compelled to shoulder. The effect is to deprive workers of long-standing social protections such as a minimum wage, safety and health regulation, retirement income, health insurance, and workers’ compensation (van Doorn 2017). The issue, in this view, is thus a broad socioeconomic shift that dismantles many of the labor market shelters which workers had previously enjoyed, leaving them in an increasingly vulnerable position (Schor et al. 2020).

While previous research has looked into several critical aspects of platform labor markets affecting gig workers, such as legitimacy, fairness, privacy, and marginalization (e.g., Deng et al. 2016; Wiener et al. 2020; Möhlmann et al. 2021), we believe that there are several opportunities for further research:

First, it would be worthwhile to home in on the values and ethics inscribed into algorithms that select, match, guide, and control workers in digital labor markets (Saunders et al. 2020; see also the contribution of Irina Heimbach, Oliver Hinz, and Marten Risius on “Algorithmic Bias, Fairness and Transparency” above). The encroaching influence of machine learning algorithms–which can embed and reproduce inherent biases and threaten to entrench the past’s societal problems rather than redress them (Rosenblatt 2018)–is particularly evident in dynamic pricing and matchmaking between customers and workers (algorithmic matching), as well as in screening workers and guiding their behavior (algorithmic control) (Möhlmann et al. 2021; Wiener et al. 2022). The values of privacy, accountability, fairness, and freedom of access are increasingly coming to the fore of discussions around digital labor markets (Deng et al. 2016) and big digital platforms more generally (van der Aalst et al. 2019).

Second, there is an abundance of research on platform operators and service providers, yet a dearth of research on the developers who create the matching and control algorithms at the core of the platform’s operations and scalability (Vallas and Schor 2020). Developers, who are often independent contractors themselves, are exposed to severe tensions between the platform operator’s goals and the gig workers’ interests, and may revolt when fundamental labor rights are violated. How do developers relate to algorithmic design’s potentially manipulative and invasive consequences for the workers’ livelihood and cope with value conflicts on a daily basis? On a broader note, we know very little about the process by which algorithms come into being, are negotiated between different parties and updated over time. What purposes and values drive the design and operation of digital labor platforms?

Third, from the perspective of gig workers, an interesting avenue for future research is an inquiry into practices of and prospects for collective action: The various forms of resistance and “algoactivistic practices” to circumvent or subvert algorithms are particularly prevalent in digital labor markets, yet still largely under-investigated (Kellogg et al. 2020). How and why do workers comply with or deviate from algorithmic management on platforms? Can workers join forces with the customers they serve, altering the “geometry of power” (Rahman and Valentine 2021) in this triadic relationship between platform providers, customers, and workers?

Personal Data Markets and Surveillance Capitalism

Manuel Trenz

With personal data dubbed the oil of the digital economy and a key to competitive advantage, it is no surprise that there is a market for individuals’ data. In fact, there has always been one, with credit reporting agencies and consumer data brokers collecting and selling data on individuals for decades. However, the scope of available, collected, and aggregated data has expanded significantly through the rise of digital platforms that now track every action individuals conduct online and even combine offline and online data sources.

As a consequence, a large number of firms have emerged that collect, aggregate, analyze, package, and sell data about individuals. This, in turn, has led to more refined targeting options, with, for instance, advertisers on Facebook being able to select their target audiences based on demographics, education, financial details, life events, parental and relational status, interests, specific behaviors, etc. (Facebook, Inc. 2021). While Facebook and Google are the most visible examples of such companies, many others operate in the shadows and beyond public attention (Schneier 2015; Melendez and Pasternack 2019). For example, Acxiom Corporation offers data on more than 700 million individuals worldwide by merging data elements from hundreds of sources (Acxiom 2018). These data include demographics, political views, economic situation, health, relationship status, activities, interests, consumption preferences, as well as psychometric characteristics. While firms benefit from improved risk prediction, targeting, or innovation opportunities, these personal data markets come with significant problems for individuals, social systems, politics, and economics (Spiekermann et al. 2015b). The most obvious issue is the question of information privacy, as individuals lose control over their data. Beyond that, detailed profiles give rise to discrimination based on race, gender, or income. Moreover, they may also simply result in wrong inferences, as these profiles can be erroneous, drawn from merged, incomplete or faulty datasets (see also the contribution of Irina Heimbach, Oliver Hinz, and Marten Risius on “Algorithmic Bias, Fairness and Transparency” above). This can lead to situations where individuals are denied loans, jobs, memberships, or even bail without having access to the database against which they are judged, and are left with few options to influence or delete the data and to contest the inferences collected about them. As the data in today's personal data markets is usually collected, aggregated, analyzed, and sold without individuals’ knowledge or truly informed consent, those markets have aroused the interest of regulators. Moving beyond the individual level and considering the economy as a whole, regulators are worried about the consolidation and aggregation of market power towards a few large platforms (Parra-Arnau 2018) that can exercise manipulative powers. Considering the key role of personal data in today's economy, exclusive access to these data may lead to excessive market dominance and hamper competition.

Touching upon topics such as market design and digital platforms (e.g., Bimpikis et al. 2019), (inter-organizational) data-driven innovation (e.g., Kastl et al. 2018; van den Broek and van Veenstra 2018), and information privacy (e.g., Karwatzki et al. 2017), personal data markets are a phenomenon at the center of interest of IS research. Because personal data markets are highly intrusive into the intimate lives of individuals, research on this topic requires a perspective that extends well beyond technological and economic issues.

Prior studies on personal data markets can be structured along three major research streams. The first stream has investigated the development and functioning of existing personal data markets. This includes studies that uncover and classify personal data markets and their business models (Agogo 2020; Fruhwirth et al. 2020). We also have initial insights into the role of technological implementations to collect data across platforms (Krämer et al. 2019) and into strategic choices made by the data market providers (Zhang et al. 2019). A second stream of research is concerned with the valuation of personal data (Gkatzelis et al. 2015; Spiekermann and Korunovska 2017) and approaches aimed at allowing people to participate in the economic value of their information (Wessels et al. 2019). Prior studies investigating digital self-disclosure have often employed a privacy calculus perspective, which suggests that users weigh the perceived benefits against the perceived risks of sharing data as a basis for their decision-making (Dinev et al. 2015; Abramova et al. 2017). However, the rationale of benefit or value in this context is usually limited to the value that individual users gain from their consumption or participation but ignores that the economic value derived from personal data extends far beyond this. While users provide or generate the data that enables personal data markets to create value, they often play no role in determining how these data are used, nor do they participate financially. If individuals were to actively participate in those markets, they appear to have preferences for data markets that preserve their anonymity (Schomakers et al. 2020). Such participatory personal data markets could then make use of mechanisms that have already been developed, through which individuals may decide which data to conceal at what price (Parra-Arnau 2018). The third stream of research pertains to studies on the ethical, legal, and societal impacts of personal data markets, which have mostly centered around the phenomenon of privacy itself (Spiekermann et al. 2015a). From a regulatory perspective, studies have investigated the implications of existing policies such as the GDPR on the design of IS (Jakobi et al. 2020) and formulated the need for different policy interventions to protect, for instance, the weakest groups in our society (Montgomery 2015).

Given the significant economic and societal impact of personal data markets and the attention they have received from regulatory bodies, media, and companies participating in the digital economy, research on personal data markets is comparatively scarce. Beyond an expansion of the research streams described above, future research should investigate alternative approaches to personal data markets with the goal of making them less intrusive. From an economic perspective, this includes considering competitive strategies and business models for participatory, responsible, user-centered personal data markets to make them a sustainable alternative to current models. From a technological and regulatory perspective, we still lack effective solutions that empower individuals to take control of what data traces they leave behind, what data about them is being stored, what inferences are drawn from it, and how others use it. From a societal and ethical perspective, the implications of existing personal data markets seem to be predominantly negative. However, there also seems to be significant social value in personal data for research, crisis management, health management, and innovation that could be unlocked by advancing approaches to how behavioral, perceptual, or medical data can be shared ethically and responsibly.

The unique combination of technological and economic expertise should allow IS researchers to be at the forefront of guiding and monitoring the development of ethical personal data markets, informing regulatory bodies, and facilitating an informed, consent-based release and use of personal data for the social good.

Online Misinformation and Extremism

Antonia Köster and Marten Risius

Social media platforms such as Facebook, Twitter, and YouTube have transformed how information is produced, consumed, and disseminated. While empowering users with the opportunity to participate and with access to knowledge, news and opinions of others, this transformation has also been accompanied by a rise in misinformation campaigns (Lazer et al. 2018), which are frequently exploited by extremists to further their malicious agenda (Winter et al. 2020). Indeed, as any user is potentially a content creator, social media platforms have developed into a breeding ground for misinformation (Kim and Dennis 2019).

Over the past few years, the spread of misinformation has led to considerable negative individual, economic and societal implications. For example, the sharing of fake news about the COVID-19 pandemic has escalated, spreading misinformation on public health matters (Laato et al. 2020) and directly impacting individual well-being (Brennen and Nielsen 2020; Apuke and Omar 2021). Furthermore, fake news in combination with social media bots and micro-targeted political advertisements played a decisive role in the outcome of political events, such as the UK referendum on EU membership and the US presidential election in 2016 (Allcott and Gentzkow 2017; Liberini et al. 2020). Beyond politics, fake news can have an impact on the economy. Fake stories may attract the attention of financial market investors and thereby lead to stock market reactions (Vosoughi et al. 2018; Clarke et al. 2020). Hence, misinformation that is created and disseminated with the help of digital technologies has grave implications in the modern age.

Despite the pervasiveness of online misinformation and, in particular, fake news, we currently lack an understanding of the enabling characteristics of technology and its unique role in these processes. Some research points out that fake news is generated not only by users but also by technology (Calvillo et al. 2021; Bringula et al. 2021). For instance, artificial intelligence can be used to create comments on news articles or even generate the articles themselves (Zellers et al. 2019). An emerging technological development that is gaining attention among researchers studying misinformation is “deepfakes” (Westerlund 2019; Liv and Greenbaum 2020). Deepfake is a portmanteau of “deep learning” and “fake” and describes hyper-realistic video manipulation based on neural networks (Westerlund 2019). These deep learning algorithms enable facial mapping (i.e., swapping an individual’s face in a video with another person’s), and they have been found to be powerful in creating false memories (Liv and Greenbaum 2020). At the same time, technology is not only used to create misinformation but also to detect it. Tech companies rely on machine learning or artificial intelligence to automatically detect fake news online (Woodford 2018; Newman 2020). However, users respond differently to these fact-checking services. While some perceive such services as useful and respond mindfully to identified fake news, other users do not trust these detection algorithms (Brandtzaeg et al. 2018). To further complicate the detection issue, research points towards an “implied truth effect”. This describes the phenomenon that flagging some articles as fake news makes users automatically assume that other, non-flagged articles are truthful–even if they have not yet been fact-checked (Pennycook et al. 2020). In this context, further research is needed to address the challenges of technologically enabled misinformation detection and creation (e.g., deepfake videos) (Shu et al. 2020).
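As a minimal sketch of the kind of supervised text classifier that such automated detection services might build on, the example below trains a TF-IDF-based model on a tiny, invented set of labeled headlines. Production systems combine far larger corpora with many additional signals (source reputation, propagation patterns, user reports), so this is only an illustration of the basic principle.

```python
# Minimal fake-news text classifier: TF-IDF features + logistic regression.
# The headlines and labels (1 = misinformation, 0 = legitimate) are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Miracle cure eliminates virus overnight, doctors stunned",
    "Health ministry publishes updated vaccination schedule",
    "Secret lab document proves election was rigged",
    "Central bank leaves interest rates unchanged at 2 percent",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score unseen headlines; in practice the output would feed human fact-checkers
# or flagging interfaces rather than fully automated takedowns.
print(model.predict(["Leaked memo reveals shocking vaccine cover-up"]))
print(model.predict_proba(["Parliament debates new data protection bill"]))
```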

The adverse effects of online misinformation have prompted researchers to investigate the interaction between humans and technology regarding what may explain higher susceptibility to fake news (e.g., Bryanov and Vziatysheva 2021; Sindermann et al. 2020). Summarizing the findings of scholarly articles on the topic, Bryanov and Vziatysheva (2021) identify three broad categories of determinants: message characteristics, individual factors, and accuracy-promoting interventions. Regarding message characteristics, several researchers have examined the importance of belief consistency and confirmation bias (Kim and Dennis 2019; Sindermann et al. 2020; Calvillo et al. 2021; Bringula et al. 2021), referring to the tendency of people to be more susceptible to fake news that aligns with pre-existing values, beliefs, or political views. Second, individual factors, including cognitive modes, predispositions, and differences in news and information literacy, may determine individual susceptibility to fake news. For example, lower trust in science, media and government (Roozenbeek et al. 2020), specific personality traits (e.g., lower levels of agreeableness, conscientiousness, open-mindedness, and higher levels of extraversion), as well as certain media consumption characteristics (e.g., the number of Instagram visits and more hours of news consumption) have been linked to increased susceptibility to misinformation (Calvillo et al. 2021; Bringula et al. 2021). Additionally, emotional factors, such as higher levels of emotionality, have been linked to susceptibility to fake news (Martel et al. 2020). Finally, accuracy-promoting interventions, such as specific warnings or nudges that prompt individuals to reflect on the truthfulness of information, may reduce the perceived credibility of fake news. The problem of misinformation is further exacerbated by the social media platforms’ algorithmic filtering, which exposes users to news and content based on their interests and past behaviors, thereby facilitating repeated exposure to more misinformation (Kitchens et al. 2020). Further research that explores the interaction between the human or social factors and the technological aspects of fake news will help to better understand individuals’ susceptibility to online misinformation.

Beyond being harmful by its very nature, online misinformation also supports online radicalization and extremism, as prominently evidenced by the recent attacks on the US Capitol (Kanno-Youngs and Sanger 2021). Online extremism has become a pressing issue on social media platforms as highlighted, for example, by FBI Director Christopher Wray stating that “social media has become, in many ways, the key amplifier to domestic violent extremism” (Volz and Levy 2021, p. 1). Digital technologies have enabled this new form of extremism that presents various unique challenges; these include the rapidly changing technological landscape (Fisher et al. 2019; Winter et al. 2020) as well as the extremists’ abilities to leverage these new technologies for their malicious purposes (Conway 2017) and to respond to counter-extremist measures (e.g., platform migration) (Conway and Macdonald 2019; Nuraniyah 2019).

Currently, platform providers and third parties (e.g., government authorities, NGOs) struggle to develop and implement effective measures to combat misinformation and online extremism (e.g., Sharma et al. 2019). This is partly because the unique technological implications are insufficiently understood. For example, extremism is in essence a strong deviation from something that is considered “normal” or “ordinary” (Winter et al. 2020). Online services that operate globally face region-specific understandings of humanist values and societal norms, which lead to differing understandings of what is locally considered extreme. When proposed countermeasures to online extremism, such as content moderation or account tracing and removal, lack region-specific awareness, they threaten to violate civil liberties such as freedom of speech and personal privacy (Monar 2007; Nouri et al. 2019). Against this background, the field of IS, with its sociotechnical perspective on the interaction between social elements (individual and group norms) and the technical artifact (e.g., encrypted services, global platforms), is in a favorable position to support tech companies and regulators by comprehensively considering the interactions between technological and social components. In this way, research can help to assess and alleviate growing concerns that the increasing ability to interact online may not only lead to undetected disinformation but also contribute to more polarized societies as individuals adopt more extreme views (Kitchens et al. 2020; Qureshi et al. 2020). In this context, IS research should address this comparatively open field by shedding light on the relationship between on- and offline radicalization, how online technologies (e.g., different social media platforms, content stores, blockchain technologies) attract and support online extremist activities, and what strategies online extremists pursue to counter regulatory measures (e.g., migrating to fringe platforms, adopting peer-to-peer encrypted technologies).

Digitization of the Individual

Annika Baumann, Irina Heimbach and Hanna Krasnova

The use of digital technologies for private purposes is steadily increasing. For example, the number of smartphone users reached 3.6 billion in 2020 and is projected to grow even further (Statista 2021a), and the average time spent on social media worldwide exceeds two hours per day (Statista 2021b). The market for fitness and activity trackers that allow users to monitor their health-related behaviors (e.g., daily steps, heart rate, sleep) is booming, with "end-user spending on wearable devices" worldwide expected to reach US$81.5 billion in 2021 (Gartner 2021). With social media, smartphones, smartwatches, and other digital technologies rapidly becoming an integral part of life for consumers across the world, a growing number of stakeholders voice the need to better understand the implications of this ongoing transformation. Against this backdrop, the paradigm of the "digitization of the individual" has become a central issue for IS research (Vodanovich et al. 2010; Vaghefi et al. 2017; Turel et al. 2020). At its core, it implies that digital technologies heavily influence user perceptions, cognitions, emotional reactions, and behavior (Vanden Abeele 2020) and can thereby contribute to individual and societal outcomes. However, scientific evidence on the direction and strength of these effects remains contradictory.

On the one hand, the growing use of digital technologies has been met with optimism. For example, the use of a mobile app together with a wearable device has been linked to weight loss (Kim et al. 2019). In the context of vulnerable groups, the growing use of smartphones has been shown to support communication, contribute to user safety, enable political and social participation (AbuJarour and Krasnova 2017), and lead to user empowerment (AbuJarour et al. 2021). Similarly, social media platforms were initially hailed for their potential to facilitate social interaction, promote feelings of social connectedness (Koroleva et al. 2011), and enhance social capital for millions of users worldwide (Ellison et al. 2007). On the other hand, the use of digital technologies has also brought considerable disillusionment, as the unintended negative effects of the growing digitization of individuals have exceeded expectations. A journalistic investigation revealed that sensitive data provided by users during app use (e.g., details on users' diet, exercise activities, and ovulation cycle) was shared and reused for commercial purposes (Schechner and Secada 2019). Furthermore, smartphone use has been associated with a multitude of adverse effects, ranging from worsened sleep (Demirci et al. 2015; Huang et al. 2020) and deteriorated relational cohesion (Krasnova et al. 2016) to poor academic performance (Lepp et al. 2014), anxiety, and depression (Demirci et al. 2015). In a similar vein, participation in social media has been shown to be addictive (Hou et al. 2019) and has been linked to exhaustion and fatigue (Bright et al. 2015), worsened mood, lower life satisfaction (Kross et al. 2013), symptoms of depression (Cunningham et al. 2021), and body dissatisfaction (Tiggemann and Zaccardo 2015). For comprehensive meta-analyses, we refer readers to Appel et al. (2020), Huang (2017), and Liu et al. (2019).

ICT-enabled changes in perception at the micro-level may also collectively contribute to the emergence and proliferation of issues affecting society at large. For example, the time spent on social media has been linked to lower perceptions of inequality, which may skew redistribution preferences and affect corresponding voting behavior (Baum et al. 2020). In a similar fashion, social media use has been shown to influence users' political views, giving rise to echo chambers and contributing to polarization (Barberá et al. 2015). Furthermore, hostile expressions common on social media platforms (Crockett 2017) can have an insidious effect on users, interfering with such socially relevant behaviors as free expression and participation in political processes and social life. Considering the far-reaching potential of these technologies to affect individuals and society at large, IS research has an opportunity to make a substantial contribution in the following directions:

First, the understanding of the "digitized individual" paradigm should be unified. For example, Turel et al. (2020) define a digitized individual as someone who uses at least one digital technology. In contrast, Kilger (1994) refers solely to virtual identity, while Clarke (1994) describes a "digital persona" as a model of an individual based upon the data collected and analyzed about this person. Better alignment of the terminology used in scientific discourse and across disciplines can promote more targeted exploration of this phenomenon.

Second, while the individual and, by extension, societal outcomes of digital use can be far-reaching, the mechanisms behind them are still poorly understood. For example, concerns about the way social media platforms and content creators influence and bias our perceptions of reality are becoming increasingly pressing. How, and in which specific ways, does the use of digital platforms and applications change our perception of ourselves, others, and the world around us? How do changes at the individual level translate into societal consequences? And what can be done to mitigate these detrimental developments?

Third, whereas past research has mainly focused on interpersonal differences when exploring the link between the use of digital technologies and individual outcomes, a new generation of studies advocates a stronger focus on longitudinal approaches that allow the exploration of within-person differences (Beyens et al. 2020; Kross et al. 2021; Valkenburg et al. 2021b). For example, in a recent study by Valkenburg et al. (2021a, p. 56), 88% of adolescents "experienced no or very small effects" of social media use (captured as an aggregate measure of self-reported time on WhatsApp, Instagram, and Snapchat) on self-esteem. At the same time, 4% of adolescents experienced positive effects, while 8% experienced negative effects. Therefore, a more in-depth investigation of within-person processes is needed. Furthermore, since a large share of studies on the individual outcomes of digital use is correlational, experimental approaches should be pursued with greater enthusiasm, as they allow causal inferences to be made about the relationships at play (e.g., Allcott et al. 2020; Brailovskaia et al. 2020; große Deters and Mehl 2013).
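To illustrate why such within-person heterogeneity matters methodologically, the following minimal simulation sketch (our own illustration, not taken from any of the cited studies) shows how a pooled, "on average" correlation close to zero can coexist with clearly positive and clearly negative person-specific effects. All parameter values, including the 88/4/8 split borrowed from Valkenburg et al. (2021a), are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    n_persons, n_waves = 1000, 100

    # Assumed person-specific effects of daily use on self-esteem: most people are
    # essentially unaffected, a small minority reacts positively or negatively
    # (the 88/4/8 split mirrors the shares reported by Valkenburg et al. 2021a).
    effect = rng.choice([0.0, 0.3, -0.3], size=n_persons, p=[0.88, 0.04, 0.08])

    use = rng.normal(0.0, 1.0, size=(n_persons, n_waves))    # person-centered daily use
    noise = rng.normal(0.0, 1.0, size=(n_persons, n_waves))
    self_esteem = effect[:, None] * use + noise

    # Pooled ("on average") association across all person-wave observations
    r_pooled = np.corrcoef(use.ravel(), self_esteem.ravel())[0, 1]

    # Person-specific associations, estimated separately for each individual
    r_person = np.array([np.corrcoef(use[i], self_esteem[i])[0, 1]
                         for i in range(n_persons)])

    print(f"Pooled correlation (masks heterogeneity): {r_pooled:+.3f}")
    print("Person-specific correlations, 10th/50th/90th percentile:",
          np.round(np.percentile(r_person, [10, 50, 90]), 2))

Under these hypothetical assumptions, the pooled estimate is close to zero even though roughly one in eight simulated adolescents shows a substantial person-specific association, which is precisely the pattern that between-person designs cannot reveal.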

Fourth, methodological issues regarding the measurement of media use have been raised. Specifically, a large share of previous studies relied on retrospective self-reports to measure participants' digital technology use (e.g., in the form of constructs measuring "use" or self-reported time spent). However, a recently published meta-analysis raises concerns about the validity and accuracy of this approach: self-reported and logged metrics are only moderately correlated, suggesting that users either under- or over-report their digital media use (Parry et al. 2021). Future research should capture objective measures of platform use whenever possible and strive for a better operationalization of the different aspects of digital media use (Faelens et al. 2021). Importantly, in light of this, findings based on self-reported measures should be treated with caution and verified for robustness against direct measures of actual behavior.
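The practical consequence of this measurement gap can be made concrete with a small, purely hypothetical simulation (not drawn from Parry et al. 2021): when self-reports correlate only moderately with logged use, an association that holds for actual (logged) behavior appears substantially attenuated when estimated from self-reports. All numbers below are assumptions chosen for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000

    logged_use = rng.normal(0.0, 1.0, n)                   # standardized logged hours per day
    outcome = 0.30 * logged_use + rng.normal(0.0, 1.0, n)  # outcome truly related to logged use

    # Self-reports as a noisy proxy: the error variance is chosen so that the
    # correlation with logged use is roughly 0.5, i.e., only "moderate".
    self_report = 0.8 * logged_use + rng.normal(0.0, 1.4, n)

    print(f"corr(self-report, logged use): {np.corrcoef(self_report, logged_use)[0, 1]:.2f}")
    print(f"corr(outcome, logged use):     {np.corrcoef(outcome, logged_use)[0, 1]:.2f}")
    print(f"corr(outcome, self-report):    {np.corrcoef(outcome, self_report)[0, 1]:.2f}  (attenuated)")

In this sketch, the true association of about 0.29 shrinks to roughly 0.14 once the noisy self-report is used in place of logged behavior, underscoring why self-report-based findings should be verified against objective usage data.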

Fifth, whereas fitness and activity trackers and other mobile apps hold significant potential to improve users’ health and well-being, their use may inherently conflict with such fundamental values as the individual right to privacy, self-determination, and autonomy. Indeed, the data traces users leave behind can also be misused as part of scoring systems, or to make predictions about users’ future performance at work or about future health outcomes. Hence, a more profound discussion of which values should be prioritized and how those tensions can be resolved might be necessary.

Finally, when it comes to exploring the detrimental outcomes of digital use, future research should focus on proposing and testing the effectiveness of corrective actions to mitigate the adverse effects of digital technology use on individuals (e.g., lower well-being, fatigue, technostress, overspending). At the time of writing, interventions involving digital detox already provide encouraging evidence on the reversibility of harmful influences (e.g., Allcott et al. 2020; Brailovskaia et al. 2020).

Acknowledgements

Financial support of Alexander Benlian and Oliver Hinz by the Zentrum für Verantwortungsbewusste Digitalisierung (ZEVEDI/Center for Responsible Digitalization) is gratefully acknowledged. Financial support of Alexander Benlian by the Deutsche Forschungsgemeinschaft (DFG/German Research Foundation; grant award numbers: BE 4308/5–1 and BE 4308/6–1) is gratefully acknowledged. Financial support of Henner Gimpel by the Bavarian State Ministry for Science and Arts in the research network ForDigitHealth is gratefully acknowledged. This work has also been partly funded by the Federal Ministry of Education and Research of Germany (BMBF) under grant nos. 16DII116 and 16DII127 ("Deutsches Internet-Institut"; Annika Baumann, Antonia Köster, and Hanna Krasnova). Financial support for Marten Risius from The University of Queensland School of Business through the Research Start-up Support Funding is gratefully acknowledged. Marten Risius is the recipient of an Australian Research Council Discovery Early Career Researcher Award (project number DE220101597) funded by the Australian Government.

Contributor Information

Sarah Spiekermann, Email: sspieker@wu.ac.at.

Oliver Hinz, Email: ohinz@wiwi.uni-frankfurt.de.

References

  1. Abramova O, Wagner A, Krasnova H, Buxmann P (2017) Understanding self-disclosure on social networking sites - a literature review. In: 22nd Americas conference on information systems. Boston, pp 1–10
  2. AbuJarour S, Krasnova H (2017) Understanding the role of ICTs in promoting social inclusion: the case of Syrian refugees in Germany. In: Proceedings of the 25th European conference on information systems. Guimarães, pp 1792–1806
  3. AbuJarour S, Köster A, Krasnova H, Wiesche M (2021) Technology as a source of power: exploring how ICT use contributes to the social inclusion of refugees in Germany. In: Proceedings of the 54th Hawaii international conference on system sciences. A virtual AIS conference, pp 2637–2646
  4. Acxiom (2018) Annual Report 2018. In: Annu. Rep. https://www.annualreports.com/HostedData/AnnualReports/PDF/NASDAQ_ACXM_2018.pdf. Accessed 19 Nov 2021
  5. Adam M, Werner D, Wendt C, Benlian A. Containing COVID-19 through physical distancing: the impact of real-time crowding information. Eur J Inf Syst. 2020;29:595–607. doi: 10.1080/0960085X.2020.1814681. [DOI] [Google Scholar]
  6. Agogo D. Invisible market for online personal data: an examination. Electron Mark. 2020 doi: 10.1007/s12525-020-00437-0. [DOI] [Google Scholar]
  7. Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect. 2017;31:211–236. doi: 10.1257/jep.31.2.211. [DOI] [Google Scholar]
  8. Allcott H, Braghieri L, Eichmeyer S, Gentzkow M. The welfare effects of social media. Am Econ Rev. 2020;110:629–676. doi: 10.1257/aer.20190658. [DOI] [Google Scholar]
  9. Aristotle. Nicomachean ethics. Cambridge: Cambridge University Press; 2000. [Google Scholar]
  10. Anderson M, Anderson SL. Machine ethics. New York: Cambridge University Press; 2011. [Google Scholar]
  11. Appel M, Marker C, Gnambs T. Are social media ruining our lives? A review of meta-analytic evidence. Rev Gen Psychol. 2020;24:60–74. doi: 10.1177/1089268019880891. [DOI] [Google Scholar]
  12. Apuke OD, Omar B. Fake news and COVID-19: modelling the predictors of fake news sharing among social media users. Telemat Inform. 2021;56:101475. doi: 10.1016/j.tele.2020.101475. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Awad E, Dsouza S, Kim R, et al. The Moral Machine experiment. Nature. 2018;563:59–64. doi: 10.1038/s41586-018-0637-6. [DOI] [PubMed] [Google Scholar]
  14. Bao C, Bardhan IR, Singh H, et al. Patient-provider engagement and its impact on health outcomes: a longitudinal study of patient portal use. MIS Q. 2020;44:699–723. doi: 10.25300/MISQ/2020/14180. [DOI] [Google Scholar]
  15. Barberá P, Jost JT, Nagler J, et al. Tweeting from left to right: is online political communication more than an echo chamber? Psychol Sci. 2015;26:1531–1542. doi: 10.1177/0956797615594620. [DOI] [PubMed] [Google Scholar]
  16. Baum K, Köster A, Krasnova H, Tarafdar M (2020) Living in a world of plenty? How social network sites use distorts perceptions of wealth inequality. In: Proceedings of the 28th European Conference on Information Systems. A virtual AIS conference, pp 1–16
  17. Benbasat I, Wang W. Trust in and adoption of online recommendation agents. J Assoc Inf Syst. 2005;6:72–101. [Google Scholar]
  18. Benlian A. A daily field investigation of technology-driven stress spillovers from work to home. MIS Q. 2020;44:1259–1300. doi: 10.25300/MISQ/2020/14911. [DOI] [Google Scholar]
  19. Beyens I, Pouwels J, van Driel II, et al. Social media use and adolescents’ well-being: developing a typology of person-specific effect patterns. Commun Res. 2020 doi: 10.1177/00936502211038196. [DOI] [Google Scholar]
  20. Bimpikis K, Crapis D, Tahbaz-Salehi A. Information sale and competition. Manag Sci. 2019;65:2646–2664. doi: 10.1287/mnsc.2018.3068. [DOI] [Google Scholar]
  21. Bolier L, Haverman M, Westerhof GJ, et al. Positive psychology interventions: a meta-analysis of randomized controlled studies. BMC Public Health. 2013;13:1–20. doi: 10.1186/1471-2458-13-119. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Brailovskaia J, Ströse F, Schillack H, Margraf J. Less Facebook use – more well-being and a healthier lifestyle? An experimental intervention study. Comput Hum Behav. 2020;108:106332. doi: 10.1016/j.chb.2020.106332. [DOI] [Google Scholar]
  23. Brandtzaeg PB, Følstad A, Chaparro Domínguez MÁ. How journalists and social media users perceive online fact-checking and verification services. J Pract. 2018;12:1109–1129. doi: 10.1080/17512786.2017.1363657. [DOI] [Google Scholar]
  24. Brennen JS, Nielsen RK (2020) COVID–19 has intensified concerns about misinformation. Here’s what our past research says about these issues. In: Reuters Inst. https://reutersinstitute.politics.ox.ac.uk/risj-review/covid-19-has-intensified-concerns-about-misinformation-heres-what-our-past-research. Accessed 19 Nov 2021
  25. Bright LF, Kleiser SB, Grau SL. Too much Facebook? An exploratory examination of social media fatigue. Comput Hum Behav. 2015;44:148–155. doi: 10.1016/j.chb.2014.11.048. [DOI] [Google Scholar]
  26. Bringula RP, Catacutan AE, Garcia MB, et al. “Who is gullible to political disinformation?” Predicting susceptibility of university students to fake news. J Inf Technol Polit. 2021 doi: 10.1080/19331681.2021.1945988. [DOI] [Google Scholar]
  27. Bryanov K, Vziatysheva V. Determinants of individuals’ belief in fake news: a scoping review determinants of belief in fake news. PLoS ONE. 2021;16:e0253717. doi: 10.1371/journal.pone.0253717. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Burrell J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016;3:1–12. doi: 10.1177/2053951715622512. [DOI] [Google Scholar]
  29. Burton-Jones A, Volkoff O. How can we develop contextualized theories of effective use? A demonstration in the context of community-care electronic health records. Inf Syst Res. 2017;28:468–489. doi: 10.1287/isre.2017.0702. [DOI] [Google Scholar]
  30. Califf CB, Sarker S, Sarker S. The bright and dark sides of technostress: a mixed-methods study involving healthcare IT. MIS Q. 2020;44:809–856. doi: 10.25300/MISQ/2020/14818. [DOI] [Google Scholar]
  31. Callahan D. The WHO definition of ‘health’. Hastings Cent Stud. 1973;1:77–87. doi: 10.2307/3527467. [DOI] [PubMed] [Google Scholar]
  32. Calvillo DP, Garcia RJB, Bertrand K, Mayers TA. Personality factors and self-reported political news consumption predict susceptibility to political fake news. Personal Individ Differ. 2021;174:110666. doi: 10.1016/j.paid.2021.110666. [DOI] [Google Scholar]
  33. Calvo RA, Peters D. Positive computing: technology for well-being and human potential. Cambridge: MIT Press; 2014. [Google Scholar]
  34. Carlson A. The need for transparency in the age of predictive sentencing algorithms. Iowa Law Rev. 2017;103:303–329. [Google Scholar]
  35. Chen A, Karahanna E. Life interrupted: the effects of technology-mediated work interruptions on work and nonwork outcomes. MIS Q. 2018;42:1023–1042. doi: 10.25300/MISQ/2018/13631. [DOI] [Google Scholar]
  36. Chen L, Baird A, Straub DW. An analysis of the evolving intellectual structure of health information systems research in the information systems discipline. J Assoc Inf Syst. 2019;20:1023–1074. [Google Scholar]
  37. Cheng J-Z, Ni D, Chou Y-H, et al. Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans. Sci Rep. 2016;6:24454. doi: 10.1038/srep24454. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Choi JK, Ji YG. Investigating the importance of trust on adopting an autonomous vehicle. Int J Hum-Comput Interact. 2015;31:692–702. doi: 10.1080/10447318.2015.1070549. [DOI] [Google Scholar]
  39. Clarke R. The digital persona and its application to data surveillance. Inf Soc. 1994;10:77–92. doi: 10.1080/01972243.1994.9960160. [DOI] [Google Scholar]
  40. Clarke J, Chen H, Du D, Hu YJ. Fake news, investor attention, and market reaction. Inf Syst Res. 2020;32(1):35–52. doi: 10.1287/isre.2019.0910. [DOI] [Google Scholar]
  41. Conway M. Determining the role of the internet in violent extremism and terrorism: six suggestions for progressing research. Stud Confl Terror. 2017;40:77–98. doi: 10.1080/1057610X.2016.1157408. [DOI] [Google Scholar]
  42. Conway M, Macdonald S. Introduction to the special issue: Islamic state’s online activity and responses, 2014–2017. Stud Confl Terror. 2019;42:1–4. doi: 10.1080/1057610X.2018.1513684. [DOI] [Google Scholar]
  43. Cowgill B (2018) The impact of algorithms on judicial discretion: evidence from regression discontinuities. Working Paper
  44. Crockett MJ. Moral outrage in the digital age. Nat Hum Behav. 2017;1:769–771. doi: 10.1038/s41562-017-0213-3. [DOI] [PubMed] [Google Scholar]
  45. Cunningham S, Hudson CC, Harkness K. Social media and depression symptoms: a meta-analysis. Res Child Adolesc Psychopathol. 2021;49:241–253. doi: 10.1007/s10802-020-00715-7. [DOI] [PubMed] [Google Scholar]
  46. Curchod C, Patriotta G, Cohen L, Neysen N. Working for an algorithm: power asymmetries and agency in online work settings. Adm Sci Q. 2020;65:644–676. doi: 10.1177/0001839219867024. [DOI] [Google Scholar]
  47. Davenport T, Beck J. The attention economy: understanding the new currency of business. Boston: Harvard Business Review Press; 2001. [Google Scholar]
  48. Demirci K, Akgönül M, Akpinar A. Relationship of smartphone use severity with sleep quality, depression, and anxiety in university students. J Behav Addict. 2015;4:85–92. doi: 10.1556/2006.4.2015.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Deng X, Joshi KD, Galliers RD. The duality of empowerment and marginalization in microtask crowdsourcing: giving voice to the less powerful through value sensitive design. MIS Q. 2016;40:279–302. doi: 10.25300/MISQ/2016/40.2.01. [DOI] [Google Scholar]
  50. Diener E. Subjective well-being. Psychol Bull. 1984;95:542–575. doi: 10.1037/0033-2909.95.3.542. [DOI] [PubMed] [Google Scholar]
  51. Diener E, Suh EM, Lucas RE, Smith HL. Subjective well-being: three decades of progress. Psychol Bull. 1999;125:276–302. doi: 10.1037/0033-2909.125.2.276. [DOI] [Google Scholar]
  52. Dietvorst BJ, Simmons JP, Massey C. Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen. 2015;144:114–126. doi: 10.1037/xge0000033. [DOI] [PubMed] [Google Scholar]
  53. Dinev T, McConnell AR, Smith HJ. Research commentary – informing privacy research through information systems, psychology, and behavioral economics: thinking outside the “APCO” box. Inf Syst Res. 2015;26:639–655. doi: 10.1287/isre.2015.0600. [DOI] [Google Scholar]
  54. Ellison NB, Steinfield C, Lampe C. The benefits of Facebook “friends:” social capital and college students’ use of online social network sites. J Comput-Mediat Commun. 2007;12:1143–1168. doi: 10.1111/j.1083-6101.2007.00367.x. [DOI] [Google Scholar]
  55. Ensign D, Friedler S, Neville S, et al (2017) Runaway feedback loops in predictive policing. ArXiv Prepr. ArXiv170609847
  56. Eubanks V. Automating inequality: how high-tech tools profile, police, and punish the poor. New York: St. Martin’s; 2018. [Google Scholar]
  57. Facebook (2021) Facebook ad center, detailed targeting options. In: Facebook. https://www.facebook.com/ad_center/create/pagead/?entry_point=fb4b_create_ad_cta&page_id=864331173712397. Accessed 21 Aug 2021
  58. Faelens L, Hoorelbeke K, Soenens B, et al. Social media use and well-being: a prospective experience-sampling study. Comput Hum Behav. 2021;114:106510. doi: 10.1016/j.chb.2020.106510. [DOI] [Google Scholar]
  59. Felmingham CM, Adler NR, Ge Z, et al. The importance of incorporating human factors in the design and implementation of artificial intelligence for skin cancer diagnosis in the real world. Am J Clin Dermatol. 2021;22:233–242. doi: 10.1007/s40257-020-00574-4. [DOI] [PubMed] [Google Scholar]
  60. Felzmann H, Villaronga E, Lutz C, Tamò-Larrieux A. Transparency you can trust: transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc. 2019;6:1–14. doi: 10.1177/2053951719860542. [DOI] [Google Scholar]
  61. Fisher A, Prucha N, Winterbotham E. Mapping the Jihadist information ecosystem: towards the next generation of disruption capability. London: Royal United Services Institute for Defence and Security Studies; 2019. [Google Scholar]
  62. Frankena WK. Ethics. 2. Englewood Cliffs: Prentice Hall; 1973. [Google Scholar]
  63. Frauenberger C, Good J, Fitzpatrick G, Iversen OS. In pursuit of rigour and accountability in participatory design. Int J Hum-Comput Stud. 2015;74:93–106. doi: 10.1016/j.ijhcs.2014.09.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Freude H, Heger O, Niehaves B (2019) Unveiling emotions: attitudes towards affective technology. In: Proceedings of the 40th International conference on information systems. Munich, pp 1–18
  65. Friedman B, Kahn P. Human values, ethics, and design. In: Jacko J, Sears A, editors. The Human-computer interaction handbook. Mahwah: Lawrence Erlbaum; 2003. [Google Scholar]
  66. Fruhwirth M, Rachinger M, Prlja E (2020) Discovering business models of data marketplaces. In: Proceedings of the 53rd Hawaii international conference on system sciences. Hawaii, pp 5736–5747
  67. Gimpel H, Schröder J, editors. Hospital 4.0: Schlanke, digital-unterstützte Logistikprozesse in Krankenhäusern. Wiesbaden: Springer; 2021. [Google Scholar]
  68. Gimpel H, Lanzl J, Regal C, et al. Gesund digital arbeiten?! Eine Studie zu digitalem Stress in Deutschland. Augsburg: Projektgruppe Wirtschaftsinformatik des Fraunhofer FIT; 2019. [Google Scholar]
  69. Gimpel H, Manner-Romberg T, Schmied F, Winkler TJ. Understanding the evaluation of mHealth app features based on a cross-country Kano analysis. Electron Mark Online Ahead Print. 2021 doi: 10.1007/s12525-020-00455-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Gimpel H, Schmied F (2019) Risks and side effects of digitalization: a multi-level taxonomy of the adverse effects of using digital technologies and media. In: Proceedings of the 27th European conference on information systems. Stockholm, pp 1–15
  71. Gkatzelis V, Aperjis C, Huberman BA. Pricing private data. Electron Mark. 2015;25:109–123. doi: 10.1007/s12525-015-0188-8. [DOI] [Google Scholar]
  72. Glikson E, Woolley AW. Human trust in artificial intelligence: review of empirical research. Acad Manag Ann. 2020;14:627–660. doi: 10.5465/annals.2018.0057. [DOI] [Google Scholar]
  73. große Deters F, Mehl MR. Does posting Facebook status updates increase or decrease loneliness? An online social networking experiment. Soc Psychol Personal Sci. 2013;4:579–586. doi: 10.1177/1948550612469233. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. HLEG of the EU Commission (2020) Assessment list for trustworthy AI (ALTAI). Brussels
  75. Hosseini M, Shahri A, Phalp K, Ali R. Four reference models for transparency requirements in information systems. Requir Eng. 2018;23:251–275. doi: 10.1007/s00766-017-0265-y. [DOI] [Google Scholar]
  76. Hou Y, Xiong D, Jiang T, et al. Social media addiction: its impact, mediation, and intervention. Cyberpsychol J Psychosoc Res Cyberspace. 2019;13(1):4. doi: 10.5817/CP2019-1-4. [DOI] [Google Scholar]
  77. Huang C. Time spent on social network sites and psychological well-being: a meta-analysis. Cyberpsychol Behav Soc Netw. 2017;20:346–354. doi: 10.1089/cyber.2016.0758. [DOI] [PubMed] [Google Scholar]
  78. Huang Q, Li Y, Huang S, et al. Smartphone use and sleep quality in Chinese college students: a preliminary study. Front Psychiatry. 2020;11:352. doi: 10.3389/fpsyt.2020.00352. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. IEEE (2021) IEEE 7000 - Model process for addressing ethical concerns during system design. IEEE Computer Society, Piscataway. https://engagestandards.ieee.org/ieee-7000-2021-for-systems-design-ethical-concerns.html. Accessed 19 Nov 2021
  80. Jakobi T, von Grafenstein M, Legner C, et al. The role of IS in the conflicting interests regarding GDPR. Bus Inf Syst Eng. 2020;62:261–272. doi: 10.1007/s12599-020-00633-4. [DOI] [Google Scholar]
  81. Jobin A, Ienca M, Vayena E. The global landscape for AI ethics guidelines. Nat Mach Intell. 2019;1:389–399. doi: 10.1038/s42256-019-0088-2. [DOI] [Google Scholar]
  82. Jussupow E, Benbasat I, Heinzl A (2020) Why are we averse towards Algorithms? A comprehensive literature review on algorithm aversion. In: Proceedings of the 28th European Conference on Information Systems. A virtual AIS conference, pp 1–16
  83. Kahneman D, Diener E, Schwarz N, editors. Well-being: foundations of hedonic psychology. Russell Sage; 1999. [Google Scholar]
  84. Kanno-Youngs Z, Sanger DE (2021) Extremists emboldened by Capitol attack pose rising threat, Homeland Security says. N. Y. Times. https://www.nytimes.com/2021/01/27/us/politics/homeland-security-threat.html. Accessed 19 Nov 2021
  85. Karwatzki S, Trenz M, Tuunainen VK, Veit D. Adverse consequences of access to individuals’ information: an analysis of perceptions and the scope of organisational influence. Eur J Inf Syst. 2017;26:688–715. doi: 10.1057/s41303-017-0064-z. [DOI] [Google Scholar]
  86. Kastl J, Pagnozzi M, Piccolo S. Selling information to competitive firms. RAND J Econ. 2018;49:254–282. doi: 10.1111/1756-2171.12226. [DOI] [Google Scholar]
  87. Kellogg KC, Valentine MA, Christin A. Algorithms at work: the new contested terrain of control. Acad Manag Ann. 2020;14:366–410. doi: 10.5465/annals.2018.0174. [DOI] [Google Scholar]
  88. Keyes CLM. Social well-being. Soc Psychol Q. 1998;61:121–140. doi: 10.2307/2787065. [DOI] [Google Scholar]
  89. Kilger M. The digital individual. Inf Soc. 1994;10:93–99. doi: 10.1080/01972243.1994.9960161. [DOI] [Google Scholar]
  90. Kim A, Dennis AR. Says who? The effects of presentation format and source rating on fake news in social media. MIS Q. 2019;43:1025–1039. doi: 10.25300/MISQ/2019/15188. [DOI] [Google Scholar]
  91. Kim JW, Ryu B, Cho S, et al. Impact of personal health records and wearables on health outcomes and patient response: three-arm randomized controlled trial. JMIR MHealth UHealth. 2019;7:e12070. doi: 10.2196/12070. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Kim TW, Routledge BR (2018) Informational privacy, a right to explanation, and interpretable AI. In: 2018 IEEE symposium on privacy-aware computing. pp 64–74
  93. Kirchhof G, Lindner JF, Achenbach S, et al. Stratified prevention: opportunities and limitations. Clin Res Cardiol. 2018;107:193–200. doi: 10.1007/s00392-017-1186-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Kitchens B, Johnson SL, Gray P. Understanding echo chambers and filter bubbles: the impact of social media on diversification and partisan shifts in news consumption. MIS Q. 2020;44:1–32. doi: 10.25300/MISQ/2020/16371. [DOI] [Google Scholar]
  95. Kizilcec RF (2016) How much information? Effects of transparency on trust in an algorithmic interface. In: Proceedings of the 2016 CHI conference on human factors in computing systems. pp 2390–2395
  96. Klein AZ, Magge A, O’Connor K, et al. Toward using Twitter for tracking COVID-19: a natural language processing pipeline and exploratory data set. J Med Internet Res. 2021;23:e25314. doi: 10.2196/25314. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Kleinberg J, Lakkaraju H, Leskovec J, et al. Human decisions and machine predictions. Q J Econ. 2017;133:237–293. doi: 10.1093/qje/qjx032. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Koroleva K, Krasnova H, Veltri NF, Günther O (2011) It’s all about networking! Empirical investigation of social capital formation on social network sites. In: International conference on information systems. Shanghai, pp 1–20
  99. Krämer J, Schnurr D, Wohlfarth M. Winners, losers, and Facebook: the role of social logins in the online advertising ecosystem. Manag Sci. 2019;65:1678–1699. doi: 10.1287/mnsc.2017.3012. [DOI] [Google Scholar]
  100. Krasnova H, Abramova O, Baumann A, Notter I (2016) Why phubbing is toxic for your relationship: understanding the role of smartphone jealousy among “Generation Y” users. In: European conference on information systems. İstanbul, pp 1–20
  101. Kross E, Verduyn P, Demiralp E, et al. Facebook use predicts declines in subjective well-being in young adults. PLoS ONE. 2013;8:e69841. doi: 10.1371/journal.pone.0069841. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Kross E, Verduyn P, Sheppes G, et al. Social media and well-being: pitfalls, progress, and next steps. Trends Cogn Sci. 2021;25:55–66. doi: 10.1016/j.tics.2020.10.005. [DOI] [PubMed] [Google Scholar]
  103. Laato S, Islam AN, Islam MN, Whelan E. What drives unverified information sharing and cyberchondria during the COVID-19 pandemic? Eur J Inf Syst. 2020;29:288–305. doi: 10.1080/0960085X.2020.1770632. [DOI] [Google Scholar]
  104. Lambrecht A, Tucker C. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of stem career ads. Manag Sci. 2019;65:2966–2981. doi: 10.1287/mnsc.2018.3093. [DOI] [Google Scholar]
  105. Lazer D. The rise of the social algorithm. Science. 2015;348:1090–1091. doi: 10.1126/science.aab1422. [DOI] [PubMed] [Google Scholar]
  106. Lazer DM, Baum MA, Benkler Y, et al. The science of fake news. Science. 2018;359:1094–1096. doi: 10.1126/science.aao2998. [DOI] [PubMed] [Google Scholar]
  107. Lee MK. Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 2018;5:1–16. doi: 10.1177/2053951718756684. [DOI] [Google Scholar]
  108. Lepp A, Barkley JE, Karpinski AC. The relationship between cell phone use, academic performance, anxiety, and satisfaction with life in college students. Comput Hum Behav. 2014;31:343–350. doi: 10.1016/j.chb.2013.10.049. [DOI] [Google Scholar]
  109. Levitin A. Introduction to the design & analysis of algorithms. Addison-Wesley; 2003. [Google Scholar]
  110. Liberini F, Russo A, Cuevas Á, Cuevas R. Politics in the Facebook era - evidence from the 2016 US presidential elections. Munich: Center for Economic Studies and ifo Institute; 2020. [Google Scholar]
  111. Liu D, Baumeister RF, Yang C. Digital communication media use and psychological well-being: a meta-analysis. J Comput-Mediat Commun. 2019;24:259–274. doi: 10.1093/jcmc/zmz013. [DOI] [Google Scholar]
  112. Liv N, Greenbaum D. Deep fakes and memory malleability: false memories in the service of fake news. AJOB Neurosci. 2020;11:96–104. doi: 10.1080/21507740.2020.1740351. [DOI] [PubMed] [Google Scholar]
  113. Logg JM, Minson JA, Moore DA. Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process. 2019;151:90–103. doi: 10.1016/j.obhdp.2018.12.005. [DOI] [Google Scholar]
  114. Majchrzak A, Markus ML (2012) Technology affordances and constraints in management information systems (MIS). In: Kessler E (ed). Encyclopedia of management theory. Sage, Forthcoming, USA
  115. Mann G, O’Neil C (2016) Hiring algorithms are not neutral, https://hbr.org/2016/12/hiring-algorithms-are-not-neutral. Accessed 21 Aug 2021
  116. Martel C, Pennycook G, Rand DG. Reliance on emotion promotes belief in fake news. Cogn Res Princ Implic. 2020;5:47. doi: 10.1186/s41235-020-00252-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  117. Martin K. Ethical implications and accountability of algorithms. J Bus Ethics. 2019;160:835–850. doi: 10.1007/s10551-018-3921-3. [DOI] [Google Scholar]
  118. Matook S, Cummings J, Bala H. Are you feeling lonely? The impact of relationship characteristics and online social network features on loneliness. J Manag Inf Syst. 2015;31:278–310. doi: 10.1080/07421222.2014.1001282. [DOI] [Google Scholar]
  119. McAfee A, Brynjolfsson E, Davenport TH, et al. Big data: the management revolution. Harv Bus Rev. 2012;90:60–68. [PubMed] [Google Scholar]
  120. McKnight DH, Cummings LL, Chervany NL. Initial trust formation in new organizational relationships. Acad Manage Rev. 1998;23:473–490. doi: 10.2307/259290. [DOI] [Google Scholar]
  121. McKnight DH, Carter M, Thatcher J, Clay P. Trust in a specific technology. ACM Trans Manag Inf Syst TMIS. 2011;2:1–25. doi: 10.1145/1985347.1985353. [DOI] [Google Scholar]
  122. Mei X, Lee H, Diao K. Artificial intelligence-enabled rapid diagnosis of patients with COVID-19. Nat Med. 2020;26:1224–1228. doi: 10.1038/s41591-020-0931-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  123. Melendez S, Pasternack A (2019) Here are the data brokers quietly buying and selling your personal information. In: Fast Co. https://www.fastcompany.com/90310803/here-are-the-data-brokers-quietly-buying-and-selling-your-personal-information. Accessed 20 Jan 2020
  124. Mittelstadt B. Principles alone cannot guarantee ethical AI. Nat Mach Intell. 2019;1:501–507. doi: 10.1038/s42256-019-0114-4. [DOI] [Google Scholar]
  125. Möhlmann M, Zalmanson L, Henfridsson O, Gregory RW (2021) Algorithmic management of work on online labor platforms: when matching meets control. MIS Q, Forthcoming
  126. Monar J. Common threat and common response? The EU’s counter-terrorism strategy and its problems. Gov Oppos. 2007;42:292–313. doi: 10.1111/j.1477-7053.2007.00225.x. [DOI] [Google Scholar]
  127. Montgomery KC. Youth and surveillance in the Facebook era: policy interventions and social implications. Telecommun Policy. 2015;39:771–786. doi: 10.1016/j.telpol.2014.12.006. [DOI] [Google Scholar]
  128. Newman J (2020) This AI fact-checking startup is doing what Facebook and Twitter won’t. In: Fast Co. https://www.fastcompany.com/90535520/this-ai-fact-checking-startup-is-doing-what-facebook-and-twitter-wont. Accessed 27 Aug 2021
  129. Nordenfelt L. Quality of life, health and happiness. Aldershot: Averbury; 1993. [Google Scholar]
  130. Nouri L, Lorenzo-Dus N, Watkin A-L. Following the whack-a-mole: Britain First’s visual strategy from Facebook to Gab. London: Royal United Services Institute for Defence and Security Studies; 2019. [Google Scholar]
  131. Nuraniyah N. The evolution of online violent extremism in Indonesia and the Philippines. London: Royal United Services Institute for Defence and Security Studies; 2019. [Google Scholar]
  132. O’Neil C. Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown; 2016. [Google Scholar]
  133. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447–453. doi: 10.1126/science.aax2342. [DOI] [PubMed] [Google Scholar]
  134. Parra-Arnau J. Optimized, direct sale of privacy in personal data marketplaces. Inf Sci. 2018;424:354–384. doi: 10.1016/j.ins.2017.10.009. [DOI] [Google Scholar]
  135. Parry DA, Davidson BI, Sewall CJ, et al. A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nat Hum Behav. 2021 doi: 10.1038/s41562-021-01117-5. [DOI] [PubMed] [Google Scholar]
  136. Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, London
  137. Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci. 2020;66:4944–4957. doi: 10.1287/mnsc.2019.3478. [DOI] [Google Scholar]
  138. Pirkkalainen H, Salo M (2016) Two decades of the dark side in the information systems basket: suggesting five areas for future research. In: European conference on information systems. Istanbul, pp 1–16
  139. Polonski V (2018) AI is convicting criminals and determining jail time, but is it fair? https://www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair. Accessed 21 Aug 2021
  140. Prahl A, Van Swol L. Understanding algorithm aversion: when is advice from automation discounted? J Forecast. 2017;36:691–702. doi: 10.1002/for.2464. [DOI] [Google Scholar]
  141. Qureshi I, Bhatt B, Gupta S, Tiwari AA (2020) Call for papers: Causes, symptoms and consequences of social media induced polarization (SMIP). Inf Syst J 1–11
  142. Rahman HA, Valentine MA. How managers maintain control through collaborative repair: evidence from platform-mediated “Gigs”. Organ Sci. 2021;32:1149–1390. doi: 10.1287/orsc.2021.1428. [DOI] [Google Scholar]
  143. Rahwan I, Cebrian M, Obradovich N, et al. Machine behaviour. Nature. 2019;568:477–486. doi: 10.1038/s41586-019-1138-y. [DOI] [PubMed] [Google Scholar]
  144. Rai A. Explainable AI: from black box to glass box. J Acad Mark Sci. 2020;48:137–141. doi: 10.1007/s11747-019-00710-5. [DOI] [Google Scholar]
  145. Rimol M. Gartner forecasts global spending on wearable devices to total $81.5 billion in 2021. Stamford: Gartner; 2021. [Google Scholar]
  146. Rissler R, Nadj M, Li MX, et al. To be or not to be in flow at work: physiological classification of flow using machine learning. IEEE Trans Affect Comput. 2020 doi: 10.1109/TAFFC.2020.3045269. [DOI] [Google Scholar]
  147. Rolnick D, Donti PL, Kaack LH, et al (2019) Tackling climate change with machine learning. ArXiv190605433 Cs Stat
  148. Roozenbeek J, Schneider CR, Dryhurst S, et al. Susceptibility to misinformation about COVID-19 around the world. R Soc Open Sci. 2020;7:201199. doi: 10.1098/rsos.201199. [DOI] [PMC free article] [PubMed] [Google Scholar]
  149. Rosenblatt A. Uberland: how algorithms are rewriting the rules of work. Oakland: University of California Press; 2018. [Google Scholar]
  150. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1:206–215. doi: 10.1038/s42256-019-0048-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  151. Ryan RM, Deci EL. On happiness and human potentials: a review of research on hedonic and eudaimonic well-being. Annu Rev Psychol. 2001;52:141–166. doi: 10.1146/annurev.psych.52.1.141. [DOI] [PubMed] [Google Scholar]
  152. Ryan RM, Huta V, Deci EL. Living well: a self-determination theory perspective on eudaimonia. J Happiness Stud. 2008;9:139–170. doi: 10.1007/s10902-006-9023-4. [DOI] [Google Scholar]
  153. Ryff CD, Keyes CLM. The structure of psychological well-being revisited. J Pers Soc Psychol. 1995;69:719–727. doi: 10.1037/0022-3514.69.4.719. [DOI] [PubMed] [Google Scholar]
  154. Sarker S, Chatterjee S, Xiao X, Elbanna A. The sociotechnical axis of cohesion for the is discipline: its historical legacy and its continued relevance. MIS Q. 2019;43:695–720. doi: 10.25300/MISQ/2019/13747. [DOI] [Google Scholar]
  155. Saunders C, Benlian A, Henfridsson O, Wiener M (2020) IS control and governance. MIS Q Res Curations 1–14
  156. Schaefer KE, Chen JY, Szalma JL, Hancock PA. A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum Factors. 2016;58:377–400. doi: 10.1177/0018720816634228. [DOI] [PubMed] [Google Scholar]
  157. Schechner S, Secada M (2019) You give apps sensitive personal information. Then they tell Facebook. In: Wall Str. J. https://www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-facebook-11550851636?mod=e2tw. Accessed 19 Nov 2021
  158. Schneier B. Data and Goliath: the hidden battles to collect your data and control your world. reprint. New York: Norton; 2015. [Google Scholar]
  159. Schomakers E-M, Lidynia C, Ziefle M. All of me? Users’ preferences for privacy-preserving data markets and the importance of anonymity. Electron Mark. 2020;30:649–665. doi: 10.1007/s12525-020-00404-9. [DOI] [Google Scholar]
  160. Schor JB, Attwood-Charles W, Cansoy M, et al. Dependence and precarity in the platform economy. Theory Soc. 2020;49:833–861. doi: 10.1007/s11186-020-09408-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  161. Sharma K, Qian F, Jiang H, et al. Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol. 2019;10:1–41. doi: 10.1145/3305260. [DOI] [Google Scholar]
  162. Shu K, Bhattacharjee A, Alatawi F, et al. Combating disinformation in a social media age. Wiley Interdiscip Rev Data Min Knowl Discov. 2020;10:1–39. doi: 10.1002/widm.1385. [DOI] [Google Scholar]
  163. Sindermann C, Cooper A, Montag C. A short review on susceptibility to falling for fake political news. Curr Opin Psychol. 2020;36:44–48. doi: 10.1016/j.copsyc.2020.03.014. [DOI] [PubMed] [Google Scholar]
  164. Spiekermann S. Value-based Engineering: Prinzipien und Motivation für bessere IT Systeme. Inform Spektrum. 2021;44:247–256. doi: 10.1007/s00287-021-01378-4. [DOI] [Google Scholar]
  165. Spiekermann S, Korunovska J. Towards a value theory for personal data. J Inf Technol. 2017;32:62–84. doi: 10.1057/jit.2016.4. [DOI] [Google Scholar]
  166. Spiekermann S, Acquisti A, Böhme R, Hui K-L. The challenges of personal data markets and privacy. Electron Mark. 2015;25:161–167. doi: 10.1007/s12525-015-0191-0. [DOI] [Google Scholar]
  167. Spiekermann S, Böhme R, Acquisti A, Hui K-L. Personal data markets. Electron Mark. 2015;25:91–93. doi: 10.1007/s12525-015-0190-1. [DOI] [Google Scholar]
  168. Spiekermann-Hoff S, Krasnova H, Hinz O (2021) 05/2023 – Technology for humanity. In: Bus Inf Syst Eng. https://www.bise-journal.com/?p=1940
  169. Srivastava SC, Chandra S, Shirish A. Technostress creators and job outcomes: theorising the moderating influence of personality traits. Inf Syst J. 2015;25:355–401. doi: 10.1111/isj.12067. [DOI] [Google Scholar]
  170. Statista (2021a) Number of smartphone users worldwide from 2016 to 2023. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/. Accessed 19 Nov 2021
  171. Statista (2021b) Daily time spent on social networking by internet users worldwide from 2012 to 2020. https://www.statista.com/statistics/433871/daily-social-media-usage-worldwide/. Accessed 19 Nov 2021
  172. Sweeney L. Discrimination in online ad delivery. Queue. 2013;11:10–29. doi: 10.1145/2460276.2460278. [DOI] [Google Scholar]
  173. Tiggemann M, Zaccardo M. "Exercise to be fit, not skinny": the effect of fitspiration imagery on women’s body image. Body Image. 2015;15:61–67. doi: 10.1016/j.bodyim.2015.06.003. [DOI] [PubMed] [Google Scholar]
  174. Tolmeijer S, Kneer M, Sarasua C, et al. Implementations in machine ethics: a survey. ACM Comput Surv. 2020;53:6. doi: 10.1145/3419633. [DOI] [Google Scholar]
  175. Trang S, Trenz M, Weiger WH, et al. One app to trace them all? Examining app specifications for mass acceptance of contact-tracing apps. Eur J Inf Syst. 2020;29:415–428. doi: 10.1080/0960085X.2020.1784046. [DOI] [Google Scholar]
  176. Turel O, Matt C, Trenz M, Cheung CMK. An intertwined perspective on technology and digitised individuals: linkages, needs and outcomes. Inf Syst J. 2020;30:929–939. doi: 10.1111/isj.12304. [DOI] [Google Scholar]
  177. Vaghefi I, Lapointe L, Boudreau-Pinsonneault C. A typology of user liability to IT addiction. Inf Syst J. 2017;27:125–169. doi: 10.1111/isj.12098. [DOI] [Google Scholar]
  178. Valkenburg PM, Beyens I, van Driel II, et al. Social media use and adolescents’ self-esteem: heading for a person-specific media effects paradigm. J Commun. 2021;71:56–78. doi: 10.1093/joc/jqaa039. [DOI] [Google Scholar]
  179. Valkenburg PM, van Driel II, Beyens I. The associations of active and passive social media use with well-being: a critical scoping review. PsyArXiv Prepr. 2021 doi: 10.31234/osf.io/j6xqz. [DOI] [Google Scholar]
  180. Vallas S, Schor JB. What do platforms do? Understanding the gig economy. Annu Rev Sociol. 2020;46:273–294. doi: 10.1146/annurev-soc-121919-054857. [DOI] [Google Scholar]
  181. Vallor S. Technology and the virtues – a philosophical guide to a future worth wanting. New York: Oxford University Press; 2016. [Google Scholar]
  182. van den Broek T, van Veenstra AF. Governance of big data collaborations: how to balance regulatory compliance and disruptive innovation. Technol Forecast Soc Change. 2018;129:330–338. doi: 10.1016/j.techfore.2017.09.040. [DOI] [Google Scholar]
  183. van der Aalst W, Hinz O, Weinhardt C. Big digital platforms. Bus Inf Syst Eng. 2019;61:645–648. doi: 10.1007/s12599-019-00618-y. [DOI] [Google Scholar]
  184. van Doorn N. Platform labor: on the gendered and racialized exploitation of low-income service work in the ‘on-demand’ economy. Inf Commun Soc. 2017;20:898–914. doi: 10.1080/1369118X.2017.1294194. [DOI] [Google Scholar]
  185. Vanden Abeele MMP. Digital wellbeing as a dynamic construct. Commun Theory. 2020;31(4):932–955. doi: 10.1093/ct/qtaa024. [DOI] [Google Scholar]
  186. Vodanovich S, Sundaram D, Myers M. Digital natives and ubiquitous information systems. Inf Syst Res. 2010;21:711–723. doi: 10.1287/isre.1100.0324. [DOI] [Google Scholar]
  187. Volz D, Levy R (2021) Social media plays key role for domestic extremism, FBI director says. In: Wall Str. J. https://www.wsj.com/articles/social-media-is-key-amplifier-of-domestic-violent-extremism-wray-says-11618434413. Accessed 15 Oct 2021
  188. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359:1146–1151. doi: 10.1126/science.aap9559. [DOI] [PubMed] [Google Scholar]
  189. Wessels N, Gerlach J, Wagner A (2019) To sell or not to sell – antecedents of individuals’ willingness-to-sell personal information on data-selling platforms. In: Proceedings of the 40th international conference on information systems. Munich, pp 1–17
  190. Westerlund M. The emergence of deepfake technology: a review. Technol Innov Manag Rev. 2019;9:39–52. doi: 10.22215/timreview/1282. [DOI] [Google Scholar]
  191. WHO . Constitution of the World Health Organization. Geneva: World Health Organization; 1948. [Google Scholar]
  192. Wiener M, Cram WA, Benlian A, et al. Technology-mediated control legitimacy in the gig economy: conceptualization and nomological network. In: Hirschheim R, et al., editors. Information systems outsourcing. Cham: Progress in IS Springer; 2020. [Google Scholar]
  193. Wiener M, Cram A, Benlian A. Algorithmic control and gig workers: a legitimacy perspective of Uber drivers. Eur J Inf Syst Forthcom. 2022 doi: 10.1080/0960085X.2021.1977729. [DOI] [Google Scholar]
  194. Winter C, Neumann P, Meleagrou-Hitchens A, et al. Online extremism: research trends in internet activism, radicalization, and counter-strategies. Int J Confl Violence IJCV. 2020;14:1–20. [Google Scholar]
  195. Woodford A (2018) Expanding fact-checking to photos and videos – about Facebook. In: Facebook. Accessed 27 Aug 2021
  196. Zellers R, Holtzman A, Rashkin H, et al (2019) Defending against neural fake news. 1–21. arXiv:190512616
  197. Zhang X, Zhang R, Yue WT, Yu Y (2019) What is your data strategy? The strategic interactions in data-driven advertising. In: Proceedings of the 40th International conference on information systems. Munich, pp 1–9
