Abstract
Algorithmic worker surveillance and productivity scoring tools powered by artificial intelligence (AI) are becoming ubiquitous in the workplace. These tools are applied across white-collar, blue-collar, and gig economy roles. In the absence of legal protections and strong collective action capabilities, workers are in an imbalanced power position from which to challenge the practices of employers using these tools. Use of such tools undermines human dignity and human rights, and the tools themselves are built on fundamentally erroneous assumptions. The primer section of this paper provides stakeholders (policymakers, advocates, workers, and unions) with insights into the assumptions embedded in workplace surveillance and scoring technologies and into how employers use these systems in ways that impact human rights. The roadmap section lays out actionable recommendations for policy and regulatory changes which can be enacted by federal agencies and labor unions. The paper uses major policy frameworks developed or supported by the United States as the foundation for its policy recommendations: the Universal Declaration of Human Rights, the Organisation for Economic Co-operation and Development (OECD) Principles for the Responsible Stewardship of Trustworthy AI (OECD AI Principles), Fair Information Practices (FIPs), and the White House Blueprint for an AI Bill of Rights.
Keywords: Algorithmic surveillance, AI surveillance, AI governance, Human rights, Worker rights, AI bias
Introduction
The surveillance of workers and the obsession with tracking worker behavior as a means to measure productivity are not new phenomena. However, effective policy implementation and regulatory interventions for these practices are either non-existent or minimal. Some policy frameworks demand that products and services be “lawful.” However, when no law or oversight exists to protect workers against surveillance and productivity scoring technologies, workplaces are turned “into sites of experimentation with these systems” [1].
Increasing data collection and use of artificial intelligence (AI) technologies in the workplace affects all industries. This Policy Primer and Roadmap focuses on workplace surveillance and productivity scoring tools and practices. In two sections, the Primer provides a range of stakeholders—workers, policymakers, unions, employers and vendors—with insights into these technologies, and the Roadmap lays out actionable recommendations for change.
In the following sections, the stakeholders can find:
An overview of worker surveillance and productivity scoring tools in the market, and the adoption trends by employers.
A brief history of similar employer practices of worker surveillance and productivity scoring.
Insights on how human rights can be undermined, and how workers’ access to resources and opportunities can be limited by deployment of these tools, and
The erroneous assumptions made about these tools.
The final part provides a roadmap for stakeholders, and possibilities for different policies to be adopted.
The analysis brings together several major policy frameworks which should guide policy and regulatory decisions, especially in the United States. These are the Universal Declaration of Human Rights (UDHR) [2], the Organisation for Economic Co-operation and Development (OECD) Principles for the Responsible Stewardship of Trustworthy AI (OECD AI Principles) [3], the Fair Information Practices (FIPs), and the Blueprint for an AI Bill of Rights (Blueprint) [4].
The United Nations General Assembly adopted the UDHR in 1948 without a dissenting vote, recognizing that all individuals are entitled to an inherent, inalienable set of rights. The document was drafted by a committee chaired by Eleanor Roosevelt.
In late 2022, the White House Office of Science and Technology Policy announced the Blueprint, which focuses on the “use of surveillance technologies when juxtaposed with real-time or subsequent automated analysis and when such systems have a potential for meaningful impact on individuals’ or communities’ rights, opportunities, or access”, and hence on systems with a major impact on human rights. The Blueprint defines these technologies as “products or services marketed for or that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or retain data, identifying information, or communications concerning individuals or groups” [5]. The Blueprint also draws a parallel to the 1973 report Records, Computers, and the Rights of Citizens [6].
Allen and Rotenberg summarize the origins of FIPs as follows: “Fair Information Practices were first articulated in the United States Department of Health, Education and Welfare’s seminal 1973 report Records, Computers, and the Rights of Citizens [6]. Computer scientist Willis Ware was the chair of the Advisory Group responsible for the report and is credited with the creation of the phrase ‘fair information practices.’ The framework was immediately influential, as it provided the basis for the federal Privacy Act the following year. In most simple terms, Fair Information Practices allocate rights and responsibilities in the collection and use of personal information. Organizations that collect and use personal information take on the responsibilities, while individuals whose personal information is acquired obtain the rights. Most modern privacy law follows this structure” [7]. In the legislative process, Congress settled on eight principles for the Privacy Act: collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability [8].
The United States contributed significantly to the development of the OECD AI Principles announced in 2019 (and subsequently also adopted as the G20 AI Principles), which are now endorsed by more than fifty countries. These principles include inclusive growth, sustainable development and wellbeing; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability.
As a major policy document from the executive branch, the Blueprint brings fundamental rights and democratic values to the forefront of how we should evaluate technology and its impacts. The Blueprint also highlights the importance of the OECD AI Principles and FIPs. The document, a product of collaboration among different departments of the federal government, lays out five principles crucial to the protection of rights and democratic values. One can read the Blueprint as a call to workers:
Safe and effective systems: You should be protected from unsafe or ineffective systems. This includes protecting you from foreseeable harms from uses or impacts of automated systems.
Algorithmic discrimination protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
Data privacy: You should be protected from abusive data practices with built-in protections, and you should have choices over how data about you is used.
Notice and explanation: You should know an automated system is being used and understand how it can impact you.
Human alternatives, consideration and fallback: You should have access to appropriate human alternatives and other remedies for systems resulting in discrimination or other harms.
The primer
Overview
Digital worker surveillance refers to the use of digital tools to monitor worker activity, and employers use a variety of tools to conduct such surveillance. In recent years, databases have been released tracking the use cases for various kinds of new labor technology tools, including those for worker surveillance and productivity improvement [9, 10]. To gain a fuller picture of what modern workplace surveillance entails, it is worth exploring three examples in further detail. Prodoscore is worker-monitoring software that deploys a proprietary algorithm to assign workers a daily productivity score out of 100 [11]. More specifically, Prodoscore considers various inputs such as emails sent, phone calls made, messages on company messaging apps, and activity on databases. The scores are then released to managers and ranked, meaning that managers can assess how the productivity of various workers stacks up against that of others [12]. Another surveillance product, RemoteDesk, uses facial recognition technology to monitor workers who handle sensitive information in their jobs, for example workers who frequently see credit card information [13]. If the system detects that someone else is looking at the screen while a credit card is displayed, or that a recording device is in view, it sends an alert. However, the very same system can also send alerts when workers eat on the job. Workers delivering goods or food, workers at warehouse workstations, and those moving around within a building (such as construction or housekeeping workers) can all be subject to different kinds of surveillance. A third example is UPS retrofitting its delivery trucks with multiple sensors to track the break times drivers were taking and to further optimize deliveries [14]. As a result of installing these sensors, UPS was able to increase the total number of packages it delivered per day while requiring significantly fewer drivers.
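The mechanics of such tools can be illustrated with a minimal sketch. The snippet below is a hypothetical reconstruction, not Prodoscore’s actual algorithm, which is proprietary; the signal names, weights, and baselines are invented for illustration. It shows the core design pattern these products share: raw activity counts are weighted, capped, collapsed into a single 0 to 100 number, and ranked for managers.

```python
# Hypothetical illustration only: signal names, weights, and baselines are
# invented. Real products keep these design decisions proprietary.

WEIGHTS = {"emails_sent": 0.3, "calls_made": 0.3, "chat_messages": 0.2, "crm_updates": 0.2}
DAILY_BASELINES = {"emails_sent": 40, "calls_made": 10, "chat_messages": 60, "crm_updates": 25}

def daily_score(activity: dict) -> float:
    """Collapse raw activity counts into a single 0-100 'productivity' score."""
    score = 0.0
    for signal, weight in WEIGHTS.items():
        # Each signal is capped at a preset baseline, so the score rewards
        # hitting a target volume of *trackable* activity and nothing else.
        ratio = min(activity.get(signal, 0) / DAILY_BASELINES[signal], 1.0)
        score += weight * ratio * 100
    return round(score, 1)

def rank_team(team: dict) -> list:
    """Produce the manager-facing ranking described above."""
    return sorted(((name, daily_score(a)) for name, a in team.items()),
                  key=lambda pair: pair[1], reverse=True)

team = {
    "worker_a": {"emails_sent": 55, "calls_made": 9, "chat_messages": 80, "crm_updates": 20},
    "worker_b": {"emails_sent": 12, "calls_made": 2, "chat_messages": 15, "crm_updates": 5},
}
print(rank_team(team))  # worker_b may simply have spent the day on untracked, offline work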
The rise of surveillance technologies has been accompanied by a growing consciousness of the pervasiveness of such tools. In an era of smart devices and sensors all around us, estimates of the prevalence of worker surveillance technologies vary. However, all the available estimates make it clear that the adoption of these technologies is rapidly increasing. For instance,
A 2022 New York Times article found that eight of the ten largest private employers in America use some kind of productivity-tracking tool [15].
A study commissioned by ExpressVPN, in collaboration with Pollfish, surveyed 2,000 employers and 2,000 employees who work in a remote or hybrid capacity. Of the employers, 78% reported using employee monitoring software to track employee performance and/or online activity [16].
Gartner research shows the number of large employers using worker tracking tools has doubled since the beginning of the pandemic to 60%, with this number expected to rise to 70% within the next three years [17].
The Wall Street Journal reported on a 2022 survey conducted by the research group International Data Corp which found 67% of North American employers with at least 500 employees deploy employee monitoring software [18].
A Zety study on workplace surveillance found that 85% of employers use some form of workplace surveillance [19].
A Top10VPN report shows that global demand for employee surveillance software increased by 78% in January 2022 [20].
A Digital.com survey released in January 2022, which polled 1,250 US employers on their use of surveillance technologies, found that 60% of companies employing remote workers use tracking software [21]. Moreover, 88% of companies terminated workers after implementing the surveillance software. The survey also sheds light on the particular kinds of tracking functions these employers deploy: 76% use tools to track web browsing and application use, 60% capture random screenshots, 54% block specific content, and 44% track keystrokes. However, the rate of usage varies by industry and is, perhaps unsurprisingly, highest in low-wage industrial settings and in employment relationships with billable-hour work models, such as advertising and marketing (83% of companies reported using surveillance tools), construction (71%), and business and finance (60%).
Employers provide a wide range of justifications for surveillance technologies, most of which center around better control of workers and improving productivity. For example, employers claim the surveillance and productivity scoring tools are essential in enabling them to identify both productive and unproductive workers [15]. Again, the Digital.com survey is helpful in assessing employer motivations. When asked why they use surveillance software, the top reasons employers provided concerned understanding how workers use their time (79%), confirming that workers are working full days (65%), and ensuring that work equipment is not being used for personal purposes (50%) [21].
History of worker surveillance
Employers have been interested in surveilling workers and improving their productivity since arguably the beginning of the market revolution. The market revolution, which began in the early nineteenth century, was a pivotal moment in industrial history, as it marked one of the first times an increasing number of Americans went to work in factories for employers. Work in factories, governed by shifts, production quotas, and clocks, is fundamentally more regimented and controlled than the work Americans were previously doing on farms or in trade shops. Many historians have argued this period bore witness to some of the most fundamental changes in US history [22].
Once factory work became firmly established, employers turned to more pointed questions about how to heighten productivity by closely tracking worker activity. “The marriage between work, the hour, and pay became standard within the factory” [23]. Individuals like Frederick Winslow Taylor, one of the world’s first management consultants, authored The Principles of Scientific Management, laying out a precise vision for how workers could be better managed, monitored, and controlled for the sake of improving productivity [24]. There are likewise anecdotes from this era of Henry Ford pacing factory floors with a stopwatch to push worker efficiency, or hiring private investigators to learn how the personal lives of his workers might impede productivity [14].
Employers’ desire to track workers for the sake of productivity is thus not unique to the twenty-first century. However, it is necessary to note that the development of new technologies has made the task of surveillance far more feasible than it once was. After all, there is a natural upper limit to the human capital any company can deploy for the purposes of supervision: returning to the example of Henry Ford, only so many factory-floor managers or private investigators can be hired and deployed. New technologies, however, make it both easier and more cost-effective to track workers. Cameras and sensors can be installed practically anywhere, and the connected devices (such as smartphones, wearables, and laptops) necessary to conduct work can simultaneously be used as surveillance devices.
Two additional trends have further heightened the use of surveillance tools. First, in the last couple of decades an increasing number of Americans started telecommuting or working remotely; from 2005 to 2012, the number of American workers telecommuting grew by 79% [14]. The growing number of online platforms and applications also gave rise to gig work. Remote work then skyrocketed as a result of the COVID-19 pandemic [25], during which many employers pivoted toward remote work. Even as pandemic controls ease, remote and hybrid arrangements remain common practice. Given that fewer workers are now working in actual offices, companies have doubled down on surveillance tools as a means of regaining the control they once maintained in the office.
Worker surveillance, productivity scoring and impact on fundamental rights
Developers of AI and algorithmic systems promise that these systems benefit organizations by increasing the efficiency, effectiveness, and scalability of processes, “streamlining and redefining” the workplace, reducing costs, and standardizing the application of rules—and hence improve profitability [15]. There is also a group of AI applications on the market which promise employers different capabilities to track, monitor, and assess their workers. The power to have an all-seeing eye over the workforce is attractive to many employers. These technologies were once more prevalent in factory settings, and the gaze of algorithmic surveillance was, and still is, disproportionately on low-income workers. These workers are now spread across different work contexts such as logistics, hospitality, food and service delivery jobs, and gig work on online platforms [26]. Such jobs are also disproportionately held by workers of color. However, reduced costs and improved capabilities for data collection, processing, and retention allow surveillance to extend to pink- and white-collar workers too; worker surveillance is becoming a common phenomenon across all workplaces. When surveillance moves from the factory or warehouse floor to the devices workers use in other settings (such as home offices or vehicles) or even carry on their bodies, surveillance becomes inseparable—it ‘bleeds into’ workers’ private lives [11, 14]. As these workers become hyper-visible through surveillance systems, employers become more invisible behind algorithmic decision-making systems.
Worker surveillance
It is easier to notice the physical cameras around us. However, with surveillance technology available in many shapes and forms, it is not always easy for workers to know they are being monitored and that their data is being collected. In the US, employers can use such tools without informing their workers [27]. Workplace data collection powering surveillance can be achieved by a combination of hardware and software, as listed below.
| Hardware and software | Analysis | Data |
|---|---|---|
| Computing devices (laptops, tablets); smartphones; wearables (fitness trackers, smart watches, body cams); IoT devices (RFID sensors, thermal imaging, counters, WiFi routers, GPS); ID badges (fitted with microphones and accelerometers); cameras (CCTV, laptop camera activation); screen capture; keystroke logging; call recording; voice assistant recording; biometric recognition | Productivity tracking; risk assessment; culture fit; count of outputs; workplace analytics; insider threat; biometric ID verification for shift/workplace access | Outputs (tasks/items/transactions/sales completed); communication (email, text, chat, voice, collaboration tools); social (posts, comments, likes, third-party social media background checks); engagement (calendar activity, time spent online); search history (browser search terms, websites visited); location (access management, geolocation tracking, geofencing); fitness (activity, prescriptions); login (user ID and password capture) |
Productivity scoring
“Not everything that can be counted counts. And not everything that counts can be counted.”—paraphrased from William Bruce Cameron [28].
The attraction of new technologies and vendor claims of extracting actionable ‘productivity’, ‘risk’, or ‘fit’ scores for workers have given rise to a variety of black-box algorithmic products on the market [29]. These products collect a plethora of data points and compare them against subjective rules to provide a score for a worker or to infer certain behavioral characteristics. The scores can then be used by human managers to make determinations about a worker’s efficiency, productivity, or risk to the company’s assets and reputation. The scores also drive decisions about wages, benefits, promotions, disciplinary action, or even termination. At the extreme, these decisions can be automated, requiring no human manager to review and validate them.
When one thinks of how surveillance and scoring systems work, and how they are connected, it might be helpful to break the process down into smaller components. First, a method to track and record worker activity is necessary. Hardware such as company-provided devices (phones, tablets, wearable fitness trackers), cameras, wireless routers, and sensors can be used to collect raw data on worker communications, online activity, movement, work outputs, and so on. Then, once the data is collected, an algorithmic model is necessary to analyze it and make inferences or determinations about worker behavior and performance. Developers make various design decisions about how to collect data and how to build these AI models. At times, there may be legitimate reasons to install certain data collection technology to ensure workers’ safety and security. Alternatively, the employer might be required by law to record worker communications. However, outside of these very limited reasons, most of these technologies are built upon problematic design decisions. The choice of the term ‘surveillance’ over ‘monitoring’ in this paper is intentional: surveillance acknowledges the power employers hold over workers and the ubiquitous collection of data on worker communications, engagement, and interactions used mainly for the benefit of the employer. The data can then be used to control and manipulate work engagement and contractual negotiations (if a contract even exists).
Vendors of these scoring systems claim that the surveillance data collected can be used to infer the productivity, risk, or fit of workers in relation to their roles. These claims and embedded design decisions rest on fundamentally erroneous assumptions, such as the ability of technology to correctly capture a human’s complex nature, to infer emotions and sentiments, or the notion that human behavior can always be predicted. Use of surveillance and scoring technologies also infringes upon an individual’s rights and freedoms. These technologies and the assumptions embedded in them can be in direct contradiction with fundamental human rights [4]. Despite the impossibility of delivering on their marketing promises, these systems still find buyers among business decision-makers.
Human dignity: Surveillance first and foremost degrades human dignity. Even if workers know about surveillance technologies, they may not have the possibility or privilege of leaving a job over concerns about aggressive data collection or algorithmic decisions. If worker consent is requested at all, workers are asked to choose between their ability to earn a wage and their data being collected. Their choice naturally favors employment. In such an imbalanced power relationship between employer and worker, the consent cannot be counted as free or informed. Workers lose control of privacy over their own bodies, movements, and even social interactions [30]. Who gets to draw the boundary around what is crucial information for an employer? In the absence of protections by law or organized labor, workers are left to defend themselves against surveillance. The boundary line is drawn ‘upon’ their bodies.
Human dignity is undermined again in the scoring systems, as human complexity, engagement, aspirations, and creativity are reduced to points of data and spurious correlations. There is no longer a human story behind the interaction, nor the ability to ‘bring your whole self to work.’ The essence of the worker and the complexity of a human being and human interactions are boiled down to the data deemed important by vendors and employers, and to the data which can be collected.
Legal scholar Ifeoma Ajunwa also highlights that wearable data collection technologies in particular may create new legal challenges, such as the possibility that an employer engages in unlawful surveillance (defined under the National Labor Relations Act) “when it surveils employees engaged in concerted activities by observing them in a way that is ‘out of the ordinary’ and therefore coercive” [31]. Such practices also undermine fundamental principles such as the Fair Information Practices, which include collection limitation, purpose specification, use limitation, accountability, security, notice, choice, and data minimization [32]. For example, data initially collected by third parties (such as fitness trackers provided by wellness or insurance companies) via the employer can eventually be used in ways that restrict a worker’s access to resources and opportunities elsewhere [14].
Right to privacy: One of the most cited issues in worker surveillance is the infringement of privacy, and the right to privacy is considered a fundamental human right. In the United Kingdom, Barclays bank faces a potential $1.1 billion fine over alleged monitoring of employees [33]. In Germany, a data protection regulator fined the electronics retailer notebooksbilliger.de $12.6 million for using video cameras to surveil workers [34]. In the US, however, employers can collect information whenever workers use organization-provided devices or networks. In the absence of federal privacy regulation, a privacy regulator, or any laws limiting worker surveillance practices, the status quo allows employers to do as they see fit for their own interests. However, legal does not always mean ethical.
A recent OECD working paper, AI in the Workplace, highlights that the use of AI systems can “extend and systematize ethical failings and fundamentally change the relationship between workers and their managers” [35]. Some surveillance practices cross the line between work and private life, allowing employers to capture very private information about workers. For example, employers can (1) engage in social media surveillance, (2) conduct video surveillance in the office, (3) mandate that workers use smart assistants which record conversations or leave their laptop cameras on, (4) take screenshots of monitors at random times during the day, or (5) force workers to download mobile applications onto their personal phones which continue to collect information outside of working hours. In 2022, a Dutch court ruled that an employer requiring employees to keep webcams on for several hours a day and share their screens violated the right to respect for private and family life. In Germany, the data protection regulator fined the retail company H&M $41 million for the illegal surveillance of employees and for keeping “excessive” records on the families, religions, and illnesses of its workforce [36]. The European Court of Human Rights issued a similar ruling in 2017 [37]. Such intrusion can also lead to unintentional disclosure of information protected by Title VII of the Civil Rights Act of 1964 [38] (such as sex, race, color, national origin, religion, or sexual orientation) or the Americans with Disabilities Act (ADA). Although non-discrimination regulation prevents employers from making employment decisions based on this protected information, knowledge of such information can nevertheless lead to unconscious biases [26, 39]. The target of surveillance shifts from the work to the worker. Notice of protected information which would not otherwise have been known to the employer can create legal risks for the employer and opens the possibility of allegations of discrimination [40].
Right to expression: The ability to surveil a worker’s private and social interactions undermines freedom of expression. By monitoring emails, chats, and phone conversations, employers can gain access to workers’ thoughts—without discriminating between personal and professional communications. Knowledge of surveillance can force workers to self-filter or self-regulate their expressions and ideas. Paraphrasing Foucault’s ‘technologies of the self,’ Manokha highlights the power of surveillance to induce self-restraint and self-discipline in individuals [41]. In this case, workers aware of being under surveillance may end up restraining themselves without any coercion or use of force by employers [42]. Employers’ interest in surveilling communication also spills over into personal lives. More and more companies are interested in worker or job applicant social media accounts [43], and some have even patented audio technology to eavesdrop on conversations among workers and customers [44]. Some companies demand login access to social media accounts to enable surveillance of these accounts. In states where this boundary is protected by law, employers are able to continue the practice via third-party vendors. These vendors parse the social media presence and interactions of both candidates and workers and provide ad hoc or ongoing risk scoring to employers. Risk scoring models can create spurious correlations, yet many employers still use the outcomes as third-party assessments for their employment decisions. Knowing that employers can see and act upon their social media posts can prevent workers from expressing their true identities (e.g., sexual orientation, religion, ability) outside of the workplace. Workers may also refrain from posting about social, economic, political, or other societal issues, which can eventually result in significant societal impacts.
Right to data protection: The data collection enabled by AI surveillance technologies is ubiquitous and pervasive. Without federal privacy legislation or robust worker protections, employers not only collect data but can also share it with third parties for different purposes. Workers may not be able to access the data collected about them or have any say over what the collecting entity does with it. Most of the time, workers may not even understand the full complexity of the data, the inferences made about them, or the extent of possible impacts or harms. Studies by both the UC Berkeley Center for Labor Research and Education [1] and CoWorker.org [45] state that such data collection lacks clear and consistent safeguards. A breach of this data can impact a worker’s access to benefits, resources, and opportunities outside of the workplace.
Right to collective action and power: The nature of surveillance creates one party which makes the decision to surveil, collects the data, and benefits from its conclusions, and another party which is impacted by that decision. When workers try to reduce the power imbalance through individual resistance or collective action, the data can also be used to suppress protected collective activity such as unionization or grievance filing. In other words, workers without protection “lack bargaining power to sufficiently fight invasive forms of surveillance, and surveillance is even being used to deter and prevent unionization” [46].
History offers many examples of corporations hiring private investigators to surveil the activities of workers to prevent collective action and break strikes [47]. A 1987 report by the United States Office of Technology Assessment, titled “The Electronic Supervisor: New Technology, New Tensions”, provides a historical landscape analysis of the tensions and considerations created by electronic employer surveillance systems. The report lists the main concerns as privacy, fairness, and quality of work life. The factors included in fairness are “reasonable standards, understanding by workers of the extent and use of the monitoring system, ability of workers to contest or correct records, and participation by workers in the design of the system.” The report makes clear that “there are no legal requirements in U.S. law that surveillance be ‘fair,’ jobs be well-designed or employees be consulted about work standards, except insofar as these points are addressed in union contracts” [48]. The report acknowledges both the low levels of unionization in the United States and how the surveillance issue created more motivation for collective action in some previously unorganized firms.
Unfortunately, 35 years after this report, unionization rates are lower than the 1987 rates, technology allows for more invasive data collection, and unions’ internal capabilities to counter these surveillance practices leave much to be desired. With the ability to collect information ubiquitously, employers can use emerging technologies to exert power over workers. With such an information disadvantage, algorithms “act as a force-multiplier for the power held by firms, with no balancing agent on the side of workers” [49]. In 2021, Spain passed a law requiring online delivery platforms to inform labor unions of how algorithms affect workers’ working conditions [50].
Employers are obliged to “file ‘Surveillance Reports’ to report certain expenditures and arrangements they make in connection with labor disputes” [51, 52]. These expenditures clearly include surveillance technologies and activities. However, because workers and unions are rarely aware of covert surveillance practices, it is hard to hold employers accountable to their transparency obligations or to challenge unfair practices. Scholars Pasquale and Citron advise that “secrecy is a discriminator’s best friend: unknown unfairness can never be challenged, let alone corrected” [53]. Establishing workers’ data rights under collective agreements not only protects workers but also prevents the power of unions from diminishing [54].
Right to work and right to just and favorable remuneration: As per the Universal Declaration of Human Rights, every person has the right to work, to just and favorable conditions of work, and to equal pay for equal work, and everyone who works has the right to just and favorable remuneration ensuring for themselves and their family an existence worthy of human dignity [2].
Emerging AI technologies increasingly allow previously disparate data to be connected. An investigative article by ProPublica details how software sold to landlords provides them with information on occupancy levels and rent amounts in their area, along with the ability to communicate with each other over the platform [55]. Whereas previously landlords had to invest significant resources to collect this kind of data individually, such platforms now give users continuous access to up-to-date information. Access to such information can be used to reduce competition and manage vacancies in ways that drive rent prices above their market values. A parallel can be drawn for wages and worker rights. Tools like Argyle provide aggregated workforce financial data to employers through applicant tracking systems, and to insurance providers, lenders, and credit card issuers through a single API [56]. Argyle’s vision is to provide not only financial data but a “holistic view of a worker’s identity including typical hours, work trajectory, reputation and more” [57]. In other words, this is a consolidated way for employers to see a candidate’s employment history and other compensation details before they make an offer. Such asymmetrical information power means an employer can offer a less-than-fair wage or cooperate with other employers to suppress wages. Argyle claims to have profiles for more than 175 million workers, covering 80% of the US workforce [56]. While the vendor positions itself as a “third-party verification service which ‘allows’ workers to securely share their income, job title, and proof of employment information with lenders, background check companies, human resources, or any other party they choose” [56], it mentions nothing of the massive data collection, use, and future risks for workers. Some workers may become permanently locked out of employment opportunities due to the recommendations of systems used by many employers in an industry.
When algorithmic systems become connected to each other for inputs, or when the use of aggregated systems becomes more prevalent in pre-employment decisions, a separate risk emerges: a biased, erroneous, or manipulated outcome from one system becomes a direct input to another decision-making system. With such interconnected systems, workers may be locked out of affordable housing, insurance, healthcare, and similar systems [58].
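A minimal sketch can make this propagation concrete. Everything in it is hypothetical; the scoring rule, the threshold, and the screening logic are invented for illustration. The point is structural: the downstream system treats the upstream score as ground truth, so a spurious flag in one context silently becomes a denial in another.

```python
# Hypothetical two-stage pipeline: a flawed upstream 'risk' score becomes
# a direct, unquestioned input to an unrelated downstream decision.

def employment_risk_score(flagged_posts: int, productivity: float) -> float:
    # Stage 1: a vendor's opaque model. Suppose its social media parser
    # spuriously flags benign posts (a known failure mode noted above).
    return min(1.0, 0.15 * flagged_posts + (100 - productivity) / 200)

def tenant_screening(inherited_risk: float, income: float, rent: float) -> bool:
    # Stage 2: a downstream system consumes the score as ground truth,
    # with no visibility into how it was produced.
    return inherited_risk < 0.5 and income >= 3 * rent

risk = employment_risk_score(flagged_posts=4, productivity=90.0)  # 0.65, driven by false flags
print(tenant_screening(risk, income=5400, rent=1700))  # False: the housing denial inherits the error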
Validity and black-box decisions: Vendors developing scoring algorithms tend to make many promises about the capacity of their products without disclosing how the scores are calculated or what design decisions are made within the system. If a client demands to know the science behind the system, the house of cards may fall apart. Instead, it is much easier for a vendor to hide behind intellectual property (IP) protections or to suggest one should trust the “neutral” technology. However, lack of vetting can expose employer clients to liability [40]. A client should and can demand transparency. Unfortunately, since both vendors and employers benefit from these technologies in different ways, questions of scientific validity, or of whether the systems should exist in the first place, are not a priority.
Even when an employer is aware that the technology is not delivering on its promise, it might continue the practice because it at least provides a way to collect information about worker activity. The employer may then choose to fix the issue with another layer of surveillance. For example, when an AI system tracking the movements of workers in an Amazon warehouse fails, video footage is sent to workers in India and Costa Rica, who provide input to improve the accuracy of Amazon’s machine learning surveillance tools. These workers have “no idea where this particular data [is] going…and what exactly is happening in the backend.” The remote workers were also unaware that they themselves were being monitored through screen and mouse activity [59].
Right to due process: “Data-centric technologies hide, obscure, and validate employer behaviors behind an algorithm” [60, 61]. Scoring can lead to automatic penalties in wages and shift distributions, and sometimes even to loss of a job [15]. Without understanding how surveillance and productivity scoring algorithms are used to make determinations about their wages, benefits, or work conditions, and without unions securing safeguards in contract clauses, “workers have few pathways to contest harmful employer decisions like discrimination and wage theft” [62]. In many jurisdictions, workers also face the additional challenge of algorithms protected by intellectual property legislation: even if they have the means to analyze algorithmic models, workers or unions may still not have access to them. Workers surveilled and scored by these algorithms need enhanced rights, such as a right to procedural data due process [63]. In the US, ‘at will’ employment arrangements, used in most low-income jobs, allow both employers and workers to terminate the relationship at any time without having to provide a reason. However, many other employment decisions could still benefit from due process requirements.
Normative judgements: When scoring models are created, developers make certain decisions, including what activity to collect data on, or in other words, what behavior or activity should count towards productivity or risk. Developers make these decisions based on the technical possibility of collecting a particular set of data and on what data should be accepted as a proxy for productive work. They make normative determinations about what ‘normal’ or ‘typical’ productivity should look like, then compare the data collected about workers against those norms. They decide on labels and categorize workers into them. In reducing humans to standard categories, developers dehumanize and depersonalize workers [64]. In making these decisions, developers also embed their own values, experiences, culture, and biases into the algorithms they develop [65]. A recent New York Times article on worker productivity tracking articulates this issue: “the working world’s new clocks are just wrong: inept at capturing offline activity, unreliable at assessing hard-to-quantify tasks and prone to undermining the work itself” [15]. The “choices in which factors to prioritize, or their failure to specify all relevant factors, can result in unanticipated consequences” [102].
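A short sketch shows how such a normative judgement gets hard-coded. The event allowlist below is hypothetical, but it captures the pattern the Times quote describes: only digitally observable events ‘count’ as work, so an hour of offline reading or mentoring is indistinguishable from an hour of absence.

```python
# Hypothetical design decision: only these digitally observable events
# 'count' as work. The allowlist itself is the normative judgement.
PRODUCTIVE_EVENTS = {"keystroke", "mouse_move", "email_sent", "app_focus"}

def active_minutes(event_log: list) -> int:
    """Count the minutes containing at least one allowlisted event."""
    return len({minute for minute, event in event_log if event in PRODUCTIVE_EVENTS})

# 09:00-09:30, typing at the keyboard: 30 tracked minutes.
log = [(m, "keystroke") for m in range(30)]
# 09:30-10:30, reading a printed report and mentoring a colleague:
# no trackable events are emitted, so the hour simply does not exist.
print(active_minutes(log))  # 30, out of 90 minutes actually worked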
By measuring everyone against a certain norm and requiring similar behavior, these algorithmic systems create homogeneity. Charlie Munger, vice chairman of Berkshire Hathaway and one of the most successful business investors, says that “Mimicking the herd invites regression to the mean (merely average performance)” [66]. Companies globally spend significant time and resources to attract candidates with diverse backgrounds, experiences, identities, and perspectives. When surveillance and scoring systems are used to determine a worker’s conformity to certain norms and behaviors and to discourage differences, employers end up sabotaging their own efforts in the long run.
Context and cultural specificity: Just as developers embed their own normative judgements into scoring systems, they also claim universality for their products. However, anyone who has traveled to different parts of a country, or internationally, can attest that cultural differences find their counterparts in work relations. Different cultures prioritize different behaviors at work and vary in how workers interact with one another.
Even within a homogeneous work environment, scoring systems still cannot capture the complexity of work, nor do they take into account the external factors or circumstances which might be affecting a worker’s ability to deliver an output or complete a task within a certain amount of time. Without appreciating the context of worker interactions and the totality of the effort which goes into creating an output, these systems prioritize quantity and quantification [63] over quality and depth of work. Data is not independent of its context. Some workers subject to productivity algorithms characterize the situation as “infuriating,” “soul crushing,” and a “kick in the teeth,” as their employers had failed to grasp the totality of the tasks making up their jobs [11]. The expectation from employers is for workers to be robot-like subjects. This approach leaves no room for differences and diversity, and no appreciation for offline work such as thinking, reading printed material, brainstorming with co-workers, or mentoring other workers.
Disability discrimination: When these systems make judgements about what should be considered typical or expected productivity, they can also harm people with disabilities. Some assessments of the ADA [39] suggest that “If an employer adopts a faster pace-of-work standard and enforces it rigidly, it could run afoul of the ADA’s prohibition against ‘standards, criteria, or methods of administration... that have the effect of discrimination on the basis of disability’” [67]. More than half of all disabilities are invisible, and disabilities are highly diverse, making them “virtually impossible to analyze at scale” [68]. In addition, only 21% of employees with disabilities disclose them to their employers’ human resources departments [69]. Access to biometric or health data collected by wearables or via a worker’s social media accounts can give managers or employers additional information from which to infer the ability or health condition of workers, leading to possibly biased decisions or spurious inferences. Even if the information did not play a role in an adverse employment decision, employers could be alleged to have discriminated on the basis of a disability or perceived disability [40].
The technical shortcomings of an AI system, such as the inaccuracy of devices, can also cause unintended harm. For example, wearables collecting health and wellness information may not be accurate in the first place [70, 71] but can still be used for work-related determinations. When the scientific validity of the system and possible technical biases are not questioned, workers can be subjected to discriminatory outcomes. Imagine, further, a scenario where the developer or employer is not aware of a bias in the system: assistive devices (for example, screen readers) may interfere with the accuracy of the data collected, and if scoring systems disadvantage neurodivergent people, those with slower reading speeds, or those multitasking, then the outcomes may be discriminatory.
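As a hypothetical illustration of this failure mode, consider an ‘activity’ metric built on raw input events. The numbers below are invented, but the mechanism is the one described above: a screen reader user who navigates largely by audio and keyboard shortcuts emits far fewer trackable events for the same work product, and the proxy silently encodes that disadvantage.

```python
# Hypothetical metric: input events per minute as a proxy for 'engagement'.
# This is a design choice, not a validated measure of work output.

def activity_score(keystrokes: int, mouse_events: int, minutes: int) -> float:
    return (keystrokes + mouse_events) / max(minutes, 1)

# Two workers producing the same report over the same 8-hour day:
typical_input = activity_score(keystrokes=2400, mouse_events=900, minutes=480)
screen_reader = activity_score(keystrokes=1400, mouse_events=30, minutes=480)  # audio navigation

print(round(typical_input, 2), round(screen_reader, 2))  # 6.88 vs 2.98: same output, less than half the 'score'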
Erosion of trust: The history of worker surveillance provides ample evidence of employers choosing the easier route of surveilling workers rather than investing in establishing trust and a shared vision with their workers [72]. In many cases, employers choose top-down, hierarchical methods of control and shaping. The alternative is co-creation and the determination of shared values and vision, with workers trusted to add value and to keep themselves and their employers accountable to agreed outcomes. The absence of trust from employers leads to an erosion of trust and loyalty from workers. The work-from-home arrangements that emerged from the COVID-19 pandemic created a panic environment for many employers. A Harvard Business Review article highlights the “negative spiral in which manager mistrust leads to micromanagement, which then leads to drops in worker motivation, further impairing productivity,” a spiral that deepened with the COVID-19 pandemic [73]. A recent Microsoft report notes that 85% of leaders say the shift to hybrid work has made it challenging to have confidence that workers are being productive [74]. Whether it is used to track remote workers, those operating in large physical settings (e.g., warehouses, shops), mobile workers (e.g., drivers, delivery workers), or those who are ‘quiet quitting,’ the use of surveillance and productivity tools breaks trust relationships in irreparable ways [75] and can backfire, resulting in less productivity [76, 77].
Impact on health and safety: Increased pace-of-work and productivity expectations which leave no room for rest, thinking, or corrective action lead to more workplace accidents [78, 79]. The “electronic sweatshop” requires repetitive, fast-paced work demanding constant alertness and attention to detail [80]. More repetition also leads to more severe physical injuries. The research literature shows increased stress associated with workplace performance scoring technologies [81–83]. Loss of autonomy over work, stress, and ubiquitous observation increase the risk of psychological harm and mental health problems for workers [67].
Sometimes employers frame productivity scoring systems as ‘games.’ In other words, under the guise of turning work into competitive metrics, employers pit workers against each other. Employers make the productivity metrics visible to all, potentially causing further stress for workers. Even when such competition is used as part of a wellness program, normative judgements of fitness and health are imposed upon workers. For example, expecting workers to meet certain fitness standards and then making the metrics of those not meeting the ‘expectations’ (e.g., weight-loss trackers) visible to everyone can be considered a form of body-shaming. The race to meet the demanded metrics, the stress, and the toll on physical health eventually lead to worker burnout [84]. In workplaces where one worker is easily replaceable by another without consideration of the human behind the data, and in the absence of any legal consequence, employers have no incentive to improve conditions.
Feedback loops and behavioral change: Algorithmic decision-making systems change the behavior of their users and of those impacted by their outcomes. They change and shape the culture and priorities of the implementing organization in many ways. By incentivizing workers to focus on a particular task rather than on innovation and experimentation, “the organization sends a message to its workers simply by the tasks it chooses to monitor” [85]. Productivity systems may have the unintended consequence of workers spending more time on a particular activity which is counted and rewarded than on achieving results; the metric becomes an end in itself. Surveillance works to discipline workers to conform to expected behavior which can be measured [64]. When workers’ autonomy and agency are reduced, the result is also a reduction in the capacity to be creative and in “the ability to think or sometimes act out of the box” [35].
When workers are under surveillance and worry about their scores impacting their compensation or the future of their work, they naturally shift into more self-protective behavior. Instead of collaborating with their co-workers or sharing their knowledge about more efficient ways of completing tasks, individual workers may become more private, distrusting, and competitive [86]. They might also feel the need to game the system. Whether this need emerges as a reaction to oppressive actions by employers or from a desire to increase one’s scores, and possibly wages and benefits, gaming the system means finding ways to appear productive while in reality refusing to do what is expected. In response to a lack of trust from management, workers may seek to circumvent intrusive managerial oversight [87].
Hypervigilance about continuous surveillance and datafication also demoralizes workers and takes time away from other tasks that may be meaningful or necessary for long-term wellbeing. Scoring only certain kinds of activities can force workers to make decisions more quickly, without time to delve deeper into an issue, case, or condition. Some researchers even suggest that gamified systems in the workplace can complicate and subvert ethical reasoning [88, 89]. For jobs which require frequent decision-making, such as health, human, or social services, such behavioral change can have catastrophic consequences for the people dependent on the decisions made.
Shoshana Zuboff highlights that in the workplace, “invasive technologies are normalized among captive populations of employees” [90]. When an individual accepts work surveillance and scoring technologies as inevitable, the result can be a normalization of similar technologies in other parts of life. The individual internalizes the scored society [91], and invasive and questionable techniques are normalized. Pasquale and Citron warn us that “the menace of scoring in the Big Data economy is the volume, velocity, and variety of information that could be fed into a score, and that these scores could become decisive” [53] in a variety of different contexts. A spectrum of products is already in use to score individuals, ranging from assessments for credit, insurance, employment, and education to immigration and even criminal justice. The practices workers are forced to accept in workplaces will not stay limited to employment decisions.
The roadmap
There are several policies that can be deployed to better protect workers from growing digital surveillance and productivity scoring systems. Circling back to the existing policy and regulatory commitments of the US, we can establish a path forward.
Extend 1974 Privacy Act protections to labor regulations
FIPs were inspired by the Code of Fair Labor Practices and the Fair Labor Standards Act (FLSA), which established minimum wage, overtime pay, recordkeeping, and child labor standards for workers in both the private and public sectors [8]. FIPs “set out the rights and responsibilities for the collection and use of personal data…with emphasis on actual practices or standards, as well as legal rights” [92]. When Congress passed the Privacy Act of 1974, eight principles were included to govern information about individuals maintained in federal agency databases. Going back to this original inspiration, labor regulations should be updated: the 1974 Privacy Act’s protections should be extended to existing labor laws, making the responsibilities of employers and the rights of workers clear, as in the principles below (a minimal code sketch following the list illustrates how two of these principles might be enforced in practice).
Collection limitation/data minimization: Employers must only create, collect, use, process, store, maintain, disseminate, or disclose data directly relevant and necessary to accomplish a legally authorized purpose.
Data quality: Data collected must be relevant to the purposes for which it is to be used. Accuracy, relevance, timeliness, and completeness of the data must be ensured by employers.
Purpose specification: Employers must be transparent about their intended use of the data prior to collecting any data.
Use limitation: When employers collect data for the disclosed purpose, they cannot deploy the data outside of the originally intended purpose of collection.
Security safeguards: Employers must ensure the data they collect is safely stored.
Individual participation: Workers must have the right to receive the data which has been collected about them or to confirm whether such data has been collected, to have the data relating to them communicated within a reasonable time, and to have their data erased, rectified, completed, or amended if they so choose.
Openness: Employers must be forthright about the ways in which they develop policies related to data collection.
Accountability: Employers must have an accounting mechanism ensuring the principles enumerated above are followed.
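As a minimal, hypothetical sketch of what several of these principles could mean in code rather than policy alone, the class below (its name and API are invented for illustration) refuses any read whose purpose was not disclosed to the worker before collection, and lets workers erase their records:

```python
# Hypothetical sketch: purpose specification, use limitation, and individual
# participation enforced at the storage layer rather than by policy alone.

class PurposeLimitedStore:
    def __init__(self):
        self._records = {}
        self._disclosed = {}  # purposes disclosed to the worker before collection

    def collect(self, worker_id: str, data: dict, disclosed_purposes: set):
        # Purpose specification: purposes must be stated before collection.
        self._records[worker_id] = data
        self._disclosed[worker_id] = set(disclosed_purposes)

    def read(self, worker_id: str, purpose: str) -> dict:
        # Use limitation: any undisclosed purpose is refused outright.
        if purpose not in self._disclosed.get(worker_id, set()):
            raise PermissionError(f"purpose '{purpose}' was never disclosed to {worker_id}")
        return self._records[worker_id]

    def erase(self, worker_id: str):
        # Individual participation: workers can have their data erased.
        self._records.pop(worker_id, None)
        self._disclosed.pop(worker_id, None)

store = PurposeLimitedStore()
store.collect("w42", {"badge_events": 17}, disclosed_purposes={"payroll"})
print(store.read("w42", purpose="payroll"))   # permitted: disclosed before collection
# store.read("w42", purpose="marketing")      # would raise PermissionError (use limitation)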
It is worth noting here that the European Union’s General Data Protection Regulation (GDPR) Article 5 lists all FIPs other than Individual Participation (which is covered in separate forms in other GDPR articles) [93].
When S. 516, the Privacy for Consumers and Workers Act of 1991, was proposed, Marc Rotenberg and Gary Marx provided separate testimonies to lawmakers [94]. Although the proposal did not pass at the time, their individual additions are worth noting to strengthen any future labor legislation.
Marx explained the techno-fallacies salient in worker surveillance practices, and offered:
Validity principle: need to have reasonable grounds for having confidence in the accuracy and worth of the information collected;
Redress principle: those subject to privacy invasions have adequate mechanisms for discovering and being compensated for violations; and
Safety net or equity principle: a minimum threshold of privacy is available to all.
Rotenberg strongly supported the proposed legislation, requested that backdoor exclusions be removed, and proposed additional safeguards including:
Worker participation: Workers must be involved in shaping the technology impacting them. Employers wishing to collect data should seek greater means of co-determination and ensure workers participate alongside employers in setting the terms according to which their data is used. In addition to the individual participation of the worker, this could be a collective right for a group of workers to meaningfully participate in and co-decide all matters related to how they are assessed. Such co-determination would include assessment of the impact of a possible algorithmic system, of whether the system is a beneficial solution, and, if so, decisions on the relevant data, algorithm design, and governance of such systems.
Business responsibility: The personal information collected on employees must be safeguarded.
Human review principle: Technology should assist but not replace human judgment when important employment decisions are made.
Implementation of the Blueprint for an AI Bill of Rights
The recently released Blueprint also emphasizes that employee surveillance systems should be designed with greater consideration of employee rights. The document outlines a vision for the future development and use of AI systems which respects rights, democratic values, and fundamental principles. That vision now needs implementation. The Blueprint specifically recommends that
“surveillance technologies be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties”, and
“continuous surveillance and monitoring should not be used in… work…where the use of such surveillance technologies is likely to limit rights, opportunities, or access.”
The Blueprint reiterates fundamental rights and current civil rights and anti-discrimination legislation. Its underlying vision should be used as a tool to support existing employment laws, labor relations regulations, and workplace safety laws. The Department of Labor and the relevant federal agencies (such as the Equal Employment Opportunity Commission, the National Labor Relations Board, and the Occupational Safety and Health Administration) should include implementation of the Blueprint in their strategic plans. While the existing legal framework and case law can remediate harms, the EEOC can also use other tools in its toolkit, such as a Commissioner Charge or a directed investigation, to identify possible systemic discrimination [40, 95, 96]. As noted earlier, workplace surveillance is much more common than it once was, and it is therefore important and necessary that these agencies treat this issue as one falling within their remit.
More enforcement from the Federal Trade Commission
In 2021, the Federal Trade Commission warned companies about unfair or deceptive practices, including the sale or use of biased algorithms. The warning succinctly highlighted that companies should use “AI truthfully, fairly, and equitably” or otherwise face the FTC’s enforcement powers under the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. The agency recommended that companies (1) limit where and how they use AI models in light of any shortcomings, (2) make sure their AI does not discriminate on the basis of race, gender, or other protected class, (3) embrace transparency and independent assessments, (4) not exaggerate what a product can do or whether it can deliver fair or unbiased results, and (5) be honest about the source of the data used in their algorithms and how the outcomes will be used.
Commercially available worker surveillance and productivity scoring products put both vendors and employers within the scope of possible FTC investigation. The FTC's warning gives companies the opportunity to improve their practices and hold themselves accountable first, “or be ready for the FTC to do it” for them [97].
Build stronger capacity across worker unions
Worker surveillance and productivity scoring tools can be used first to suppress unionization and then to weaken union protections. To protect workers' rights effectively, unions need to build stronger internal capacity and capabilities.
To echo workers’ rights scholar Dr. Colclough, “unions need a foundation of knowledge on the different types of digital technology and importantly on the instructions given to artificial intelligence, algorithmic systems” [98], and “unions need to urgently revamp their strategies and find ways to cooperate across borders … and ensure that all workers, in all forms of work, have the same social and fundamental rights” [99]. For those systems where both employers and workers find mutual benefit, the design and governance of the algorithms should include workers and their representatives [100].
Unions should also use their knowledge to support both the NLRB and the FTC in their enforcement activities. Unions can provide the NLRB with information on which employers are using surveillance technologies and therefore should be obliged to file Surveillance Reports [52]. Unions can also inform the FTC about companies that are not using “AI truthfully, fairly, and equitably.”
Unions could also ask the EEOC and the Office of Federal Contract Compliance Programs (part of the U.S. Department of Labor) to issue opinion letters tailored to questions on worker surveillance and productivity scoring algorithms. Such opinion letters could clarify which practices and applications are lawful [40].
Algorithmic systems that violate fundamental human rights and human dignity should not be legitimized by principles [101] or through the use of risk management systems; they should not be used in the first place. Systems that can support workers’ working conditions, work outcomes, safety, and wellbeing while benefiting employers should, on the other hand, be designed, used, and governed in accordance with the safeguards above.
Funding
No funding was received by either author to assist with the preparation of this manuscript. One of the authors is an unpaid Editorial Board Member of the Springer journal AI and Ethics.
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Merve Hickok, Email: merve@lighthousecareerconsulting.com.
Nestor Maslej, Email: nmaslej@stanford.edu.
References
- 1.Bernhardt, A., Kresge, L., Suleiman, R.: Data and algorithms at work: the case for worker technology rights. Center for Labor Research and Education, University of California, Berkeley. https://laborcenter.berkeley.edu/data-and-algorithms-at-work/ (2021). Accessed 1 Mar 2023
- 2.United Nations: Universal Declaration of Human Rights. https://www.un.org/en/about-us/universal-declaration-of-human-rights (1948). Accessed 1 Mar 2023
- 3.OECD: AI principles: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; accountability. https://oecd.ai/en (2019). Accessed 1 Mar 2023
- 4.United States Department of Health, Education and Welfare: Records, Computers and the Rights of Citizens (seminal 1973 report). https://www.justice.gov/opcl/docs/rec-com-rights.pdf. Accessed 1 Mar 2023
- 5.The White House Office of Science and Technology: FACT SHEET: Biden-Harris administration announces key actions to advance tech accountability and protect the rights of the American Public. https://www.whitehouse.gov/ostp/news-updates/2022/10/04/fact-sheet-biden-harris-administration-announces-key-actions-to-advance-tech-accountability-and-protect-the-rights-of-the-american-public/ (2022a). Accessed 1 Mar 2023
- 6.The White House Office of Science and Technology: A Blueprint for an AI bill of rights. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf (2022b). Accessed 1 Mar 2023
- 7.Allen, A., Rotenberg, M.: Privacy Law and Society Casebook, p. 760. West Academic Publishing, Saint Paul (2015)
- 8.U.S. Code 29 (29 U.S.C.). https://uscode.house.gov/browse.xhtml. Accessed 1 Mar 2023
- 9.Coworker.org: Bossware and Employment Tech Database. https://home.coworker.org/worktech/ (2021). Accessed 1 Mar 2023
- 10.Top 10 VPN: Employee monitoring software demand trends 2020–22. https://www.top10vpn.com/research/covid-employee-surveillance/ (2022). Accessed 1 Mar 2023
- 11.Corbyn, Z.: ‘Bossware is coming for almost every worker’: the software you might not realize is watching you. The Guardian. https://www.theguardian.com/technology/2022/apr/27/remote-work-software-home-surveillance-computer-monitoring-pandemic (2022). Accessed 1 Mar 2023
- 12.Prodoscore. Frequently asked questions. https://www.prodoscore.com/resources/faq/. How prodoscore works. https://www.prodoscore.com/how-it-works/. Prodoscore Blog. what does employee productivity actually mean? https://www.prodoscore.com/blog/what-does-employee-productivity-actually-mean/. Accessed 1 Mar 2023
- 13.RemoteDesk Blog. Is your remote workforce working or pretending to work? https://www.remotedesk.com/blog/article/is-your-remote-workforce-working-or-pretending-to-work/. RemoteDesk. Webcam Monitoring. https://www.remotedesk.com/solutions/webcam-monitoring. Accessed 1 Mar 2023
- 14.Ajunwa, I., Crawford, K., Schultz, J.: Limitless worker surveillance, 105 Cal. L. Rev. 735. https://ssrn.com/abstract=2746211 (2017). Accessed 1 Mar 2023
- 15.Kantor, J.: The rise of the worker productivity score. New York Times. https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html (2022). Accessed 1 Mar 2023
- 16.ExpressVPN: Remote workforce survey. https://www.expressvpn.com/blog/expressvpn-survey-surveillance-on-the-remote-workforce/#ethics (2021). Accessed 1 Mar 2023
- 17.Gartner Insights: The right way to monitor your employee productivity. https://www.gartner.com/en/articles/the-right-way-to-monitor-your-employee-productivity (2022). Accessed 1 Mar 2023
- 18.Ziegler, B.: Should companies track workers with monitoring technology? The Wall Street Journal. https://www.wsj.com/articles/companies-track-workers-technology-11660935634 (2022). Accessed 1 Mar 2023
- 19.Zety: How do employees view workplace surveillance? https://zety.com/blog/workplace-surveillance (2022). Accessed 1 Mar 2023
- 20.Migliano, S.: Employee monitoring software demand trends 2020–22. https://www.top10vpn.com/research/covid-employee-surveillance (2022). Accessed 1 Mar 2023
- 21.Digital: 6 in 10 employers require monitoring software for remote workers. https://digital.com/6-in-10-employers-require-monitoring-software-for-remote-workers/ (2022). Accessed 1 Mar 2023
- 22.Sellers, C.: The Market Revolution. Oxford University Press, Oxford (1991)
- 23.Snyder, B.: The Disrupted Workplace: Time and the Moral Order of Flexible Capitalism. Oxford University Press, Oxford (2016)
- 24.Taylor, F.W.: The Principles of Scientific Management. Dover Publications (1997)
- 25.Morrison, S.: Just because you’re working from home doesn’t mean your boss isn’t watching you. Vox. www.vox.com/recode/2020/4/2/21195584/coronavirus-remote-work-from-home-employee-monitoring (2020). Accessed 1 Mar 2023
- 26.Rejouis, G.M.: Why is it OK for employers to constantly surveil workers? Slate. https://slate.com/technology/2019/09/labor-day-worker-surveillance-privacy-rights.html (2019). Accessed 1 Mar 2023
- 27.Barbaro, M.: The rise of workplace surveillance: is your productivity being electronically monitored by your bosses? New York Times. https://www.nytimes.com/2022/08/24/podcasts/the-daily/workplace-surveillance-productivity-tracking.html (2022). Accessed 1 Mar 2023
- 28.Cameron, B.: Informal Sociology: A Casual Introduction to Sociological Thinking. Random House (1963)
- 29.Cyphers, B., Gullo, K.: Inside the invasive, secretive “Bossware” tracking workers. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2020/06/inside-invasive-secretive-bossware-tracking-workers (2020). Accessed 1 Mar 2023
- 30.Weber, J.: Should companies monitor their employees' social media? Wall Street Journal. https://www.wsj.com/articles/should-companies-monitor-their-employees-social-media-1399648685 (2014). Accessed 1 Mar 2023
- 31.Ajunwa, I.: Algorithms at work: productivity monitoring applications and wearable technology as the new data-centric research agenda for employment and labor law. 63 St. Louis U. L.J. 21 (2019). https://ssrn.com/abstract=3247286 (2018). Accessed 1 Mar 2023
- 32.Wolf, C., Polonetsky, J., Finch, K.: A practical privacy paradigm for wearables. Future of Privacy Forum, 1, 4. https://fpf.org/wp-content/uploads/FPF-principles-for-wearables-Jan-2015 (2015)
- 33.Ennis, D.: Barclays faces $1.1B fine over alleged monitoring of employees. Banking Dive. https://www.bankingdive.com/news/barclays-fine-ICO-monitoring-employees/583231/ (2020). Accessed 1 Mar 2023
- 34.Stupp, C.: Monitoring of employees faces scrutiny in Europe. The Wall Street Journal. https://www.wsj.com/articles/monitoring-of-employees-faces-scrutiny-in-europe-11611138602 (2021). Accessed 1 Mar 2023
- 35.Salvi del Pero, A., Wyckoff, P., Vourc'h, A.: Using Artificial Intelligence in the workplace: What are the main ethical risks?. OECD social, employment and migration working papers, No. 273, OECD Publishing, Paris. 10.1787/840a2d9f-en (2022)
- 36.BBC: H&M fined for breaking GDPR over employee surveillance. https://www.bbc.com/news/technology-54418936 (2020). Accessed 1 Mar 2023
- 37.Feingold, S.: Could a webcam-on policy violate human rights? A Dutch court thinks so. World Economic Forum. https://www.weforum.org/agenda/2022/10/could-a-webcam-on-policy-violate-human-rights-a-dutch-court-thinks-so/ (2022). Accessed 1 Mar 2023
- 38.U.S. Code (42). 42 U.S.C. 2000e–2. https://uscode.house.gov/browse.xhtml. Accessed 1 Mar 2023
- 39.U.S. Code (42). 42 U.S.C. §§ 12101–213. https://uscode.house.gov/browse.xhtml. Accessed 1 Mar 2023
- 40.Sonderling, K.E., Kelley, B.J., Casimir, L.: The promise and the peril: artificial intelligence and employment discrimination. Univ. Miami Law Rev. 77, 1–87 (2022)
- 41.Manokha, I.: Surveillance, panopticism, and self-discipline in the digital age. Surveill. Soc. (2018). 10.24908/ss.v16i2.8346
- 42.Manokha, I.: The implications of digital employee monitoring and people analytics for power relations in the workplace. Surveill. Soc. (2020). 10.24908/ss.v18i4.13776
- 43.Sheng, E.: Employee privacy in the US is at stake as corporate surveillance technology monitors workers’ every move. CNBC. https://www.cnbc.com/2019/04/15/employee-privacy-is-at-stake-as-surveillance-tech-monitors-workers.html (2019). Accessed 1 Mar 2023
- 44.Corbett, J.: Walmart patents "Big Brother-Style" surveillance technology to eavesdrop on workers' conversations. Common Dreams. https://www.commondreams.org/news/2018/07/12/walmart-patents-big-brother-style-surveillance-technology-eavesdrop-workers (2018). Accessed 1 Mar 2023
- 45.Negrón, W.: Little Tech is Coming for you. Coworker.org. https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf (2021). Accessed 1 Mar 2023
- 46.Hanley, D.A., Hubbard, S.: Eyes Everywhere: Amazon's surveillance infrastructure and revitalizing worker power. Open Markets Institute. https://www.openmarketsinstitute.org/publications/eyes-everywhere-amazons-surveillance-infrastructure-and-revitalizing-worker-power (2020). Accessed 1 Mar 2023
- 47.Kelly, K.: The Pinkertons have a long, dark history of targeting workers. Teen Vogue. https://www.teenvogue.com/story/who-were-the-pinkertons (2020). Accessed 1 Mar 2023
- 48.U.S. Congress, Office of Technology Assessment: The Electronic Supervisor: New Technology, New Tensions, OTA-CIT-333. U.S. Government Printing Office, Washington, DC (1987)
- 49.Adler-Bell, S., Miller, M.: “Datafication of employment,” Report, Century Foundation. https://production-tcf.imgix.net/app/uploads/2018/12/03160631/the-datafication-of-employment.pdf (2019). Accessed 1 Mar 2023
- 50.Cater, L.: Spain approved a law protecting delivery workers. Here’s what you need to know. Politico. https://www.politico.eu/article/spain-approved-a-law-protecting-delivery-workers-heres-what-you-need-to-know (2021). Accessed 1 Mar 2023
- 51.Freund, J.: How we’re ramping up our enforcement of surveillance reporting. U.S. Department of Labor Blog. https://blog.dol.gov/2022/09/15/how-were-ramping-up-our-enforcement-of-surveillance-reporting (2022). Accessed 1 Mar 2023
- 52.National Labor Relations Board: NLRB general counsel issues memo on unlawful electronic surveillance and automated management practices. https://www.nlrb.gov/news-outreach/news-story/nlrb-general-counsel-issues-memo-on-unlawful-electronic-surveillance-and (2022). Accessed 1 Mar 2023
- 53.Pasquale, F.A., Citron, D.K.: Response and rejoinder, promoting innovation while preventing discrimination: policy goals for the scored society. 89 Washington Law Review. 1413. https://digitalcommons.law.uw.edu/wlr/vol89/iss4/11 (2014b). Accessed 1 Mar 2023
- 54.Colclough, C.: Towards workers’ data collectives. https://projects.itforchange.net/digital-new-deal/2020/10/22/towards-workers-data-collectives/ (2020). Accessed 1 Mar 2023
- 55.Vogell, H.: Rent going up? One company’s algorithm could be why. ProPublica. https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent (2022). Accessed 1 Mar 2023
- 56.Argyle: https://argyle.com/ and https://argyle.com/verify/employee. Accessed 1 Mar 2023
- 57.Tech Company News Editorial: Interview with the founders of argyle—powering the infrastructure for universal workforce data access. https://www.techcompanynews.com/interview-with-the-founders-of-argyle-powering-the-infrastructure-for-universal-workforce-data-access/ (2019). Accessed 1 Mar 2023
- 58.Hickok, M.: Public procurement of artificial intelligence systems: new risks and future proofing. AI Soc. (2022). 10.1007/s00146-022-01572-2
- 59.McIntyre, N., Bradbury, R.: An offshore workforce is training Amazon’s warehouse-monitoring algorithms. The Verge. https://www.theverge.com/2022/11/21/23466219/amazon-warehouse-surveillance-camera-offshore-workers-india-costa-rica (2022). Accessed 1 Mar 2023
- 60.Nguyen, A.: The constant boss: work under digital surveillance. Data and Society Research Institute. https://datasociety.net/wp-content/uploads/2021/05/The_Constant_Boss.pdf (2021). Accessed 1 Mar 2023
- 61.Mateescu, A., Nguyen, A.: Explainer: algorithmic management in the workplace. Data and Society Research Institute. https://datasociety.net/wp-content/uploads/2019/02/DS_Algorithmic_Management_Explainer.pdf (2019). Accessed 1 Mar 2023
- 62.Gamble, J.: The inequalities of workplace surveillance. The Nation. https://www.thenation.com/article/archive/worker-surveillance-big-data/ (2019). Accessed 1 Mar 2023
- 63.Crawford, K., Schultz, J.: Big data and due process: toward a framework to redress predictive privacy harms. Boston Coll. Law Rev. 55, 93–128 (2013).
- 64.Gandy, O.H., Jr.: The Panoptic Sort: A Political Economy of Personal Information, 2nd edn. Oxford University Press, New York (2021)
- 65.Hickok, M., Dorsey, C., O'Brien, T., Baur, D., Ingram, K., Chauhan, C., Gamundani, A.: Case study: the distilling of a biased algorithmic decision system through a business lens. 10.2139/ssrn.4019672 (2022)
- 66.Munger, C.: Tweet by @PoorCharlieBot. twitter.com/PoorCharlieBot/status/1529400245146333185 (2022)
- 67.Scherer, M., Brown, L.X.Z.: Warning: Bossware may be hazardous to your health. Center for Democracy and Technology. https://cdt.org/wp-content/uploads/2021/07/2021-07-29-Warning-Bossware-May-Be-Hazardous-To-Your-Health-Final.pdf (2021). Accessed 1 Mar 2023
- 68.Partnership on Employment & Accessible Technology (PEAT): AI and disability inclusion toolkit. Risks of bias and discrimination in AI hiring tools. https://www.peatworks.org/ai-disability-inclusion-toolkit/risks-of-bias-and-discrimination-in-ai-hiring-tools/ (2022). Accessed 1 Mar 2023
- 69.Coqual (formerly Center for Talent Innovation): Disabilities and Inclusion study. https://coqual.org/wp-content/uploads/2021/04/Disabilities-and-Inclusion-Press-Release-Updated.pdf (2021). Accessed 1 Mar 2023
- 70.Brickman v. Fitbit, Inc. Casetext. https://casetext.com/case/brickman-v-fitbit-inc. Accessed 1 Mar 2023
- 71.McLellan v. Fitbit, Inc. Casetext. https://casetext.com/case/mclellan-v-fitbit-inc. Accessed 1 Mar 2023
- 72.Fonseca, E.: Worker surveillance is on the rise, and has its roots in centuries of racism. TruthOut. https://truthout.org/articles/worker-surveillance-is-on-the-rise-and-has-its-roots-in-centuries-of-racism/ (2020). Accessed 1 Mar 2023
- 73.Parker, S.K., Knight, C., Keller, A.: Remote managers are having trust issues. Harvard Business Review. https://hbr.org/2020/07/remote-managers-are-having-trust-issues (2020). Accessed 1 Mar 2023
- 74.Microsoft: Work Trend Index Special Report. https://www.microsoft.com/en-us/worklab/work-trend-index/hybrid-work-is-just-work (2022). Accessed 1 Mar 2023
- 75.Hickok, M.: AI surveillance is not a solution for quiet quitting. Interfaces. University of Minnesota, Charles Babbage Institute. https://cse.umn.edu/cbi/interfaces (2022). Accessed 1 Mar 2023
- 76.Mims, C.: More bosses are spying on quiet quitters. It could backfire. Wall Street Journal. https://www.wsj.com/articles/more-bosses-are-spying-on-quiet-quitters-it-could-backfire-11663387216 (2022). Accessed 1 Mar 2023
- 77.Tomczak, D.: Your boss is watching you. Is that OK? American Psychological Association podcast. https://www.apa.org/news/podcasts/speaking-of-psychology/workplace-surveillance (2019). Accessed 1 Mar 2023
- 78.Human Impact Partners: The Public Health Crisis Hidden in Amazon Warehouses. Oakland, CA (2021)
- 79.Evans, W.: How Amazon hid its safety crisis. Reveal News. https://revealnews.org/article/how-amazon-hid-its-safety-crisis/ (2020). Accessed 1 Mar 2023
- 80.Garson, B.: The Electronic Sweatshop: How Computers are Transforming the Office of the Future into the Factory of the Past. Penguin Books, New York (1989)
- 81.Aiello, J.R., Kolb, K.J.: Electronic performance monitoring and social context: impact on productivity and stress. J. Appl. Psychol. 80(3), 339–353 (1995). 10.1037/0021-9010.80.3.339
- 82.International Labor Office: Psychosocial risks, stress and violence in the world of work. Int. J. Labour Res. 6(1–2). https://www.ilo.org/wcmsp5/groups/public/---ed_dialogue/---actrav/documents/publication/wcms_551796.pdf (2016). Accessed 1 Mar 2023
- 83.Furnham, A., Swami, V.: An investigation of attitudes toward surveillance at work and its correlates. Psychology 6, 1668–1675 (2015). 10.4236/psych.2015.613163
- 84.Dzieza, J.: How hard will the robots make us work? The Verge. https://www.theverge.com/2020/2/27/21155254/automation-robots-unemployment-jobs-vs-human-google-amazon (2020). Accessed 1 Mar 2023
- 85.Ball, K.: Workplace surveillance: an overview. Labor Hist. 51(1), 87–106 (2010). 10.1080/00236561003654776
- 86.Rosenblat, A., Kneese, T., Boyd, D.: Workplace Surveillance. Data and Society Research Institute. https://www.datasociety.net/pubs/fow/WorkplaceSurveillance.pdf (2014). Accessed 1 Mar 2023
- 87.Lodovici, S., et al.: The impact of teleworking and digital work on workers and society. Publication for the committee on Employment and Social Affairs, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament. https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662904/IPOL_STU(2021)662904_EN.pdf (2021). Accessed 1 Mar 2023
- 88.Kim, T., Werbach, K.: More than just a game: ethical issues in gamification. Ethics Inf. Technol. 18(2), 157–173 (2016). 10.1007/s10676-016-9401-5
- 89.Gabrielle, V.: The dark side of gamifying work. The Fast Company. https://www.fastcompany.com/90260703/the-dark-side-of-gamifying-work (2018). Accessed 1 Mar 2023
- 90.Zuboff, S.: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, New York (2019)
- 91.Citron, D.K., Pasquale, F.A.: The scored society: due process for automated predictions. Washington Law Review, Vol. 89. https://ssrn.com/abstract=2376209 (2014). Accessed 1 Mar 2023
- 92.Allen, A., Rotenberg, M.: Privacy Law and Society Casebook. West Academic Publishing, Saint Paul (2015)
- 93.European Union: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52012PC0011 (2016). Accessed 1 Mar 2023
- 94.U.S. Senate, Subcommittee on Employment and Productivity, of the Committee on Labor and Human Resources: Marc Rotenberg and Gary T. Marx testimony for Proposed S. 516 The Privacy for Consumers and Workers Act of 1991. http://web.mit.edu/gtmarx/www/labor_hr_testimony.pdf (1991). Accessed 1 Mar 2023
- 95.Hickok, M.: State of AI policy and regulations in employment decisions. Medium. https://medium.com/@MerveHickok/state-of-ai-policy-and-regulations-in-employment-decisions-88b1a546a7bf (2022). Accessed 1 Mar 2023
- 96.Center for AI and Digital Policy: Comments regarding the US Equal Employment Opportunity Commission (EEOC) Draft Strategic Enforcement Plan Docket number: EEOC-2022-0006. https://www.regulations.gov/comment/EEOC-2022-0006-0022 (2023). Accessed 1 Mar 2023
- 97.Jillson, E.: Aiming for truth, fairness, and equity in your company’s use of AI Federal Trade Commission business blog. https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai (2021). Accessed 1 Mar 2023
- 98.Colclough, C.J.: Digitalisation: a union action guide for public services, work and workers. Public Services International. https://publicservices.international/resources/digital-publication/digitalization-br-a-union-action-guide-for-public-services-work-and-workers?id=11767&lang=en (2021). Accessed 1 Mar 2023
- 99.Colclough, C.J.: Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table. Digital Work in the Planetary Market. The MIT Press–International Development Research Centre Series, Cambridge (2022)
- 100.Colclough, C.J.: Union brief—G7 digital policy priorities 2022. https://www.thewhynotlab.com/post/reminding-the-g7-workers-rights-are-human-rights (2022). Accessed 1 Mar 2023
- 101.Hickok, M.: Lessons learned from AI ethics principles for future actions. AI Ethics 1, 41–47 (2021). 10.1007/s43681-020-00008-1
- 102.Kresge, L.: Data and algorithms in the workplace: a primer on new technologies. Center for Labor Research and Education University of California, Berkeley. https://laborcenter.berkeley.edu/wp-content/uploads/2020/12/Working-Paper-Data-and-Algorithms-in-the-Workplace-A-Primer-on-New-Technologies-FINAL.pdf (2020). Accessed 1 Mar 2023