In recent years, there has been growing recognition of “corporate surveillance” practices, where companies collect, sell, analyze, and combine personal data on a large scale (Cyphers, 2019). In addition to personal data that directly contains behavioral and health information, companies can apply analytics to a range of consumer data, such as grocery purchases and location information, in order to generate health inferences, such as risk of depression or diabetes (Allen, 2018). Much of the conversation regarding the ethics of corporate surveillance of health-related data understandably focuses on data protection and privacy. However, there are instances where consumer data could potentially be of value for public health purposes or for identifying individuals with heightened health risks. Levinson et al. discussed factors for assessing the ethical and legal duties of public health agencies engaged in surveillance of a high-risk behavior, such as the harm principle and the liberty-limiting continuum. As situations arise in which the data collected by corporations can be used to address public health concerns, these factors are relevant to considering the potential obligations of companies and to formulating guidance and frameworks regarding whether and how consumer data may be used ethically to address urgent public health issues.
In early 2019, Facebook debuted an AI tool to identify posts on its site indicating that a person was at high risk of suicide. Development of the tool followed a number of incidents in which Facebook users posted suicide notes or even livestreamed suicide attempts (Ellyatt, 2017). Critics of Facebook’s suicide prevention tool have pointed out areas of ethical concern, including that the tool could be considered medical research yet does not provide protections generally required of research, such as consent or accountability (Thielking, 2019). For example, Facebook’s suicide prevention tool does not allow users to opt out, and information regarding the accuracy of the algorithm or the outcomes of Facebook’s intervention process is not available for outside evaluation (Marks, 2019).
At the same time, some of the considerations raised by Levinson et al. regarding public health obligations to intervene are relevant to Facebook’s situation. Suicide prevention is a significant public health issue. The history of Facebook posts declaring suicidal intent, as well as the availability of relevant user data and technical expertise, could arguably point towards a need to use Facebook’s data to take steps to identify and address suicide risk. Facebook will not be the only organization to face these kinds of questions regarding personal data that could be used to identify and address health information involving a high risk of harm. Not only do health and other consumer apps collect large amounts of personal data, but universities and other educational institutions potentially have access to student data through devices such as university key cards that can be used to produce behavioral inferences (Castle, 2018; Paul, 2020). There is a need for further evaluation and guidance regarding whether, in cases of significant public health concern, companies or institutions could have obligations to utilize people’s personal data to prevent harm. The Facebook suicide prevention tool also highlights the need for such interventions to be conducted in a transparent and scientifically rigorous manner that gives sufficient consideration to issues of consent, stakeholder involvement in identifying suitable processes for handling risk once it is evaluated, and third-party evaluation of the algorithm and its outcomes.
The need to develop effective contact tracing efforts for COVID-19 has also raised questions regarding the ethical use of consumer data for public health purposes. For example, Unacast and Google used smartphone location data to create publicly available sites for tracking how different regions were complying with social distancing provisions (Fowler, 2020). Apple and Google created platforms for digital contact tracing apps that were meant to preserve individual privacy through a decentralized system for collection of de-identified data (Greenberg, 2020). The focus on data protection for these digital contact tracing projects, however, meant that insufficient attention was given to public health frameworks for evaluating trade-offs between privacy and other ethical values during a public health emergency (Martinez-Martin et al., 2020).
In the current pandemic, the potential public health benefit of consumer data could outweigh the privacy concerns, although measures should still be in place to minimize potential harms from the data use. South Korea used data gathered from corporate and government sources for a contact tracing program that has largely been considered successful at limiting the spread of COVID-19, and strict transparency and limitations on the collected personal data were key aspects of the program’s success (Kluth, 2020). There are certainly valid concerns regarding the potential harms from corporate and government surveillance. Facebook and Google have both faced criticism for previous incidents in which consumer and health data was handled inappropriately (Newcomb, 2018; Wakabayashi, 2019). Regulation to prevent companies from improper use of consumer data and sunset provisions for the public health use of consumer data are some ways that privacy concerns could be addressed (Mello & Wang, 2020).
These examples of digital contact tracing programs and Facebook’s AI tool illustrate the need to develop further guidance for situations in which companies may arguably have obligations to utilize people’s personal data to address an urgent public health need. Laying the foundation for collaboration between companies holding relevant consumer data and public health agencies will be important. Beyond regulation and guidance targeting privacy concerns, there is a need to formulate recommendations and standards regarding transparency and accountability when consumer data is used for health interventions, as well as processes for evaluating the safety and effectiveness of the resulting interventions.
References
- Allen, Marshall. “Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates.” ProPublica, July 17, 2018. https://www.propublica.org/article/health-insurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates?token=Gg58888u2U5db3W3CsuKrD0LD_VQJReQ.
- Castle, Lauren. “How a UA Professor Uses Student-ID Card Data to Help Predict Dropouts.” The Arizona Republic. Accessed July 28, 2020. https://www.azcentral.com/story/news/local/arizona-education/2018/03/26/university-arizona-predict-dropouts-student-id-card-data/420348002/.
- Cyphers, Bennett. “Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance.” Electronic Frontier Foundation, December 2, 2019. https://www.eff.org/wp/behind-the-one-way-mirror.
- Ellyatt, Holly. “Facebook Turns to A.I. to Help Prevent Suicides.” CNBC, November 28, 2017. https://www.cnbc.com/2017/11/28/facebook-turns-to-a-i-to-help-prevent-suicides.html.
- Fowler, Geoffrey A. “Smartphone Data Reveal Which Americans Are Social Distancing (and Not).” Washington Post, March 24, 2020. https://www.washingtonpost.com/technology/2020/03/24/social-distancing-maps-cellphone-location/.
- Greenberg, Andy. “Apple and Google Respond to Covid-19 Contact Tracing Concerns.” Wired, April 17, 2020. https://www.wired.com/story/apple-google-contact-tracing-strengths-weaknesses/.
- Kluth, Andrew. “If We Must Build a Surveillance State, Let’s Do It Properly.” Bloomberg.com, April 21, 2020. https://www.bloomberg.com/opinion/articles/2020-04-22/taiwan-offers-the-best-model-for-coronavirus-data-tracking.
- Marks, Mason. “Artificial Intelligence Based Suicide Prediction.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, January 29, 2019. https://papers.ssrn.com/abstract=3324874.
- Martinez-Martin, Nicole, Sarah Wieten, David Magnus, and Mildred K. Cho. “Digital Contact Tracing, Privacy, and Public Health.” Hastings Center Report 50, no. 3 (2020): 43–46. https://doi.org/10.1002/hast.1131.
- Mello, Michelle M., and C. Jason Wang. “Ethics and Governance for Digital Disease Surveillance.” Science 368, no. 6494 (2020): 951–954. https://doi.org/10.1126/science.abb9045.
- Newcomb, Alyssa. “A Timeline of Facebook’s Privacy Issues — and Its Responses.” NBC News, March 24, 2018. https://www.nbcnews.com/tech/social-media/timeline-facebook-s-privacy-issues-its-responses-n859651.
- Paul, Deanna. “Colleges Want Freshmen to Use Mental Health Apps. But Are They Risking Students’ Privacy?” Washington Post, January 2, 2020. https://www.washingtonpost.com/technology/2019/12/27/colleges-want-freshmen-use-mental-health-apps-are-they-risking-students-privacy/.
- Thielking, Megan. “Experts Raise Questions about Facebook’s Suicide Prevention Tools.” STAT, February 11, 2019. https://www.statnews.com/2019/02/11/facebook-suicide-prevention-tools-ethics-privacy/.
- Wakabayashi, Daisuke. “Google and the University of Chicago Are Sued Over Data Sharing.” New York Times, June 26, 2019. https://www.nytimes.com/2019/06/26/technology/google-university-chicago-data-sharing-lawsuit.html.
