The European Journal of Public Health. 2019 Nov 18;29(Suppl 3):3–6. doi: 10.1093/eurpub/ckz160

The second information revolution: digitalization brings opportunities and concerns for public health

Martin McKee 1, May C I van Schalkwyk 1, David Stuckler 2
PMCID: PMC6859519  PMID: 31738440

Abstract

The spread of the written word, facilitated by the introduction of the printing press, was an information revolution with profound implications for European society. Now, a second information revolution is underway, a digital transformation that is shaping the way Europeans live and interact with each other and the world around them. We are confronted with an unprecedented expansion in ways to share and access information and experiences, to express ourselves and communicate. Yet while these changes have undoubtedly provided many benefits for health, from information sharing to improved surveillance and diagnostics, they also open up many potential threats. These come in many forms. Here we review some of the pressing issues of concern: discrimination; breaches of privacy; iatrogenesis; disinformation and misinformation, or ‘fake news’; and cyber-attacks. These have the potential to impact negatively on the health and wellbeing of individuals as well as entire communities and nations. We call for a concerted European response to maximize the benefits of the digital revolution while minimizing the harms, arguably one of the greatest challenges facing the public health community today.

An information revolution

Not for the first time, Europe is in the midst of an information revolution. Now it is digitalization. Before, it was printing. The introduction of a printing press with moveable type by Johannes Gutenberg, around 1450, would have profound consequences for society. In a classic text, Briggs and Burke describe how the spread of print media challenged existing structures of power and hierarchy and encouraged a process of enquiry that would give rise to a diversity of views.1 These developments fuelled the Enlightenment and the growth of knowledge that characterized it. They empowered people, allowing them to communicate radical ideas more effectively and to question accepted wisdom. But on occasion, the printed word was used for less benevolent purposes. In one of the earliest examples of what we might now call fake news, English pamphleteers printed scurrilous allegations about Marie Antoinette, primarily for the purposes of blackmail, in which they succeeded, extracting large sums from Louis XVI, while also contributing to the bloodletting that accompanied the French Revolution.2

Nowadays, print media continues to play these roles, but it is increasingly being replaced by digital media, in what has been described as the second information revolution.3 In some respects, this is just another way of disseminating words and pictures. In others, it is radically different. First, the advent of Web 2.0 allows anyone to publish content. This is no longer the preserve of those who own the printing presses. Second, and as a result, there has been a massive expansion in the nature and quantity of information that is published. Once, people would confide their innermost secrets to their diary, to be shared, if at all, with a few close friends or after their death. Now, many aspects of their lives are recorded, in real time and in intimate detail, and shared with the world. This includes data on their movements, tracked by geolocation services embedded in their mobile phones; their physiological parameters, collected by a growing range of wearable technologies; and their interests and ideas, captured by tracking technology embedded in search engines, among much else. Third, while a more open approach to publishing has brought huge benefits, exemplified by the knowledge contained within Wikipedia, it has also created many opportunities for those promoting disinformation. Finally, those who might once have read a printed book in private, for example, if it contained seditious material, risk losing that privacy, often within a matter of seconds. The content that is shared may also have implications for an individual moving between countries with different approaches to legislating for or censoring social media platforms.4 Some risk being targeted for their presumed political views, whether by a repressive state or by an algorithm that uses their interests to direct information to them, reinforcing their existing beliefs and polarizing attitudes within society.

Benefits of digitalization

These developments have important implications for health. Many are beneficial. Thus, the growth in digital information can contribute to the generation and sharing of knowledge. Those with rare diseases can come together over vast distances, creating a community that can share experiences and insights. Patients with chronic conditions can become much better informed about their disease, including ways of adapting to its impact on them.

Patients (and the public) can also contribute their data to improved understanding of disease, including insights into aetiology, diagnosis and possible avenues for treatment. For example, monitoring of Internet traffic is being used to provide early warning of disease outbreaks.5 When linked to other data, such as patterns of mobility obtained from mobile phone records or meteorological data, it can even enhance models used to predict outbreaks.6,7 Clinical data, drawn from large populations, can be used by artificial intelligence applications to discern patterns, thereby improving prognostic tools.8 Clinical data can also be fed into machine learning applications that allow automation of some diagnostic processes, especially those involving image processing, in areas such as pathology, radiology, dermatology and ophthalmology.9 New forms of wearable technology and other mobile devices may offer opportunities for disease prevention, although so far rigorous evidence of their effectiveness is lacking.10

Analysis of internet searches, which offer pointers to issues that are concerning people in real time, is increasingly being used to provide clues to emerging health trends long before they become apparent in traditional data sources. Thus, internet searches for suicide-related terms, which can be obtained almost instantaneously, correlate with actual suicides in young people, for which data may only become available after several years.11 The uses of internet search data are limited only by the imagination of researchers.12 For example, a concern about a possible drug interaction was supported by the finding that people in widely scattered locations had been searching in combination for the products involved.13 Searches can also reveal things that individuals might be reluctant to disclose in research using more traditional methods. An example is the identification of an association between racism in the USA, captured by searches for the ‘N-word’, and black/white disparities in mortality.14 Another example is a study that showed how discussion of Adderall, a stimulant used widely by university students in the USA, peaked during exam periods and was concentrated in communities hosting leading universities.15
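To make the mechanics concrete, the sketch below shows the kind of simple correlation analysis that underpins such studies, pairing an annual search-interest index (as might be exported from a tool such as Google Trends) with official counts. It is a minimal illustration only: all figures are synthetic, and the pandas/scipy calls are just one of several ways such an analysis could be coded.

```python
# Minimal sketch: correlating an annual search-interest index (e.g. exported
# from a tool such as Google Trends) with official statistics.
# All data below are synthetic and purely illustrative.
import pandas as pd
from scipy.stats import pearsonr

data = pd.DataFrame({
    "year":            [2010, 2011, 2012, 2013, 2014, 2015],
    "search_interest": [38,   42,   47,   45,   53,   58],   # index, 0-100
    "official_count":  [410,  430,  455,  448,  490,  520],  # registered events
})

r, p = pearsonr(data["search_interest"], data["official_count"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

In practice, search data are noisy and only a proxy for behaviour, so such correlations are a prompt for further investigation rather than a substitute for registry data.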

Monitoring of social media can offer insights into how people understand health-related conditions, using content and sentiment analysis to highlight how key issues are framed and discussed, thereby informing the development of health promotion material, as well as identifying and responding to key influencers,16 including those celebrities who are paid to promote health-damaging products.17
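As an illustration of the simplest form of such analysis, the sketch below scores invented posts against a tiny hand-made sentiment lexicon. It is purely a teaching example: real studies rely on validated lexicons or trained classifiers and far larger corpora.

```python
# Minimal sketch of lexicon-based sentiment scoring of short posts.
# The lexicon and posts below are invented; real analyses use validated
# lexicons (or trained classifiers) and far larger corpora.
POSITIVE = {"safe", "effective", "protects", "recommend"}
NEGATIVE = {"dangerous", "toxic", "scam", "harmful"}

def sentiment_score(text: str) -> int:
    """Return (#positive - #negative) lexicon hits in a post."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "The vaccine is safe and effective, I recommend it",
    "This stuff is toxic and dangerous, a total scam",
]
for post in posts:
    print(f"{sentiment_score(post):+d}  {post}")
```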

On a more practical level, advances in digital technology provide new ways of interacting with health services, including direct booking of appointments, ordering of repeat prescriptions and diagnostic kits (e.g. for HIV self-testing) and, in some cases, remote consultations or real-time surgical advice using packages such as Skype. There is also growing recognition of the potential benefits and risks of using digital messaging apps, such as WhatsApp, to facilitate communication within the healthcare setting as well as for enhancing medical education.18

Five concerns about digitalization

There are, however, some concerns and, in some cases, threats posed by the digital revolution. One is whether health systems have the capacity, in terms of human resources and governance structures, to take advantage of the opportunities set out above, as discussed in a recent report by the UK Health Foundation.19 But beyond that, we can identify five issues that require attention: discrimination; breaches of privacy; iatrogenesis; disinformation and misinformation, or ‘fake news’; and cyber-attacks. The list is far from exhaustive but serves to illustrate the scale of the challenges facing the public health community in this digital revolution.

Among the more benign is the potential for inadvertent discrimination. This can occur when an algorithm replicates human behaviour that, consciously or unconsciously, discriminates on grounds of, for example, ethnicity. In one highly cited study, a computer was programmed to learn English by trawling through vast quantities of text. It learnt to associate male names with career-related terms and female names with family-related terms; European names were associated with pleasant terms and African–American names with unpleasant ones.20 Problems can also arise from the use of unrepresentative data to generate the algorithms. A standard database used to develop commercial facial recognition tools in the USA underrepresents females and people with dark skin, and the resulting tools perform much worse on the faces of dark-skinned females.21 This can have important consequences when, for example, innocent individuals are misidentified as wanted criminals.
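The mechanism behind the word-embedding finding can be illustrated in a few lines of code. The sketch below measures a career-versus-family association for two names using cosine similarity, in the spirit of the association test used in the cited study; the three-dimensional vectors are invented purely to show the mechanics, whereas real tests use embeddings trained on large text corpora.

```python
# Minimal sketch of a word-embedding association test: words that co-occur in
# similar contexts end up with similar vectors, so learned associations can be
# measured with cosine similarity. The toy 3-dimensional vectors are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "john":   np.array([0.9, 0.1, 0.2]),   # hypothetical embedding
    "amy":    np.array([0.2, 0.9, 0.1]),   # hypothetical embedding
    "career": np.array([1.0, 0.0, 0.1]),
    "family": np.array([0.0, 1.0, 0.1]),
}

for name in ("john", "amy"):
    bias = cosine(emb[name], emb["career"]) - cosine(emb[name], emb["family"])
    print(f"{name}: career-vs-family association = {bias:+.2f}")
```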

There is also scope for intentional discrimination.22 The US investigative journalism organization ProPublica demonstrated that it could restrict advertisements on Facebook for attractive rental properties in New York to exclude African–Americans, Jews and anyone who had shown an interest in aids for disabled people.23 ProPublica has also revealed that TurboTax manipulated Google and other search engines, using code to hide its free filing service, essentially deceiving lower-income Americans into paying to file their taxes despite being eligible to do so free of charge.24 In these ways, digitalization can widen inequalities.

Finally, there is a concern that interventions based on eHealth may be taken up most by those whose needs are least,25 while those who are unfamiliar with new technology, such as some older people, become resentful at being excluded from these advances.26 As with many information interventions, there is a risk that eHealth could widen inequalities in health between those with high and low education.27

The second area of concern is the potential for breaches of privacy. Public health researchers face formidable obstacles when conducting surveys, especially when studying sensitive issues. Yet at the same time, social media companies are gathering vast quantities of information about their users, including not only what is posted on their particular platform but also information harvested from numerous linked sources, including geolocation. Advances in machine learning now make it possible to assemble extremely detailed profiles of many aspects of the lives of those who have any significant online presence.28 The scale of this activity was revealed by investigations into the use of precisely targeted advertisements on Facebook during a number of electoral events, including the 2016 US Presidential election, the UK’s EU referendum and elections in Kenya and South Africa. This also relates to the previous point, as only the intended recipients of the messages may see them, so that those seeking to counter disinformation, considered further below, may be unaware of them. These tactics have often been used to advance causes or parties that oppose public health policies.

In recent years, companies have exploited the benefits (to them) of digital transactions, assisted by the increased use of credit and loyalty cards. One notorious example, described in a detailed, although now dated, review of marketing methods, was where a company sent a teenager pregnancy-related material, having predicted her condition from her purchases, before she had disclosed it to her parents.29 This is an area where progress is extremely rapid, assisted by techniques such as facial and voice recognition. For example, Amazon has patented the concept of analyzing users’ moods, facilitated by the continuous monitoring of conversations by its Alexa device, so as to allow targeted advertisements adjusted to how one is feeling.30 The increased use of methods such as these has clear implications for those researching the emerging field of corporate determinants of health.31 Indeed, the social media companies are now becoming the subjects of public health research themselves, studied by those seeking to understand the influence that they, and their clients, wield. Companies such as Facebook, Google and Twitter hold an effective monopoly on their technology. While governments increasingly recognize the need to address this issue, what constitutes an optimal approach continues to be debated.32

The third concern is what has been termed ‘e-iatrogenesis’, defined as ‘patient harm caused at least in part by the application of health information technology’.33 This is not new. The advent of more sophisticated imaging techniques has led to many people undergoing complex invasive investigations for ‘anomalies’ that we now know are simply variants of normal anatomy. As noted above, machine learning and artificial intelligence hold considerable potential in the field of diagnosis, especially where it is important to be able to recognize complex patterns.34,35 However, it is essential that claims made by those promoting these approaches are evaluated, just as for any other form of health technology. In particular, there is a risk of unintended consequences. For example, when skilled human observers were presented with images already annotated by computers, their accuracy was reduced.36 Particular criticism has been directed at an app being promoted within the English National Health Service, with examples circulating on social media of some frankly bizarre and, in some cases, potentially dangerous advice, leading to complaints to the medical device regulator.37 There is also considerable potential for companies making health-related items to design apps that promote their products, in a way that is analogous to how pharmaceutical companies have sought to stretch the definition of disease, in some cases manufacturing new ‘illnesses’.

The fourth concern is that what is spread on digital media can be seriously misleading. A recent UK Parliament report on ‘fake news’ identified two different types of incorrect information:38 disinformation, which is intentionally designed to mislead, and misinformation, which is incorrect but spread without intent to deceive. Vaccination has been among the most intensively targeted issues, to the extent that it was selected by the US Department of Defense for a competition to find the best way to identify ‘influence bots’.39 Much of the online content on vaccination is misleading, and incorrect messages are consistently found to be liked and shared more often than those that are accurate.40,41 But where do these messages come from? One very detailed study of vaccine-related posts on Twitter provides some answers.42 It found three types of account that were especially likely to spread vaccine-related disinformation. The first was Russian trolls, identified from lists compiled by US authorities that point to links with the Russian Internet Research Agency, an organization implicated in electoral interference in several countries, including the USA and UK. Using the hashtag #VaccinateUS, they disseminated pro- and anti-vaccination messages. This was consistent with their approach to other issues, such as gun control and the #BlackLivesMatter movement, with messages that differed from other sources by linking vaccines with particularly divisive topics such as race and religion. The second comprised what were termed ‘sophisticated bots’, which automatically promote particular types of content. Again, they included pro- and anti-vaccine messages, often with the apparent goal of encouraging people to believe that the medical community is divided. A third group comprised ‘content polluters’, using anti-vaccine messages that encourage curiosity. Many of these are used to spread malware or act as clickbait, directing readers to sites that generate income. A subsequent paper by some of the same authors provides a comprehensive taxonomy of the diverse range of malicious actors on Twitter.43

Clearly it is necessary to respond to the viral spread of mis- and disinformation, but this is not always easy. For example, challenging incorrect anti-vaccine beliefs held by those who believe that the pharmaceutical industry manipulates data is not made easier by the few high-profile cases where companies have been shown to do so. Some responses could perversely ‘backfire’,44 paradoxically increasing the propensity of some people to believe the false message.45,46 It may be more effective not to engage with the details but instead to appeal to values. Similarly, it is important not to suggest that a belief that is actually uncommon is widely held.47 These are just a few examples of what is now a rapidly growing body of research with which public health professionals will need to become familiar.48,49

The public health community must also brace itself for online attacks, with concerted campaigns of often deeply unpleasant and personal abuse.50 Public health organizations and training bodies should therefore seek to train their staff to be effective advocates on social media while providing them with support and guidance on how to do so safely.

The final concern considered here, and one that could become a serious threat to health at an individual level, relates to the risk that machine learning applications will be subject to adversarial attacks. These can take several forms. Because these applications continuously learn from the data fed into them, there is scope to influence the algorithms with fraudulently manipulated inputs, which can take the form of written or audible text or images.51 Such attacks could be undertaken for many different purposes, such as attempted extortion from the owner of a diagnostic system, although most attention has been directed at the potential to circumvent systems for authorizing treatment for billing in the US health system. Thus, one recent paper showed how easy it was to manipulate the pixels in an image of a mole to change the automated assessment of whether it was likely to be benign or malignant, with the change imperceptible even to skilled observers. Another form of attack was the use of the WannaCry ransomware, which temporarily paralyzed large parts of the English National Health Service in 2017.52
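The principle behind such input manipulation can be shown with a toy example. The sketch below applies a gradient-sign perturbation to a deliberately simple linear ‘classifier’; it is not the method used in the cited paper, but it illustrates how changing every input value by a visually negligible amount can shift a model’s output decisively.

```python
# Minimal sketch of a gradient-sign adversarial perturbation against a toy
# linear model (score = w . x + b; a positive score stands in for 'malignant').
# Purely illustrative: real attacks target deep networks, but the principle is
# the same - nudge each pixel slightly in the direction that raises the score.
import numpy as np

n = 10_000                                        # number of "pixels"
w = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)    # fixed toy model weights
b = -1.0
x = np.full(n, 0.5)                               # a flat grey image, scale 0..1

def score(img):
    """Positive score => the toy model labels the lesion 'malignant'."""
    return float(w @ img + b)

eps = 0.001                                       # per-pixel change: 0.1% of range
x_adv = x + eps * np.sign(w)                      # gradient of score w.r.t. x is w

print("original score:      ", score(x))          # -1.0 -> benign
print("perturbed score:     ", score(x_adv))      # +9.0 -> malignant
print("largest pixel change:", np.max(np.abs(x_adv - x)))   # 0.001
```

Because the shift is spread across thousands of pixels, each individual change is far below what a human observer could notice, yet the aggregate effect on the model is large.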

Conclusions

As this brief review shows, the digital revolution provides opportunities to improve health, but also threats. We have witnessed a profound expansion of opportunities to share and access information and experiences, to express ourselves and to communicate with each other 24 h a day and over vast distances. This has simultaneously led to a marked increase in opportunities to manipulate and deceive, thereby rendering people’s health and wellbeing vulnerable to novel threats. Maximizing the benefits while minimizing the harms is arguably one of the greatest challenges facing the public health community. Crucially, there is a need to address the power imbalances and inequalities that determine who benefits and who is harmed. Open and transparent discussion is essential to begin addressing these issues. We hope that this paper will contribute to this dialogue and encourage robust responses, including governance structures that are appropriate for these emerging challenges, just as there are for other health issues. No one country can tackle these issues on its own. The European Commission’s Expert Panel on Investing in Health has recently published a detailed report on digitalization and health, which provides recommendations on how to maximize the opportunities and minimize the harms.53 This is a good start, but continued concerted action by the European institutions, drawing on the expertise and authority of the health and information technology sectors and working with national and European agencies engaged in intelligence and cyber security, will be essential.

Conflicts of interest: None declared.

References

  • 1. Briggs A, Burke P. A Social History of the Media: From Gutenberg to the Internet. London: Polity, 2009.
  • 2. Burrows S. Blackmail, Scandal and Revolution: London’s French Libellistes, 1758-92. Manchester: Manchester University Press, 2006.
  • 3. Brock GW. The Second Information Revolution. Cambridge, MA: Harvard University Press, 2009.
  • 4. Associated Press. British Woman Faces Jail in Dubai for ‘Insulting’ Ex-Husband’s New Wife on Facebook. The Guardian; 2019. Available at: http://www.theguardian.com/world/2019/apr/08/british-woman-faces-jail-in-dubai-for-insulting-ex-husbands-new-wife-on-facebook (9 May 2019 last date accessed).
  • 5. Chan EH, Brewer TF, Madoff LC, et al. Global capacity for emerging infectious disease detection. Proc Natl Acad Sci USA 2010;107:21701–6.
  • 6. Wesolowski A, Qureshi T, Boni MF, et al. Impact of human mobility on the emergence of dengue epidemics in Pakistan. Proc Natl Acad Sci USA 2015;112:11887–92.
  • 7. Pastorino R, de Vito C, Migliara G, et al. Benefits and challenges of Big Data in healthcare: an overview of the European initiatives. Eur J Public Health 2019;29(Suppl 3):23–7.
  • 8. Dumas F, Bougouin W, Cariou A. Cardiac arrest: prediction models in the early phase of hospitalization. Curr Opin Crit Care 2019;25:204–10.
  • 9. Ching T, Himmelstein DS, Beaulieu-Jones BK, et al. Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface 2018;15. doi: 10.1098/rsif.2017.0387.
  • 10. Petit A, Cambon L. Exploratory study of the implications of research on the use of smart connected devices for prevention: a scoping review. BMC Public Health 2016;16:552.
  • 11. Arora VS, Stuckler D, McKee M. Tracking search engine queries for suicide in the United Kingdom, 2004-2013. Public Health 2016;137:147–53.
  • 12. Arora VS, McKee M, Stuckler D. Google Trends: opportunities and limitations in health and health policy research. Health Policy 2019;123:338–41.
  • 13. White RW, Tatonetti NP, Shah NH, et al. Web-scale pharmacovigilance: listening to signals from the crowd. J Am Med Inform Assoc 2013;20:404–8.
  • 14. Chae DH, Clouston S, Hatzenbuehler ML, et al. Association between an internet-based measure of area racism and black mortality. PLoS One 2015;10:e0122963.
  • 15. Hanson CL, Burton SH, Giraud-Carrier C, et al. Tweaking and tweeting: exploring Twitter for nonmedical use of a psychostimulant drug (Adderall) among college students. J Med Internet Res 2013;15:e62.
  • 16. Sinnenberg L, Buttenheim AM, Padrez K, et al. Twitter as a tool for health research: a systematic review. Am J Public Health 2017;107:e1–e8.
  • 17. Kozinets R. How Social Media is Helping Big Tobacco Hook a New Generation of Smokers. The Conversation; 2019. Available at: http://theconversation.com/how-social-media-is-helping-big-tobacco-hook-a-new-generation-of-smokers-112911 (9 May 2019 last date accessed).
  • 18. Raiman L, Antbring R, Mahmood A. WhatsApp messenger as a tool to supplement medical education for medical students on clinical attachment. BMC Med Educ 2017;17:7.
  • 19. Bardsley M, Steventon A, Fothergill G. Untapped Potential: Investing in Health and Care Data Analytics. London: Health Foundation, 2019.
  • 20. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science 2017;356:183–6.
  • 21. Buolamwini J, Gebru T. Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability and Transparency, New York, NY, 2018.
  • 22. Brall C, Schroder-Back P, Maeckelberghe E. Ethical aspects of digital health from a justice point of view. Eur J Public Health 2019;29(Suppl 3):18–22.
  • 23. Angwin J, Tobin A, Varner M. Facebook (Still) Letting Housing Advertisers Exclude Users by Race. ProPublica; 2017. Available at: https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin.
  • 24. Elliott J. TurboTax Deliberately Hid Its Free File Page From Search Engines. ProPublica; 2019. Available at: https://www.propublica.org/article/turbotax-deliberately-hides-its-free-file-page-from-search-engines/amp.
  • 25. Reiners F, Sturm J, Bouw LJW, Wouters E. Sociodemographic factors influencing the use of eHealth in people with chronic diseases. Int J Environ Res Public Health 2019;16:645.
  • 26. Ball C, Francis J, Huang KT, et al. The physical-digital divide: exploring the social gap between digital natives and physical natives. J Appl Gerontol 2019;38:1167–84.
  • 27. Azzopardi-Muscat N, Sorensen K. Towards an equitable digital public health era: promoting equity through a health literacy perspective. Eur J Public Health 2019;29(Suppl 3):13–7.
  • 28. Gu X, Yang H, Tang J, et al. Profiling Web users using big data. Soc Netw Anal Min 2018;8:24.
  • 29. Duhigg C. How Companies Learn Your Secrets. The New York Times; 2012. Available at: https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html (9 May 2019 last date accessed).
  • 30. Fussell S. Alexa Wants to Know How You’re Feeling Today. The Atlantic; 2018. Available at: https://www.theatlantic.com/technology/archive/2018/10/alexa-emotion-detection-ai-surveillance/572884/ (9 May 2019 last date accessed).
  • 31. McKee M, Stuckler D. Revisiting the corporate and commercial determinants of health. Am J Public Health 2018;108:1167–70.
  • 32. Fernandes C. The EU’s Plan to Rein in Facebook and Google will do Exactly the Opposite. The Guardian; 2019. Available at: http://www.theguardian.com/commentisfree/2019/mar/09/eu-plan-facebook-google-online-copyright-law (9 May 2019 last date accessed).
  • 33. Weiner JP, Kfuri T, Chan K, Fowles JB. “e-Iatrogenesis”: the most critical unintended consequence of CPOE and other HIT. J Am Med Inform Assoc 2007;14:387–8.
  • 34. Kapoor R, Walters SP, Al-Aswad LA. The current state of artificial intelligence in ophthalmology. Surv Ophthalmol 2019;64:233–40.
  • 35. Das N, Topalovic M, Janssens W. Artificial intelligence in diagnosis of obstructive lung disease: current status and future potential. Curr Opin Pulm Med 2018;24:117–23.
  • 36. Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA 2017;318:517–8.
  • 37. Ram A, Neville S. High-Profile Health App Under Scrutiny After Doctors’ Complaints. Financial Times; 2018. Available at: https://www.ft.com/content/19dc6b7e-8529-11e8-96dd-fa565ec55929 (9 May 2019 last date accessed).
  • 38. Digital, Culture, Media and Sport Committee. Disinformation and ‘fake news’: Final Report. Eighth Report of Session 2017–19. London: UK Parliament, 2019.
  • 39. Subrahmanian V, Azaria A, Durst S, et al. The DARPA Twitter bot challenge. Computer 2016;49:38–46.
  • 40. Keelan J, Pavri-Garcia V, Tomlinson G, Wilson K. YouTube as a source of information on immunization: a content analysis. JAMA 2007;298:2482–4.
  • 41. Donzelli G, Palomba G, Federigi I, et al. Misinformation on vaccination: a quantitative analysis of YouTube videos. Hum Vaccin Immunother 2018;14:1654–9.
  • 42. Broniatowski DA, Jamison AM, Qi S, et al. Weaponized health communication: Twitter Bots and Russian trolls amplify the vaccine debate. Am J Public Health 2018;108:1378–84.
  • 43. Jamison AM, Broniatowski DA, Quinn SC. Malicious actors on Twitter: a guide for public health researchers. Am J Public Health 2019;109:688–92.
  • 44. Rossen I, Hurlstone MJ, Lawrence C. Going with the grain of cognition: applying insights from psychology to build support for childhood vaccination. Front Psychol 2016;7:1483.
  • 45. Nyhan B, Reifler J, Richey S, Freed GL. Effective messages in vaccine promotion: a randomized trial. Pediatrics 2014;133:e835–42.
  • 46. Skurnik I, Yoon C, Park DC, Schwarz N. How warnings about false claims become recommendations. J Consum Res 2005;31:713–24.
  • 47. Cialdini RB, Demaine LJ, Sagarin BJ, et al. Managing social norms for persuasive impact. Soc Influence 2006;1:3–15.
  • 48. Merchant RM, Asch DA. Protecting the value of medical science in the age of social media and “fake news”. JAMA 2018;320:2415.
  • 49. Chou WS, Oh A, Klein W. Addressing health-related misinformation on social media. JAMA 2018;320:2417.
  • 50. McKee M. Social media attacks on public health advocates. BMJ 2014;349:g6006.
  • 51. Biggio B, Roli F. Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognition 2018;84:317–31.
  • 52. Ehrenfeld JM. WannaCry, cybersecurity and health information technology: a time to act. J Med Syst 2017;41:104.
  • 53. Expert Panel on Effective Ways of Investing in Health. Assessing the Impact of Digital Transformation of Health Services. Brussels: European Commission, 2019.
