Author manuscript; available in PMC: 2023 Nov 22.
Published in final edited form as: Sci Commun. 2023 Apr 4;45(4):539–554. doi: 10.1177/10755470231162634

Science Communication as a Collective Intelligence Endeavor: A Manifesto and Examples for Implementation

Dawn Holford 1, Angelo Fasce 2, Katy Tapper 3, Miso Demko 4, Stephan Lewandowsky 1, Ulrike Hahn 5, Christoph M Abels 6, Ahmed Al-Rawi 7, Sameer Alladin 1, T Sonia Boender 8, Hendrik Bruns 9, Helen Fischer 10, Christian Gilde 11, Paul H P Hanel 12, Stefan M Herzog 13, Astrid Kause 14, Sune Lehmann 15, Matthew S Nurse 16, Caroline Orr 17, Niccolò Pescetelli 18, Maria Petrescu 19, Sunita Sah 20, Philipp Schmid 21, Miroslav Sirota 12, Marlene Wulf 13
PMCID: PMC7615322  EMSID: EMS190993  PMID: 37994373

Abstract

Effective science communication is challenging when scientific messages are informed by a continually updating evidence base and must often compete against misinformation. We argue that we need a new program of science communication as collective intelligence—a collaborative approach, supported by technology. This would have four key advantages over the typical model where scientists communicate as individuals: scientific messages would be informed by (a) a wider base of aggregated knowledge, (b) contributions from a diverse scientific community, (c) participatory input from stakeholders, and (d) better responsiveness to ongoing changes in the state of knowledge.

Keywords: science communication, collective intelligence, epistemic diversity, knowledge aggregation, participatory input, knowledge updating


Many of the pressing challenges that societies face today, from climate change to global pandemics, require decisions informed by the best available scientific evidence. Ideally, citizens should have access to good quality scientific knowledge that they can trust. However, citizens may have difficulties accessing scientific information and grasping the technical terms used. Some of the difficulty can be mitigated by a better style of science communication, for example, using clearer and jargon-free language (Hanel & Mehler, 2019; Martínez & Mammola, 2021), more intuitive presentation formats (Sirota & Juanchich, 2019), effective graphics (Harold et al., 2016), and narratives that resonate with people (Freling et al., 2020). Similarly, there is a case for supporting people’s competencies to critically engage with information (Brodsky et al., 2021; Hertwig & Grüne-Yanoff, 2017). While these aspects are important, it is also essential to consider the content of these messages: what is the best evidence and who is involved in generating it. Scientific knowledge is continually updating, and new evidence now emerges rapidly, with gaps, uncertainties, and ambiguities in the data and its interpretation. A new program of science communication is needed that can address these complexities and derive clear messages that (a) reflect the best available evidence and (b) are delivered in a way that maintains public trust.

Currently, individual scientists are incentivized to rapidly disseminate their findings, often at the expense of quality control (Higginson & Munafò, 2016). This can harm the reliability of scientific messages as well as public trust in them. Furthermore, scientific messages compete in a contested and complex online landscape that favors partisanship over reasoned debate (Lorenz-Spreen et al., 2020). Especially where evidence conflicts with political or commercial interests, organized efforts to misinform, sow public confusion, or advance conspiracy theories have distorted public discourse (Koehler, 2016), threatened evidence-based policy making (Vériter et al., 2020), and personally targeted individual prominent scientists (Mann, 2015). In this commentary, we argue that to combat the challenges of today’s information landscape, science communication must go beyond “one-person reporting” and harness the collective knowledge and expertise of many scientists worldwide to provide high quality information and engage with stakeholders. In short, we propose to approach science communication as a collective intelligence process.

In its broadest form, “collective intelligence” can be seen as a collaborative approach to problem-solving, typically supported by technological tools, which allows for real-time co-ordination and mobilization of knowledge that is distributed among many individuals (Suran et al., 2021). To some extent, the scientific process already embeds collective intelligence, as scientific knowledge is informed by reasoned argument between scientists, generating better outputs through peer evaluation and debate (Mercier, 2016). Here, we focus on harnessing the most advantageous characteristics of existing collective intelligence systems that would benefit science communication (see, e.g., online Supplementary Table). We explain why and how these characteristics could be an effective way to address specific obstacles present in the traditional, “one-person reporting” model of science communication.

Aggregating Distributed Knowledge

Collective intelligence can help science communication by aggregating knowledge that is distributed among individual scientists. First, aggregating data and evidence can build a more complete picture of the current state of scientific inquiry, leading to more confidence in the reliability of a scientific proposition. For example, distributed networks of laboratories can aggregate samples for an experimental protocol, spreading the time and labor costs of data collection and evidence syntheses (Coles et al., 2022). Monitoring and aggregating evidence can also increasingly be done in real time with new Artificial Intelligence (AI) tools, for example, using machine learning to screen databases for relevant evidence.

Second, aggregating independent expert judgments can mitigate bias in evidence interpretation and enhance accurate assessment. Furthermore, communicating judgments that fairly represent those of a collective avoids the false balance that may be presented if an audience only hears from a few, unrepresentative experts (Koehler, 2016). Showing the distribution of judgments can highlight when there is a consensus or, when judgments differ, it can illustrate the uncertainties involved in interpreting the available evidence and experts’ level of confidence in the state of knowledge. Critically, technologically supported aggregation methods allow experts to add their judgments independently, reducing the risk of biases that can be introduced through group processes.
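To make this concrete, the minimal Python sketch below (with made-up numbers, not data from any real elicitation) summarizes independently elicited expert probability judgments by their median and interquartile range, so that the spread of opinion can be communicated alongside the central estimate. The function name and figures are illustrative only.

```python
from statistics import quantiles

def aggregate_judgments(estimates):
    """Summarize independently elicited expert estimates (e.g., each expert's
    probability that a proposition is true) without group discussion."""
    q1, q2, q3 = quantiles(estimates, n=4)  # quartiles of the judgment distribution
    return {
        "median": q2,              # robust central estimate to communicate
        "iqr": (q1, q3),           # spread conveys how much experts (dis)agree
        "n_experts": len(estimates),
    }

# Ten hypothetical probability judgments, collected independently
summary = aggregate_judgments([0.7, 0.8, 0.75, 0.9, 0.65, 0.7, 0.85, 0.8, 0.7, 0.75])
print(summary["median"], summary["iqr"])
```

Reporting the interquartile range alongside the median is one simple way to show audiences whether the experts broadly agree or whether substantial uncertainty remains.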

Third, aggregating expert discourse, that is, discussion of the evidence, can showcase how reasoned argument between scientists informs scientific knowledge. This can be as critical as the evidence itself, especially in crisis situations where action must be taken as evidence emerges. New digital tools for judgment aggregation in the civic participation sphere provide comprehensive packages for debating, proposing, and voting on initiatives and data (e.g., Pol.is, PSi, Loomio, Consul, Decidim). These could be leveraged for communicating scientific discourse.

There are of course costs to setting up aggregation systems. To aggregate data and evidence, protocols must be developed and shared with participating researchers. Evidence quality must also be assessed to avoid undermining the accumulated knowledge base with the inclusion of unreliable data (Royal Society, 2018). When aggregating judgments and discourse, the expertise of those who are contributing needs to be verified and contributors should be representative of their collective field of research, to avoid those with vested interests gaming the power of scientific consensus (Cook et al., 2018).

Despite the costs, aggregation is highly beneficial. Communicating in terms of the “collective accumulated evidence” shifts the message toward what the best available evidence indicates. This can help resist arguments that science has not “proved” an effect (Oreskes & Conway, 2010). It is also harder for those interested in discrediting science to carry out ad hominem attacks on collective evidence from a group of scientists (Mann, 2015). Furthermore, accumulated evidence can make a scientific consensus more visible, which is important because well-communicated scientific consensus has influenced decision-making, shifted the public’s attitudes, and strengthened calls for policy action across various domains (e.g., Bartoš et al., 2022; Budescu & Chen, 2014; Kerr & van der Linden, 2022), even for partisan individuals or those who tend to be predisposed toward rejecting scientific evidence (Lewandowsky et al., 2012). In areas where consensus has yet to form, aggregation can advance science by exposing areas in which further evidence is needed (Minas & Jorm, 2010).

Involving a More Diverse Group of Individuals

To optimize the quality of aggregated evidence and a scientific consensus, collective intelligence should increase the diversity of contributions. First, diversity in ideas (e.g., epistemic diversity) tends to invite greater scrutiny, increasing the robustness of scientific inquiry (Pesonen, 2022). Involving more diverse perspectives may help scientists challenge cognitive biases when seeking or interpreting evidence. Second, diversity in representation can boost the reach and effectiveness of science communication, especially when it comes to producing messages that the public trusts. Historically, a lack of diversity in science and research has perpetuated inequalities and contributed to the marginalization of voices from groups such as women, minority groups, and citizens of countries in the Global South (Mertkan et al., 2017). This can undermine trust in science, especially among communities that have experienced discrimination in the past (Woolf et al., 2021).

Diversity needs to be deliberately engineered, because biases embedded in the values and norms of contemporary society can easily be overlooked. It is necessary to review processes, such as consensus-building, information gatekeeping, and sensemaking, and to establish transparent frameworks for incorporating diversity in these processes (Thapar-Björkert & Farahani, 2019). For example, frameworks for inclusion can specify how experts will be invited or selected to contribute (e.g., by issuing invitations to all identified experts in the domain, regardless of their opinions on an issue). Although frameworks do not guarantee diversity, they make a lack of diverse representation more noticeable. A transparent framework for inclusion that discloses who the experts are and why they were chosen can also help verify expertise and avoid a “manufactured” collective scientific position from nonexperts (e.g., Cook et al., 2018).

Designing for diversity in the scientific collective also requires constructive spaces for deliberation, critique, and debate—discourse that is essential to knowledge-building—which support diverse participation. These spaces should be built around critiquing ideas rather than individuals, with recognized codes of conduct for respectful engagement. They should encourage scholars with opposing perspectives to collaborate rather than compete. Although no platform yet exists that promotes such behavior in online academic discourse, some researchers are considering how older methods to elicit, aggregate, and discuss expert opinions could be harnessed as a model for shaping scientific discourse among diverse experts. Tools to scale up such processes could soon provide online infrastructure to visualize and convey the inputs to and outcomes of the consensus.

Increasing Public Participation

By definition, collective intelligence is participatory, leveraging the involvement of many individuals to produce outputs. Thus far, we have discussed the participatory input of experts in generating the scientific knowledge that underpins science messages. However, science communication should also be informed by the people it will impact (Priest, 2018). Participatory input from citizens can help shape research to address the needs of those affected by it (Bruin & Bostrom, 2013). It can also generate interest and understanding from the public in how the research is conducted and evaluated (Bonney et al., 2015), thereby building trust in scientific messages (Bedessem et al., 2021). Increasingly, technological interfaces allow the public to participate in many ways. Participation can be active, for example, by acting as “citizen scientists” (Silvertown, 2009) or as part of a mass monitoring system. The public can also passively inform scientists through their collective online discourse: such “social listening” has enabled science communicators to tackle misinformation outbreaks by targeting information provision to the public’s needs (World Health Organization, 2021).

The accessibility of scientific findings is a precondition for reaping some of the benefits of public participation, such as a more knowledgeable citizenry. Accessibility can mean making research available. Researchers are increasingly doing so through “pre-prints,” that is, draft-level papers submitted to a publicly accessible server. In theory, this gives the public early sight of findings, but pre-prints can be mistaken for established scientific fact or weaponized to support a certain narrative (Bajak & Howe, 2020). Hence, they should only be considered as emerging evidence in an aggregated system, and this needs to be clearly indicated on the pre-print platforms and papers. Accessibility also means making research comprehensible. Openly published articles (pre-prints or otherwise) often remain inaccessible to the public because of their technical language and general complexity, limiting informed discussion to scientists and small segments of the public (e.g., science journalists, think tanks, and policymakers). Increasing accessibility could involve writing plain-language summaries of papers (Stoll et al., 2022). It could involve supporting citizens’ skills to engage with information, identify good quality evidence, and spot misleading argumentation (Brodsky et al., 2021; Roozenbeek et al., 2022). Scientific publications could even be augmented with technological tools that indicate how findings correspond to the broader literature or how samples should be structured for this kind of research. Accessibility could also be enhanced with collective projects to communicate the state of the evidence in comprehensible language. Ultimately, scientists have a duty to make research available and comprehensible to the public that provides them with funding and academic freedom (Greenwood & Riordan, 2001).

Improving Responsiveness

It can be difficult to identify relevant evidence and judge its quality when evidence emerges rapidly, especially during a crisis, when scientists may accelerate research production and dissemination (Fraser et al., 2021). Collective intelligence can leverage technology to enable real-time information monitoring, thereby enhancing the responsiveness of science communication to updates and changes. Traditional evidence syntheses are lengthy processes that often exclude the most recent studies, which were not yet published when the synthesis was conducted. In contrast, AI can enable dynamic evidence synthesis, with some promising examples already emerging across different domains. In such systems, after establishing the criteria for subsequent studies to be included, researchers can regularly monitor new publications and update their syntheses in real time.
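As a minimal illustration of how such a dynamic synthesis pipeline might screen a publication feed, the Python sketch below uses simple keyword matching against pre-registered inclusion and exclusion criteria as a stand-in for the machine-learning classifiers such systems would employ. All record titles and criteria are hypothetical.

```python
def screen_records(records, inclusion_terms, exclusion_terms):
    """Flag newly published records whose titles match a synthesis's
    pre-registered inclusion criteria (a crude stand-in for a trained
    classifier that would score full abstracts)."""
    included = []
    for rec in records:
        title = rec["title"].lower()
        if any(t in title for t in inclusion_terms) and not any(
            t in title for t in exclusion_terms
        ):
            included.append(rec)
    return included

# Hypothetical records from a daily database feed
feed = [
    {"id": 1, "title": "Masking and transmission: a randomized trial"},
    {"id": 2, "title": "Protein folding dynamics in yeast"},
    {"id": 3, "title": "Masking attitudes: an opinion poll (retracted)"},
]
new_evidence = screen_records(feed, inclusion_terms=["masking"],
                              exclusion_terms=["retracted"])
print([r["id"] for r in new_evidence])
```

In a real system, the screened records would then feed into a continuously updated synthesis rather than a one-off review.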

Collective intelligence could also increase the responsiveness of evaluating new information. Emerging scientific papers typically undergo independent critique, or “peer review,” but this process is notoriously slow. During the Covid-19 pandemic, researchers collectively responded by accelerating some peer review processes and, more commonly, openly sharing early-stage research as pre-prints. Not all rapid publication was helpful to the pandemic response, but some did provide valuable updates to inform decision-making (Fraser et al., 2021). Identifying and accelerating the review of better quality pre-prints could thus improve the responsiveness of science in times of crisis. A collective intelligence system could organize and support scientific evaluation of pre-prints, for example, by identifying potential reviewers through network analysis (Rodriguez & Bollen, 2008), or detecting information manipulation and erroneous statistical analyses (Henman, 2020).
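As a toy illustration of reviewer identification via network information (the cited network-analysis approaches are considerably more sophisticated), the sketch below ranks candidate reviewers by how often they authored the works a pre-print cites, excluding the pre-print's own authors to preserve independence. The names and the mapping structure are invented for the example.

```python
from collections import Counter

def suggest_reviewers(citation_graph, preprint_refs, authors, k=2):
    """Rank candidate reviewers by how many of a pre-print's cited works
    they authored (a crude proxy for topical closeness), excluding the
    pre-print's own authors to preserve independence."""
    counts = Counter(
        author
        for ref in preprint_refs
        for author in citation_graph.get(ref, [])
        if author not in authors
    )
    return [name for name, _ in counts.most_common(k)]

# Hypothetical mapping from each cited work to its authors
graph = {
    "paper_a": ["rivera", "chen"],
    "paper_b": ["chen", "okafor"],
    "paper_c": ["okafor", "chen", "holm"],
}
print(suggest_reviewers(graph, ["paper_a", "paper_b", "paper_c"],
                        authors={"rivera"}))
```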

However, AI cannot fully replace the human contributions needed for quality assurance. AI-supported tools to facilitate quicker pre-print or post-publication review by the scientific collective exist, but sustaining motivation to contribute collectively to this work over the longer term is difficult. This may in part be due to a lack of incentives. For example, academics often cite lack of time as the main reason for declining reviews, but it takes much less time to review a manuscript (typically hours) than to produce a new piece of research (typically months). Despite the critical contribution of peer review to the scientific process, it is not incentivized in the publication structure, nor by most employers. The same goes for maintaining contributions to consensus-building and communicating consensus. The recent Covid-19 crisis provided a glimpse of how a motivated scientific collective could produce, evaluate, and communicate research in a highly responsive fashion. However, this effort has been hard to sustain 2 years later. Harnessing the ability of collective intelligence to respond to crises and fast-paced research thus requires an overall structural change within the scientific community to better reward collective knowledge processes over individual efforts.

Implementing Collective Intelligence in Science Communication: An Example

This commentary is itself a product of our experience harnessing collective intelligence processes to create a “Manifesto for Science Communication as Collective Intelligence.” We used group discussions and interactive online discourse via the tool pol.is to gather insights from attendees at an open virtual workshop on the topic. We then invited everyone to craft the manifesto, either as co-ordinating lead authors (“CLAs,” n = 6) or contributing authors (n = 18). CLAs collectively voted on how to organize the points raised at the workshop. Each CLA then led a group of authors to draft a section of the manifesto. The CLAs condensed this draft into its key propositions and, using pol.is, all authors voted on which propositions from the draft were critical for the manifesto. Propositions with > 60% of votes were organized into the final Manifesto, which presented eight necessary features for science communication as collective intelligence. Altogether, we engaged a diverse group of researchers, captured and aggregated their judgments and discourse in an iterative fashion, and generated a consensus for communication. The full process is shared online as part of the Manifesto (https://scibeh.org/manifesto).
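The voting step described above can be illustrated with a short Python sketch that tallies agree/disagree ballots per proposition and keeps those passing the 60% threshold. The ballot counts here are invented for illustration, not the actual Manifesto votes.

```python
def select_propositions(votes, threshold=0.6):
    """Keep propositions endorsed by more than `threshold` of voters.
    `votes` maps each proposition to a list of agree/disagree booleans."""
    return [
        prop
        for prop, ballots in votes.items()
        if sum(ballots) / len(ballots) > threshold
    ]

# Hypothetical ballots from 24 authors on three draft propositions
votes = {
    "Aggregate expert judgments":  [True] * 20 + [False] * 4,   # 83% agree
    "Use only pre-prints":         [True] * 10 + [False] * 14,  # 42% agree
    "Engage diverse stakeholders": [True] * 16 + [False] * 8,   # 67% agree
}
print(select_propositions(votes))
```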

Conclusion

In this commentary, we highlighted the impetus for science communication to move away from a model where scientists disseminate individual findings and adopt a collective communication program that (a) develops messages from a wider base of aggregated evidence, judgments, and discourse, (b) is informed by a diverse community, (c) involves participation from stakeholders, and (d) is responsive to ongoing changes in the state of knowledge. In the online supplementary table, we provide examples that concretize how this new program would leverage collective processes, supported by participatory technology, in pursuit of a more collaborative form of science communication. While no single example (including our own) managed to harness all the advantages we describe in this commentary, they provide a glimpse of how collective processes are already enhancing the way in which scientists gather data, reach consensus, and communicate it. We hope that in the near future, more tools and examples will emerge to support a program of science communication as collective intelligence.

Supplementary Material

Supplementary information

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Holford, Fasce, Lewandowsky and Schmid were supported by the European Commission Research and Innovation Horizon 2020 Grant 964728 (JITSUVAX). Lewandowsky and Abels were supported by the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO). Lewandowsky was supported by the Humboldt Foundation through a research award. Hahn was supported by the Arts and Humanities Research Council (AHRC), UKRI. Herzog was supported by Deutsche Forschungsgemeinschaft (DFG) Grant 458366841 (POLTOOLS). Lehmann was supported by the Villum Foundation (34288). Nurse was supported by the Australian Government Research Training Program (RTP) scholarship. Sirota was supported by the European Commission Research and Innovation Horizon 2020 Grant 101016967 (YUFERING).

Biographies

Author Biographies

Dawn Holford is a senior research associate in the School of Psychological Science at the University of Bristol. She studies the sociocognitive aspects of communication and information processing. Her current work focuses on interventions for tackling misinformation, understanding the roots of anti-vaccination beliefs, and the role of empathy when communicating about vaccination, especially in health care settings.

Angelo Fasce is a philosopher and postdoctoral researcher in the Faculty of Medicine of the University of Coimbra. He studies the cognitive basis of anti-scientific beliefs (e.g., alternative medicine and psychology, climate change denial, conspiracy theories, and vaccine hesitancy) and authoritarianism from a theoretical and empirical perspective.

Katy Tapper is a reader (associate professor) in psychology at City, University of London. She is interested in how we change health-related behaviors, particularly those linked to healthy eating and weight management. Her experimental research is aimed at identifying and understanding variables that influence behavior. This helps inform her applied work that focuses on the development and evaluation of health interventions for both adults and children.

Miso Demko worked at the Dynamic Decision-Making Laboratory at Carnegie Mellon University as a social media manager and research communicator. In this role, he explored the metaphor of the lab as a place of “knowledge-production,” by connecting the elements of theory, experimental procedure, and scientific discourse forums and innovating the standard research paper format to engage different communities (audiences) with the work of the lab.

Stephan Lewandowsky is a cognitive scientist at the University of Bristol. His research examines people’s memory, decision-making, and knowledge structures, with a particular emphasis on how people update their memories if information they consider to be true turns out to be false. His research currently focuses on the persistence of misinformation and spread of “fake news” in society, including conspiracy theories. He is particularly interested in the variables that determine whether or not people accept scientific evidence, for example, surrounding vaccinations or climate science.

Ulrike Hahn holds a chair in Computational Modeling in the Department of Psychological Sciences, Birkbeck College, University of London, where she directs the Center for Cognition Computation and Modeling.

Christoph M. Abels is a post-doctoral fellow at the Faculty of Human Sciences at the University of Potsdam. His research examines individuals’ susceptibility and resilience to misinformation, and how this evidence can inform policy making.

Ahmed Al-Rawi is an associate professor of News, Social Media, and Public Communication at the School of Communication at Simon Fraser University, Canada. He is the director of the Disinformation Project that empirically examines fake news discourses in Canada on social media and news media. His research expertise is related to social media, news, and global communication with emphasis on Canada and the Middle East.

Sameer Alladin is a doctoral student in the School of Psychological Science at the University of Bristol. He studies the interplay between gastric and neural physiology in disgust, using machine learning to align electrophysiological recordings from stomach and brain, and tests how these are impacted by pharmacological manipulations. He is also more broadly interested in decision-making, and music perception.

T. Sonia Boender is a health scientist and field epidemiologist at the Risk Communications group of the Robert Koch Institute, Germany’s national public health institute. Her applied research focuses on managing the infodemic, bridging epidemiology, and communication.

Hendrik Bruns is a policy analyst at the European Commission Joint Research Center, Brussels. He works on aspects related to the environment, climate change, and misinformation from a behavioral science perspective. His work focuses on generating behavioral insights to support policymaking.

Helen Fischer is a postdoctoral researcher at the Leibniz Institut für Wissensmedien, Tübingen, Germany. Her research focuses on the metacognitive processes involved in forming beliefs and spreading information about politicized science, such as climate change.

Christian Gilde is a professor of business at the University of Montana Western. He conducts research in the areas of management and consumer studies. His research has translated into numerous journal articles and books on business and consumer science. His current projects center on decision-making, collective intelligence, and consumer sociology.

Paul H. P. Hanel is a lecturer (assistant professor) in psychology at the University of Essex, UK. A significant part of his empirical work includes human values (e.g., freedom, loyalty, security). Currently, he is especially interested in similarities between groups of people (e.g., women and men) as well as the effects of working remotely on well-being and productivity.

Stefan M. Herzog is a senior research scientist in the Adaptive Rationality Research Center at the Max Planck Institute for Human Development, Berlin. He studies how to boost human judgment and decision-making by understanding human and machine behavior and how humans themselves understand machines. He combines insights and methods from cognitive science, collective intelligence (“wisdom of crowds”), heuristics, and algorithms. He also works on applications in digital environments, medical decision-making, and meteorology. He co-leads an initiative on reconfiguring behavioral science for crisis knowledge management in response to COVID-19 and other, future disruptive events.

Astrid Kause is a psychologist and holds a junior professorship for Sustainability Science and Psychology at the Leuphana University Lüneburg. She studies perceptions and communications of risks and uncertainty related to climate and the environment, climate policies and health. Her aim is to provide empirical evidence for transparent communications as well as behavioral interventions contributing to a rapid societal transition toward sustainability.

Sune Lehmann is a professor of networks and complexity science at the Technical University of Denmark. He is also a professor of data science at the Center for Social Data Science at the University of Copenhagen. He works with large-scale behavioral data covering topics, such as human mobility, sleep, academic performance, complex contagion, epidemic spreading, and behavior on Twitter.

Matthew S. Nurse is a doctoral student in science communication at the Australian National Centre for the Public Awareness of Science at the Australian National University. His research interests include strategic science communication, misinformation and behavioral sciences. He has more than 20 years of professional experience in communication strategy and emergency communication.

Caroline Orr is a behavioral scientist and postdoctoral research associate at the University of Maryland’s Applied Research Laboratory for Intelligence and Security (ARLIS). Her research focuses on cognitive security, health misinformation, behavior change theory, and the role of online experiences in radicalization and extremism.

Niccolo Pescetelli is an assistant professor of cyberpsychology at the Collective Intelligence Lab at the New Jersey Institute of Technology. He is the founder and chief scientist of PSi, a voice-based collective intelligence platform for large-scale stakeholder feedback and engagement. His research focuses on decision-making and information processes in social contexts. He is interested in how people interacting together share, transform, and integrate information in order to make individual and collective decisions, often achieving outstanding results. He investigates the properties of networks of recursively interacting agents by studying their behavior and dynamics in opinion space.

Maria Petrescu is an assistant professor of marketing at Embry-Riddle Aeronautical University. She obtained her PhD in Business Administration and Marketing from Florida Atlantic University. Her research interests are related to marketing analytics, collective intelligence, and artificial intelligence in marketing. She is also co-editor of the Journal of Marketing Analytics.

Sunita Sah is director of Cornell University’s Academic Leadership Institute and a professor in the management and organizations group at Cornell University’s College of Business. A physician turned organizational psychologist, she has published widely on decision-making, ethical action, conflicts of interest, trust, and how we respond to outside influence.

Philipp Schmid is a psychologist and postdoctoral researcher at the University of Erfurt, Germany. He studies the psychology of science denialism and health misinformation and aims to support people’s informed decision-making in health, for example, vaccination. He applies a persuasion psychology perspective to understand the impact of misinformation in health communication and to develop and evaluate promising interventions.

Miroslav Sirota is a reader (associate professor) in psychology at the University of Essex. His research interests include uncertainty and risk communication, social and cognitive processes underlying reasoning, medical decision-making with a focus on antibiotic expectations and use, and implementation of open science.

Marlene Wulf is a master’s student at the University of Tübingen studying Education Science and Psychology. She also works as a research assistant in the group “Adaptive Rationality” at the Max Planck Institute for Human Development in Berlin. Her current projects focus on decision-making, boosting, and misinformation.

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Bajak A, Howe J. A study said Covid wasn’t that deadly. The right seized it; 2020. https://www.nytimes.com/2020/05/14/opinion/coronavirus-research-misinformation.html
  2. Bartoš V, Bauer M, Cahlíková J, Chytilová J. Communicating doctors’ consensus persistently increases Covid-19 vaccinations. Nature. 2022;606(7914):542–549. doi: 10.1038/s41586-022-04805-y
  3. Bedessem B, Gawrońska-Novak B, Lis P. Can citizen science increase trust in research? A case study of delineating Polish metropolitan areas. Journal of Contemporary European Research. 2021;17:304–321. doi: 10.30950/JCER.V17I2.1185
  4. Bonney R, Phillips TB, Ballard HL, Enck JW. Can citizen science enhance public understanding of science? Public Understanding of Science. 2015;25:2–16. doi: 10.1177/0963662515607406
  5. Brodsky JE, Brooks PJ, Scimeca D, Galati P, Todorova R, Caulfield M. Associations between online instruction in lateral reading strategies and fact-checking Covid-19 news among college students. AERA Open. 2021;7. doi: 10.1177/23328584211038937
  6. Bruine de Bruin W, Bostrom A. Assessing what to address in science communication. Proceedings of the National Academy of Sciences of the United States of America. 2013;110:14062–14068. doi: 10.1073/PNAS.1212729110
  7. Budescu DV, Chen E. Identifying expertise to extract the wisdom of crowds. Management Science. 2014;61:267–280. doi: 10.1287/MNSC.2014.1909
  8. Coles NA, Hamlin JK, Sullivan LL, Parker TH, Altschul D. Build up big-team science. Nature. 2022;601(7894):505–507. doi: 10.1038/d41586-022-00150-2
  9. Cook J, van der Linden S, Maibach E, Lewandowsky S. The consensus handbook. Center for Climate Change Communication; 2018. https://www.climatechangecommunication.org/the-consensus-handbook/
  10. Fraser N, Brierley L, Dey G, Polka JK, Pálfy M, Nanni F, Coates JA. The evolving role of preprints in the dissemination of Covid-19 research and their impact on the science communication landscape. PLOS Biology. 2021;19:e3000959. doi: 10.1371/JOURNAL.PBIO.3000959
  11. Freling TH, Yang Z, Saini R, Itani OS, Abualsamh RR. When poignant stories outweigh cold hard facts: A meta-analysis of the anecdotal bias. Organizational Behavior and Human Decision Processes. 2020;160:51–67. doi: 10.1016/J.OBHDP.2020.01.006
  12. Greenwood MR, Riordan DG. Civic scientist/civic duty. Science Communication. 2001;23:28–40. doi: 10.1177/1075547001023001003
  13. Hanel PH, Mehler DM. Beyond reporting statistical significance: Identifying informative effect sizes to improve scientific communication. Public Understanding of Science. 2019;28:468–485. doi: 10.1177/0963662519834193
  14. Harold J, Lorenzoni I, Shipley TF, Coventry KR. Cognitive and psychological science insights to improve climate change data visualization. Nature Climate Change. 2016;6(12):1080–1089. doi: 10.1038/nclimate3162
  15. Henman P. Improving public services using artificial intelligence: Possibilities, pitfalls, governance. Asia Pacific Journal of Public Administration. 2020;42:209–221. doi: 10.1080/23276665.2020.1816188
  16. Hertwig R, Grüne-Yanoff T. Nudging and boosting: Steering or empowering good decisions. Perspectives on Psychological Science. 2017;12(6):973–986. doi: 10.1177/1745691617702496
  17. Higginson AD, Munafò MR. Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLOS Biology. 2016;14:e2000995. doi: 10.1371/JOURNAL.PBIO.2000995
  18. Kerr JR, van der Linden S. Communicating expert consensus increases personal support for Covid-19 mitigation policies. Journal of Applied Social Psychology. 2022;52:15–29. doi: 10.1111/JASP.12827
  19. Koehler DJ. Can journalistic “false balance” distort public perception of consensus in expert opinion? Journal of Experimental Psychology: Applied. 2016;22:24–38. doi: 10.1037/XAP0000073
  20. Lewandowsky S, Gignac GE, Vaughan S. The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change. 2012;3(4):399–404. doi: 10.1038/nclimate1720
  21. Lorenz-Spreen P, Lewandowsky S, Sunstein CR, Hertwig R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour. 2020;4(11):1102–1109. doi: 10.1038/s41562-020-0889-7
  22. Mann ME. The Serengeti strategy: How special interests try to intimidate scientists, and how best to fight back. Bulletin of the Atomic Scientists. 2015;71:33–45. doi: 10.1177/0096340214563674
  23. Martínez A, Mammola S. Specialized terminology reduces the number of citations of scientific papers. Proceedings of the Royal Society B. 2021;288:20202581. doi: 10.1098/RSPB.2020.2581
  24. Mercier H. The argumentative theory: Predictions and empirical evidence. Trends in Cognitive Sciences. 2016;20:689–700. doi: 10.1016/J.TICS.2016.07.001
  25. Mertkan S, Arsan N, Cavlan GI, Aliusta GO. Diversity and equality in academic publishing: The case of educational leadership. Compare: A Journal of Comparative and International Education. 2017;47:46–61. doi: 10.1080/03057925.2015.1136924
  26. Minas H, Jorm AF. Where there is no evidence: Use of expert consensus methods to fill the evidence gap in low-income countries and cultural minorities. International Journal of Mental Health Systems. 2010;4(1):1–6. doi: 10.1186/1752-4458-4-33
  27. Oreskes N, Conway E. Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury Publishing; 2010.
  28. Pesonen R. Argumentation, cognition, and the epistemic benefits of cognitive diversity. Synthese. 2022;200(4):1–17. doi: 10.1007/S11229-022-03786-9
  29. Priest S. Communicating climate change and other evidence-based controversies: Challenges to ethics in practice. In: Priest S, Goodwin J, Dahlstrom MF, editors. Ethics and practice in science communication. University of Chicago Press; 2018. pp. 54–73.
  30. Rodriguez MA, Bollen J. An algorithm to determine peer-reviewers. Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM ’08; New York, NY, United States. 2008. pp. 319–328.
  31. Roozenbeek J, van der Linden S, Goldberg B, Rathje S, Lewandowsky S. Psychological inoculation improves resilience against misinformation on social media. Science Advances. 2022;8:eabo6254. doi: 10.1126/SCIADV.ABO6254
  32. Royal Society. Evidence synthesis for policy. 2018. https://royalsociety.org/topics-policy/projects/evidence-synthesis/
  33. Silvertown J. A new dawn for citizen science. Trends in Ecology and Evolution. 2009;24:467–471. doi: 10.1016/j.tree.2009.03.017
  34. Sirota M, Juanchich M. Ratio format shapes health decisions: The practical significance of the “1-in-X” effect. Medical Decision Making. 2019;39:32–40. doi: 10.1177/0272989X18814256
  35. Stoll M, Kerwer M, Lieb K, Chasiotis A. Plain language summaries: A systematic review of theory, guidelines and empirical research. PLOS ONE. 2022;17:e0268789. doi: 10.1371/JOURNAL.PONE.0268789
  36. Suran S, Pattanaik V, Draheim D. Frameworks for collective intelligence. ACM Computing Surveys. 2021;53(1):1–36. doi: 10.1145/3368986
  37. Thapar-Björkert S, Farahani F. Epistemic modalities of racialised knowledge production in the Swedish academy. Ethnic and Racial Studies. 2019;42:214–232. doi: 10.1080/01419870.2019.1649440
  38. Vériter SL, Bjola C, Koops JA. Tackling Covid-19 disinformation: Internal and external challenges for the European Union. The Hague Journal of Diplomacy. 2020;15:569–582. doi: 10.1163/1871191X-BJA10046
  39. Woolf K, McManus IC, Martin CA, Nellums LB, Guyatt AL, Melbourne C, Bryant L, Gogoi M, Wobi F, Al-Oraibi A, Hassan O, et al. Ethnic differences in SARS-CoV-2 vaccine hesitancy in United Kingdom healthcare workers: Results from the UK-REACH prospective nationwide cohort study. The Lancet Regional Health—Europe. 2021;9:100180. doi: 10.1016/J.LANEPE.2021.100180
  40. World Health Organization. An overview of infodemic management during Covid-19, January 2020–May 2021. 2021. https://www.who.int/health-topics/infodemic#tab=tab_1

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Supplementary information
