Proceedings of the National Academy of Sciences of the United States of America
2025 Jun 30;122(27):e2400928121. doi: 10.1073/pnas.2400928121

Our changing information ecosystem for science and why it matters for effective science communication

Nicole M Krause a,1, Isabelle Freiling b, Dietram A Scheufele a,c
PMCID: PMC12260430  PMID: 40587798

Abstract

Current information ecologies present unique opportunities to communicate science and engage diverse publics in science. Unfortunately, they also present unique challenges. Here, we outline how the public sphere for science is transforming as media evolve, and we connect these changes to the high-stakes issue context of COVID-19. We argue that scientific organizations’ struggles to adapt to evolving media are linked, in part, to asymmetries of data access, in which social media platforms prevent researchers from producing reliable data that could inform institutional change and improve science communication. This has been apparent in studies of echo chambers and filter bubbles. Producing a more usable evidence base, we conclude, will require that scholars a) obtain access to proprietary data, b) reconceptualize information ecologies as social systems, c) avoid ceding core research tasks to platforms, d) address ethical issues, and e) grapple with the urgency of moving forward productively.

Keywords: science communication, social media, emerging information ecologies, research ethics, misinformation


For decades, online information environments have presented the institutional ecosystem of modern science with challenges and opportunities (1) for developing “different and closer links with the general population” (2). As communication media proliferate and transform, the public sphere for science is no longer a carefully choreographed interplay of traditional science journalism and outreach by scientific institutions. Instead, public debates about science are also shaped by an ever-changing amalgam of celebrity scientists’ tweets, advocacy groups’ YouTube videos, scientists’ Substacks, and conspiracy theorists’ podcasts—all of which are produced, curated, and amplified in fluid landscapes of online platforms.

The ability to understand and navigate this sprawling information ecosystem is a prerequisite for effective public engagement with science and science communication more broadly. Designing and assessing evidence-informed strategies to leverage the benefits and avoid the risks of current information ecologies should be seen, now more than ever, as a key objective for the scientific community (3). Unfortunately, researchers have struggled to produce a coherent, reliable, and actionable evidence base that could readily inform adaptations and decision-making, such as changes to scientific institutions’ training or engagement initiatives, updates to funders’ theories of change, or, more broadly, debates among policymakers about how (science) information and media could be regulated in ways that reflect and sustain democratic norms.

In short, we do not know enough about current information ecologies to enact effective, evidence-based science communication, which we define as “an empirical approach to defining and understanding audiences, designing messages, mapping communication landscapes, and—most importantly—evaluating the effectiveness of communication efforts” (4). For example, we know that social media facilitate new connections with (science) audiences in part by altering patterns of information seeking and consumption (5). However, as scientists have seized these opportunities (e.g., ref. 6), some research has shown upsides of new media—such as lower science-related knowledge gaps versus legacy media (7, 8)—while other work has shown downsides, including that uncivil comments on science news can reduce readers’ belief in scientists’ integrity (9).

As an expanding body of work catalogs ambivalent effects, the task of describing and reconciling disparate insights is a serious challenge for translating evidence into action. Below, however, we argue that research in this area also faces a series of even bigger challenges, including lagging theoretical work and conceptual slippage, methodological limitations related to data access constraints, and imbalances of power in academia–industry collaborations. Until we correctly diagnose these issues and their underlying causes, virtually all efforts to design and evaluate science communication or public engagement initiatives—regardless of their goals (10)—will struggle with the tectonic shifts induced by rapidly changing information ecologies and the lack of a clear evidence base about how best to navigate them.

As we discuss “new media” in this essay, our goal is not to demarcate transitioning media eras, but, rather, to surface a set of challenges for science communication research and practice that arise both from the current constellation of media technologies and from a host of societal factors related to the use, ownership, and regulation of an evolving media environment. First, we outline key features of the current (mediated) public sphere for science that can make communication difficult even at the best of times but which become especially challenging in the context of “wicked” issues, or those issues which are likely to require difficult trade-offs across stakeholders whose values and priorities diverge (3, 11). The COVID-19 pandemic was one such issue, and it highlighted how ill-prepared or underresourced scientific institutions are in terms of effectively leveraging current media ecologies to support nimble risk mitigation initiatives or scalable democratic deliberation with respect to high-stakes, uncertain science.

After sketching a landscape of challenges facing scientists and science communicators, we argue that science’s lagging or insufficient adaptation to emerging media ecologies is linked, in part, to researchers’ long-standing struggles to produce a clear evidence base to guide change. Many of these struggles are not new, but, as the United States and societies worldwide face risks from advances in issues like AI or human genome editing (HGE), we cannot afford to ignore the “elephants in the room” that have long hindered the study of modern media ecosystems as a key apparatus in the public sphere for science. We conclude with pathways forward for science communication research and practice, broadly arguing that efforts to reimagine science communication in the COVID-19 era and beyond will require resolving known issues in academic research, as well as a deep commitment to changes within science itself.

Emerging Information Ecologies and the Public Sphere for Science

The trajectory of current information ecologies presents unique challenges for science communication, and a broad theme which unites these challenges is that communication between scientists and different publics does not occur in a vacuum. Although it has always been important for communicators to look beyond message content to external factors that can influence desired outcomes (12), constantly changing information ecologies intensify and expand longstanding concerns about competition and contextualization. Messages from scientists and scientific institutions now compete not only with the alternative framing strategies of journalists (12) but also with “science influencers” and “celebrity scientists” (13, 14), as well as a deluge of nonscientific content and a still-debated quantity of misinformation (15) from communicators who may be much savvier or better financed than scientists.

In this competitive landscape, science communicators can expect to exert limited control over the presentation and movement of their messages, as audiences and proprietary algorithms (re)circulate and transform science content across a multimodal information system. Below, we highlight a series of key considerations for science communication in current media ecologies, focusing on those which showcase how the public sphere for science is now more complex in the scope of its activity and the acceleration of its transformations than ever before.

Multimodality and Multitasking.

Current science communication efforts exist in a multimodal communication landscape, where audiences often function in states of distraction as they engage in “media multitasking” (16). Science audiences should thus be expected to “move” from, for example, their Facebook feeds to articles in The Economist or HuffPost, or to YouTube videos and Reddit threads, and to do all of this while they are also engaged in (related) face-to-face interactions, private text messaging conversations, or while they are consuming print media, cable TV news, and their favorite podcasts.

With audiences relying on diverse amalgams of content and channels (17), science communicators cannot realistically capture end-to-end processes of content selection, consumption, sharing, and response on a single platform. Moreover, given that technological progress will continue to alter modes and patterns of interaction, studies of (science) communication in new information ecologies can expect to increasingly struggle with issues of ecological validity and the generalizability of findings across contexts, cultures, and time. Attempts to correct misperceptions, for example, may not replicate well across competitive framing environments or when multitasking media users are in states of distraction (e.g., ref. 18).

Optimization for Engagement and User Retention.

Given the proliferation of media platforms and modalities (19), information producers, media organizations, and social media platforms are economically incentivized to design and display information in ways that will optimize for user engagement and retention (3, 20, 21). Essentially, these organizations’ survival depends in part on the use or sale of users’ data, which can be most easily obtained by a) ensuring that users do not leave their platform and b) encouraging users to “engage” with information on their platform in ways that will meaningfully signify users’ preferences and emotional states, such as liking, sharing, or commenting on content (22).

Optimizing for retention and engagement can mean a variety of things. For example, some work suggests that users tend to engage more with content that aligns with their interests (23), facilitates moral outrage (24, 25), uses moral–emotional language (26, 27), is critical (28), or is negative (29). Data on user engagement, however, are often proprietary and inaccessible to researchers (a problem we discuss below), which means that research in this area often struggles to determine whether the links between these independent variables and heightened engagement are driven by user choices and psychology or are, instead, artifacts of algorithmic content display or the design of interfaces. For instance, interfaces enable or disable certain actions (e.g., “liking” is possible on Facebook, but “disliking” is not), constraining behaviors and shaping user habits, while these systems also reward particular forms of engagement (e.g., likes and comments) (30). Consequently, heightened engagement is likely driven by some combination of all of these factors (31, 32).
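How interface design censors the underlying signal can be sketched with a toy simulation. This is entirely our construction, not drawn from any platform’s data; the reaction proportions are illustrative assumptions. The point is simply that when an interface records approval but offers no equivalent for disapproval, disapproving and indifferent users become indistinguishable in the logged data.

```python
import random

random.seed(0)

# Hypothetical latent reactions to a post: approval (+1),
# indifference (0), or disapproval (-1). The assumed 30/50/20 split
# is illustrative, not an empirical estimate.
reactions = random.choices([-1, 0, 1], weights=[0.3, 0.5, 0.2], k=10_000)

# The interface logs only "likes" (approval); there is no dislike button.
likes = sum(1 for r in reactions if r == 1)
silent = sum(1 for r in reactions if r <= 0)

like_rate = likes / len(reactions)
print(f"observed like rate: {like_rate:.2f}")
print(f"silent users: {silent} -- disapproval and indifference collapse "
      "into the same (non)signal")
```

A researcher handed only the like counts cannot recover the split between disapproval and indifference among the silent majority; the affordance, not user psychology alone, determines what is measurable.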

As the algorithms underpinning new media platforms determine how to prioritize content for maximal user engagement and retention, science-related information will struggle to compete. Again, emotionally evocative information induces more engagement—even for science content (33)—but scientific institutions may be hesitant to use emotion, perhaps viewing it as a deviation from “the facts” or a persuasive effort that may seem to violate scientists’ role as neutral arbiters of information (34). However, when much of the content in the media landscape is emotional, “sticking to facts” can mean that other claims, including misinformation or pseudoscience, could reach people more frequently or effectively than scientists’ claims (34). Further, in contexts where science communicators aim to spark tolerant dialogue across social groups on morally charged topics—e.g., stem cell research—the use of emotionally compelling personal narratives may be more effective than facts alone (35). Debates continue, however, as to how science communicators can effectively use emotion as well as personal narrative alongside facts in ways that keep public discourse about science “grounded in reality” (36).

Scientific “Authority” as Gatekeeping Evolves.

Given that the infrastructural logic of new media is to optimize for user engagement, these environments are not guaranteed to elevate scientists’ voices over other actors, even on scientific issues (3). Asserting expert authority can therefore be more challenging than in the past. Prior to new media, journalistic practices dictated that science news coverage should involve consultation with experts, with newsroom norms and editorial gatekeeping thereby elevating scientists’ epistemic authority and controlling some degree of competition over science-related claims in the public sphere. With many legacy news outlets cutting their science sections (37), however, scientists have been forced to find more direct ways to communicate their research without the benefit of journalistic norms to reinforce their authority (12).

Among the new modes of more direct communication are scientists’ social media posts or comments, the dissemination of research reports via laboratory-maintained websites, and publications by scientists themselves in magazines like American Scientist or The Conversation. New media have facilitated more bidirectional communication through public interaction with scientists, but the increasing sophistication of algorithms and the proliferation of platforms—coupled with lower barriers to content creation for other actors—has intensified competition. Scientists must compete in an attention economy in which they are just one of many voices, and in which proprietary algorithms now serve as a new kind of gatekeeper.

Spillover, (Re)Contextualization, and Transformation.

In light of the multimodal ways that individuals now encounter information, make sense of it, and discuss or share it with others, information should be expected to “spill over” from one context or platform to another. Science audiences might, for instance, repost content from an institution like the Centers for Disease Control and Prevention (CDC) in variously open or closed messaging environments, such as WhatsApp threads or Facebook groups (e.g., ref. 38), with their own critical commentary, emotional cues, and interpretations that can influence the science-related attitudes and beliefs of other group members, including their perceptions of scientists and scientific institutions.

The spillover of messages through new media can be difficult—if not impossible—to track and control. Once a message is released “into the wild,” science communicators will have a limited ability to monitor the lifecycle of their content. While some multiplatform spillover is possible to observe with “trace data” about users’ activity, these data may be difficult to obtain, and they will be nonexistent for offline interactions or private conversations. Generally, the collection and use of trace data, although common, is also replete with ethical and practical limitations (39) that we will discuss later.

Blurring of Scientists’ Professional and Personal Speech.

As science communications in current media environments are contextualized by social cues and sometimes come directly from scientists via their personal accounts, there can be implications for publics’ perceptions of scientists as a social group, as well as for people’s beliefs about whether scientists are overstepping the perceived bounds of scientific authority. These outcomes matter because, if people view scientists as a “threatening” outgroup, they can begin to view scientists as less reputable and less competent, possibly supporting punitive actions against them (40, 41). Further, perceptions of scientists as allied with some groups and not others can implicate scientists in existing social conflicts, such as political polarization (42, 43).

For example, some research suggests that science influencers—i.e., influential scientists or other actors such as science journalists, who have large followings online—can have polarizing effects on public opinion of science in controversial issue contexts, in part through these actors’ use of social cues that facilitate perceived “ingroup” solidarity with some audiences while alienating others (13). Similarly, messages from celebrity scientists (e.g., Richard Dawkins or Neil deGrasse Tyson) (44) can influence audiences’ perceptions of the social allegiances and associated political, moral, or cultural “agendas” that scientists seem to have. For example, “science–atheism associations” in celebrity scientists’ communications have been shown to lower trust in scientists or belief in evolution among some religious groups, in part by raising concerns that scientists’ moral beliefs are misaligned with those of their communities (45, 46).

Among the many concerns that scientific institutions must navigate in changing information ecologies, therefore, is emerging evidence that scientists’ direct communications—including their personal sociopolitical commentary—could negatively influence publics’ perceptions of science, especially when their personal claims draw on their identities or expertise as scientists.

COVID-19: Uncertain, High-Stakes Science Meets Current Information Ecologies

Beyond the above challenges that current information ecologies pose for science communication in general, these environments also pose (or at least exacerbate) a set of additional challenges in the context of high-stakes, uncertain issues that have been referred to as “accelerated wicked issues,” with the science pertaining to COVID-19 being one example (11). Below, we surface three of these challenges, with recognition that there may be more.

At the root of the challenges we outline is the idea that wicked issues—including COVID-19, AI, HGE, neurological chimeras, and others—are characterized by far-reaching consequences as well as varied degrees and forms of uncertainty about both desired outcomes and unintended consequences (47). The intersecting complexities that define such issues tend to raise a host of ethical, moral, regulatory, and policy questions that science cannot answer, and which require input from diverse stakeholders far beyond scientists, ethicists, or legal experts (48). Notably, among the considerations in these debates is that some stakeholders will say “no” to continued research and development on certain topics or to certain policy proposals (49).

More and more of this badly needed public contestation about science is mediated (3). As these debates develop, stakeholders offer competing message frames about pathways forward, the upsides and downsides of applications, or implications for regulatory policy (50), thus amplifying concerns about competitive, multimodal communication landscapes. In the case of COVID-19, we saw exactly how messy an evolving debate over new, uncertain science can be in current information ecologies, especially when scientific research and communication are subject to strong external pressures and a demand for speed. As we explain below, three intersecting factors created a “perfect storm” for science communication pertaining to COVID-19.

Accelerated, Highly Visible Accumulation of Evidence.

Science moved at high speed during the pandemic (51). Journal editors, for example, expedited peer review processes in order to quickly publish pandemic-related research. At the outset of the crisis, it was clear that expedited publication processes would mean that some papers would later be retracted (11), which indeed happened for two high-profile papers in The New England Journal of Medicine and The Lancet on antimalarial and blood pressure drugs as related to COVID-19 (52).

Scientific findings related to COVID-19 were also communicated publicly, especially online, at earlier stages of knowledge development than is common for less urgent issues. Specifically, members of the public and policymakers watched as scientists engaged in an accelerated version of the messy, self-correcting work—known as the “accumulation of evidence” (53)—that is crucial to early research but is typically conducted with little or no publicity. As COVID-19 spread around the world, accelerated accumulation of evidence occurred under immense public pressure and scrutiny, with implications for ambient (political) contestation (54).

Policy Input Based on Uncertain Science in Media Systems that Reward Controversy.

As explained earlier, science’s value in decision-making about emerging, high-stakes science is necessarily limited, in the sense that science alone cannot say what society “should” do amid a complex landscape of policy-relevant considerations. During the pandemic, for example, arguments and evidence pertaining to public health were contested alongside concerns about increased domestic violence or substance abuse linked to social distancing (55). As these discussions developed, some scientists were prescribing societally disruptive policies based on inconclusive evidence, including via their personal social media accounts (11, 21).

Given that social media (re)contextualize scientists’ statements in terms of social cues—and given that these platforms can incentivize outrage, conflict, and controversy—scientists’ direct, policy-relevant communications based on uncertain science in current information ecologies can be precarious. Not only are the most outrageous assertions likely to be prioritized for display in ways that also incentivize scientists to hype their studies (56), but scientists’ assertions will also be read by audiences alongside social cues that some research suggests can cause even “nonpolitical” content to read as political or as implicated in identity-based disputes (e.g., ref. 57).

The Temptation to “Defend Science” Amid Societal Contestation.

In addition to the subtler ways that new media can politicize scientists’ statements or implicate them in ambient conflicts, societal contestation over uncertain science may also tempt scientists to explicitly “take sides” as a social group. For example, the line between scientists’ professional and personal statements was blurred during the pandemic when Nature (58), The Lancet Oncology (59), and Scientific American (60) all released editorials that criticized then-president Donald Trump’s handling of the COVID-19 response in the United States and explicitly endorsed President Biden. These editorials went against cautions expressed early in the pandemic about how “science must get political without getting partisan” (11). In fact, a study on Nature’s political endorsement found that exposure to it decreased Republicans’ confidence in the scientific community and can polarize partisans’ confidence overall (61).

Combining the challenges inherent to science communication described at the outset of this essay with the uncertain and high-stakes science of the pandemic led to a perfect storm for science. We therefore need greater reflexivity within science as a first step toward addressing these combined challenges. Unfortunately, however, the scientific community’s attempts to improve their communications in evidence-informed ways will be impeded by serious limitations in academic research on changing information ecologies.

New Scientific Standards for Studying Changing Information Ecologies?

Although academic research in science communication (and communication science more broadly) has long grappled with the implications of changing information ecologies, few findings replicate reliably across disciplines, methodologies, or social contexts. Below, we present a set of key challenges in the academic study of current information ecologies, all of which hinder researchers’ ability to provide usable, reliable insights that can inform scientific institutions’ communication strategies or engagement. Before outlining these challenges, we highlight an example of a topic—the issue of audiences tending to see content they agree with in modern media environments—about which clear insights have eluded academics.

One explanation for people mostly seeing content they agree with is echo chambers. Echo chambers are belief-consistent information environments that result from people choosing ideologically likeminded or otherwise congruent interpersonal and mediated sources. This self-sorting is partly a function of selective exposure to belief-consistent information online (62), with users seeking likeminded others to follow (63) and not following (or even unfollowing) nonpreferred sources (62).

The idea of selective exposure is not new and has been well documented for decades in work from psychology to communication science (64, 65). What is much less well understood, however, is whether this phenomenon is more pronounced or widespread in emerging online environments as compared to traditional, offline environments. Later, we discuss some of the data-related difficulties of systematically diagnosing phenomena like echo chambers in social media. For now, part of the answer may have to do with evidence that echo chambers are more likely to form on some platforms than others (66). In other words, some platforms may provide better tools for users to actively self-select into belief-consistent communities.

The other explanation for people seeing likeminded content is that online platforms algorithmically promote homogenous sorting, perhaps to attract users and relevant advertisers in ways that would not (or could not) occur in nonalgorithmic information ecologies. This explanation was popularized by Eli Pariser (67) under the label “filter bubbles.” While echo chambers and filter bubbles thus describe the same outcome—i.e., the formation of homogenous online information environments based on personal and political preferences—filter bubble theorizing attributes the outcome to algorithmic prioritization and other factors controlled by platforms, while echo chamber research attributes it to individuals’ choices.
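The observational equivalence at the heart of this debate can be illustrated with a minimal simulation. This is a sketch under assumed parameters, not a model of any real platform: in one mechanism, users preferentially click belief-consistent items from an unbiased feed (echo chamber); in the other, the platform ranks belief-consistent items to the top of a feed the user passively reads (filter bubble). Both yield highly homogeneous consumption, so consumption records alone cannot separate them.

```python
import random

random.seed(42)

N_USERS, N_ITEMS, FEED_SIZE = 1000, 200, 10

# Hypothetical users and content items, each with a binary "slant".
users = [random.choice([-1, 1]) for _ in range(N_USERS)]
items = [random.choice([-1, 1]) for _ in range(N_ITEMS)]

def homogeneity(consumed, ideology):
    # Fraction of consumed items matching the user's own slant.
    return sum(1 for s in consumed if s == ideology) / len(consumed)

def user_driven_feed(ideology):
    # "Echo chamber" mechanism: an unbiased feed, but the user clicks
    # belief-consistent items 90% of the time and others 10% of the time.
    feed = random.sample(items, FEED_SIZE * 3)
    keep_prob = lambda s: 0.9 if s == ideology else 0.1
    consumed = [s for s in feed if random.random() < keep_prob(s)]
    return consumed or [random.choice(feed)]  # guard against an empty read

def algorithm_driven_feed(ideology):
    # "Filter bubble" mechanism: the platform ranks belief-consistent
    # items to the top; the user passively reads the first FEED_SIZE.
    ranked = sorted(items, key=lambda s: (s != ideology, random.random()))
    return ranked[:FEED_SIZE]

results = {}
for label, mechanism in [("user-driven", user_driven_feed),
                         ("algorithmic", algorithm_driven_feed)]:
    scores = [homogeneity(mechanism(u), u) for u in users]
    results[label] = sum(scores) / len(scores)
    print(f"{label:>11s} mean homogeneity: {results[label]:.2f}")
```

Both mechanisms produce mean homogeneity near or at the maximum; an analyst observing only what users ultimately consumed could not tell which process generated the data. Distinguishing them requires exactly what platforms withhold: the pre-ranking feed and the curation logic.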

Currently, we have limited empirical evidence to enable delineation between these explanations (20). Overcoming this problem, however, is an urgent challenge for at least three reasons. First, diagnosing and quantifying the relative impacts of user behavior as compared to platform modification of information dynamics has immense policy implications when it comes to regulating and protecting the flow of scientific information on social media. Second, if it turns out that users’ agency is rather limited, then efforts to “inoculate” citizens against (or to “prebunk”) misinformation might have limited day-to-day impact (21). Finally, understanding the relative effects of user behavior, algorithmic curation, and other platform modalities on how scientific information is shared, consumed, and repurposed among different stakeholders is a necessary condition for effectively navigating changing information ecologies.

For a variety of reasons, researchers lack sufficiently clear data to facilitate an evidence-based conversation about “whether social media platforms that are designed to monetize outrage and disagreement among users are the most productive channel for convincing skeptical publics [about settled] science” (3). Producing the necessary evidence will require overcoming five challenges currently faced by the scholarly community, which we outline below.

Challenge 1: Scholarly Access to Proprietary Platform Data.

Researchers who want to study social media do not have consistent access to most user, platform, and content data. There is a widening consensus that evidence-based policy and regulation “[requires] better access to data […as…] only tech companies can tell who is reading what” (68). Indeed, tensions between “private-sector norms of flexibility, efficiency, and profitability” and “traditional scholarly ones of reliability, critical scrutiny, and openness” (69) are now an urgent problem.

In 2019, for example, a number of philanthropies—including the Laura and John Arnold Foundation, The Charles Koch Foundation, and The Alfred P. Sloan Foundation—threatened to end a collaboration with Facebook that sought to understand the influence of misinformation on US elections, citing Facebook’s reluctance to “address the inaccessibility of the proprietary data increasingly necessary for robust research” (70, 71). Similarly, Senator Chris Coons called for “reasonable standards for disclosure and transparency” in a 2024 hearing of the US Senate Committee on the Judiciary, arguing that, “if we’re going to legislate responsibly about the management of the content on your platforms, we need to have better data” (72).

More specifically, because platforms restrict access to individual-level user data as well as data about the proprietary algorithms used for information curation and targeting, social scientists who study modern information ecologies are often forced into (combinations of) three “devil’s bargains.” The first is to use “proxy” research designs that simulate selective instances of online behavior or information exposure in traditional experimental or other behavioral science studies. A second compromise is to design research primarily on platforms such as Twitter/X, which do share some data with researchers or that allow for scraping of public data from their platforms. This “Drunkard’s Search” approach (73) leads to studies that primarily examine phenomena or platforms that are readily available for observation rather than prioritizing phenomena that are most relevant to societal issues or policy. A third compromise is for researchers to collaborate with social media platforms, which sometimes means granting industry partners control over the coding of variables and the types of primary data that are shared or not shared, among other things. In such collaborations, industry partners are likely to have a vested policy-related interest in the results (74), raising concerns about conflicts of interest (COIs) as well as questions about compromised quality standards related to replicability and reproducibility (75).

Regardless of which compromise researchers strike, our ability to study changing information ecologies is severely limited to (and by) data access considerations. For example, as Fig. 1 illustrates, in order to determine how social media content can affect users, researchers need to know which pieces of content users actually saw, what other (potentially contradictory) information it was surrounded by on users’ timelines, and what kinds of social endorsements or comments accompanied it at time of exposure. Unfortunately, raw social media data tend to be “black boxed,” with platforms providing only limited access to carefully curated subsets of data via Application Programming Interfaces (APIs). Platforms modify APIs continually, often with little notice about the nature and timing of those tweaks, thus limiting the replicability of datasets and analyses. Further, given the proprietary nature of APIs, researchers are often unable to identify and document what data were excluded from their queries, significantly impacting the generalizability and interpretation of their results (76).

Fig. 1. What social science can(not) see.

Unfortunately, seemingly “open” data scraped from public content or gleaned from APIs can also be confounded (76). Readily observable online interactions between users and content are likely distorted by algorithmic curation, timeline prioritization, recommender systems, social endorsements from within and outside of users’ networks, A/B tests (i.e., testing two versions of a headline or an advertisement on different people to see which works best), and countless other problems. These confounds potentially impact both user behavior and information flows in ways that researchers are unable to imagine, much less to capture and control for. In the case of platforms like TikTok—which has become an important news source, especially for younger audiences (77)—this “black box” problem goes even further, with content moderation and censorship raising additional issues of data transparency and access (78).

Unsurprisingly, insights from proxy research designs built on select datasets struggle to illuminate science information ecosystems as a whole: such designs cannot account for the confounds researchers never see, and platforms’ data disclosure choices leave researchers guessing about what data they were (unwittingly) denied for a particular project (76).

Challenge 2: Reconceptualizing Information Ecologies as Social Systems.

As researchers are forced to study primarily what is available using proxy designs, our research questions are necessarily narrowed. For example, we have tended to study isolated phenomena confined within a single platform—especially X and Facebook—despite knowing that we also need to situate idiosyncratic, platform-specific insights into larger systems of (mediated) communication and interaction. Little research about science communication in current information ecologies, for example, has focused on the platforms or interaction environments that are most difficult to study, in part because researchers may not even be able to enter those digital spaces (such as invite-only apps or private groups). So-called “alternative” social media such as Truth Social, Telegram, or Stormfront are but three examples of spaces where research has been scarce (79), and closed groups on more mainstream messaging apps like WhatsApp are another.

To understand how these omissions can limit our understanding of important social phenomena, consider the movement of many Trump supporters to apps like the now-defunct Parler when Trump was banned from X. Without studying Parler—and without doing so in concert with data from X, as well as an empirical sketch of the offline interactions between relevant individuals—it is difficult to answer basic research questions: What made Parler attractive to this group? Did interactions on X play a role in this shift, or were offline encounters more influential? And so on.

While some recent science communication research has begun to address a broader range of platforms (e.g., ref. 80), we still need more attention to understudied technologies, as well as an increase in multiplatform studies (and methodological insights about how best to conduct them). Until we can more rigorously study the end-to-end, multimodal processes by which users form (science) attitudes and beliefs, we will lack clear answers to key research questions, such as the role of social media in the uptake and spread of science-related misinformation.

Challenge 3: Core Research Tasks Cannot be Ceded to Platforms.

To overcome some of these issues, researchers occasionally collaborate with platforms. In those collaborations, platforms sometimes define relevant data (e.g., they identify instances of “misinformation”), and, based on their definition, they select content to share with academic collaborators. At its most extreme, this practice can cede the core research tasks of conceptualization and operationalization to industry actors, raising questions about conflicts of interest (COIs), among other issues.

Even in cases where researchers have some say in how concepts are defined and operationalized, there is little agreement among researchers, peer reviewers, journal editors, and other actors about the academic value of studies that sacrifice control over conceptualization and measurement in favor of novel and otherwise inaccessible data. The fact that other researchers are unable to reproduce these studies due to data access constraints further complicates assessments of the value of this work while also raising questions about research ethics (Challenge 4: Ethical Challenges in Research on Current Information Ecologies).

The scale of this problem is growing along three interrelated dimensions. First, even studies that rely on industry-provided data often lack controls for the individual-level and platform-related confounds discussed earlier. Among a recent set of papers resulting from a 2020 US election project initiated and funded by Meta (all published with coauthorship by Meta employees), many analyses relied on data coded by Meta without oversight or input from outside researchers and did not control for algorithmic confounds, nor for many of the user variables by which Facebook and Instagram routinely target content on individual timelines or across groups (74, 81–83). In other words, not only are none of the studies replicable by other researchers, but it is also unclear whether the results are accurate depictions of current information ecologies.

Second, these concerns are compounded by COIs emerging from how industry partners control not only data access and interpretation but also scientific discourse more broadly. Last year, for example, Meta’s press release regarding the results of the 2020 election project promoted an interpretation of the study designs and their outcomes which suggested that Facebook and Instagram essentially have no negative effects on democratic systems, in terms of exacerbating polarization or meaningfully influencing political behavior (84). Yet, the study designs themselves could have predetermined the results, given that the project was conducted amid a remarkably intense election where people’s preferences were already likely to be strong. “Pretreatment” of this kind can increase the likelihood of finding “no effects” in experimental studies, and it can be difficult to parse whether a “null” effect is truly null or whether there was simply no room left for attitudes to move (85).
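The pretreatment logic can be illustrated with a toy censored-scale simulation of our own (all numbers invented for illustration): when attitudes already sit near the ceiling of a response scale before the experiment begins, even a genuinely effective treatment produces a sharply attenuated measured effect.

```python
import random

random.seed(1)

def measured_effect(baseline_mean, true_effect=0.5, n=50_000, ceiling=7.0):
    """Attitudes on a 1-7 scale; the treatment truly adds 'true_effect',
    but responses are censored at the scale ceiling."""
    total = 0.0
    for _ in range(n):
        pre = min(ceiling, max(1.0, random.gauss(baseline_mean, 1.0)))
        post = min(ceiling, pre + true_effect)
        total += post - pre
    return total / n

moderate_priors = measured_effect(baseline_mean=4.0)   # room to move
strong_priors = measured_effect(baseline_mean=6.8)     # already near ceiling
print(f"measured effect: {moderate_priors:.2f} vs {strong_priors:.2f}")
```

The same true effect yields a noticeably smaller measured movement in the pretreated population, which a naive reading would mistake for evidence of “no effect.”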

Further, publications from the 2020 project claimed that outside academics “were not financially compensated” (74). However, the team subsequently disclosed that “of the 17 members of the core academic team on the research project, one owned individual stocks, one had previously served as a paid consultant on another project, nine had received funding for research for other projects, and three had received fees for attending or organizing an event or serving as an outside expert at an event” (86). This fact should not be interpreted as criticism of academics who explore new ways of accessing proprietary data. It does raise COI concerns, however. Financial entanglements that would be unacceptable for other types of research continue to plague the study of information ecologies. For example, a study conducted by academics and tobacco companies that analyzed data provided by industry would likely be unpublishable, as would a study initiated by Purdue Pharma to assess the addictive qualities of opioids (especially if access to users, medications, and control groups were at least partially shaped by industry, similar to what happened with the Meta studies) (87).

In (science) communication research, the stranglehold that platforms and other industry players currently have over data has created power asymmetries between industry and outside researchers that make enforcement of ethical and COI guardrails by the scholarly community very difficult. Rapidly emerging technologies controlled by a few industry players are likely to exacerbate this problem: “Of 33 professors whose funding could be traced who wrote on AI ethics for the top journals Nature and Science, for example, all but one had taken grant money from the tech giants or had worked as their employees or contractors” (88).

All of this raises a third concern about industry systematically shaping the direction of academic discourse, which poses a significant challenge as scholars attempt to generate consensus on the potentially detrimental effects of changing information ecologies. For instance, industry players like Meta make significant investments into long-term research streams that claim to contradict existing scholarship on phenomena like filter bubbles, in order to absolve their platforms of responsibility for adverse effects on society or individuals (74, 81–83, 89, 90). For the most recent set of 2020 election papers, aforementioned attempts by Meta to establish a “consensus” narrative led to pushback from both outside researchers and Science magazine, which warned the company “that it would publicly dispute an assertion that the published studies should be read as largely exonerating Meta of a contributing role in societal divisions” (91).

Challenge 4: Ethical Challenges in Research on Current Information Ecologies.

Even if we could overcome the above problems and gain access to the kinds of data needed for rigorous, reliable, and reasonably replicable systems-level insights, the research community would still face ethical challenges. Although there may be more, we touch briefly in this section on concerns related to consent, privacy, failures to minimize harm to research participants, and COIs.

Studies that use APIs to collect digital trace data are complicated by the reality that participants have not consented to being studied (76). While “users may have formally agreed to their data being used as stated in the platform’s [terms of service], they may not actually be aware of being observed by researchers, and have not given informed consent to participate in the specific research project at hand” (39). One way to address this consent issue is data donation: users who participate in a given study not only consent to it but also actively share their data by, for instance, downloading it from the platform of interest and uploading it to a study-related website. Although this approach addresses consent issues, it introduces self-selection bias, as users who do not feel tech-savvy enough to download their data are less likely to participate (39).

Beyond consent concerns, all studies using digital trace data have an obligation to protect users’ privacy in their handling and potential sharing of the data (39, 76). This is perhaps especially true in cases where the exact wording of user-generated content (such as social media posts) will be shared and where pseudonymizing the user’s name is insufficient to ensure anonymity (because a user could easily be identified via a simple Google search of a post’s wording) (76).
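The re-identification risk described above amounts to a simple join on verbatim text, as this fabricated example shows (the pseudonym, handle, and post are all invented):

```python
# A hypothetical "de-identified" release: pseudonyms, but verbatim post text.
released = [
    {"user": "participant_017", "text": "Just saw the comet from my porch!"},
]

# The public platform, where the same text remains searchable (a dict
# stands in for a search engine here).
public_posts = {
    "Just saw the comet from my porch!": "@jane_doe_1987",
}

def reidentify(rows, public_index):
    """Join the 'anonymized' release to public posts on the verbatim text."""
    return {r["user"]: public_index.get(r["text"]) for r in rows}

matches = reidentify(released, public_posts)
print(matches)  # pseudonym resolved back to a public handle
```

Because the verbatim post is a quasi-identifier, pseudonymization alone offers no protection; safer releases paraphrase, aggregate, or withhold exact wording.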

Finally, platform-controlled experiments, or experiments that result from academic–platform collaborations, can fail to minimize harm to the unwitting users of these platforms, violating the ethical research principle of beneficence, among other concerns. For example, Facebook manipulated the emotional tone of posts for some of its users (92), while LinkedIn manipulated the number of posts coming from weak versus strong ties (93). The Facebook study examined whether exposure to fewer positive posts would impact users’ expressions of negative emotions, and it raised a variety of ethical concerns related to Facebook users’ agency and exploitation, to the increased risk of emotional harm experienced by certain study participants (94), and to journal editors’ decision to publish the study results in the first place (95). Similarly, in the LinkedIn study, individuals who saw fewer posts from weak ties got fewer jobs, again raising questions about the minimization of harm or, conversely, about unequal access to benefits.

Challenge 5: The Urgent Need for Productive Paths Forward.

Most areas of science are in the middle of a tectonic transformation, triggered or at least accelerated by the arrival of big and deep data, along with the rapid proliferation of computational tools and techniques (96). This transformation has been particularly noticeable in fields like communication science, information science, political science, and other areas of the social sciences that deal with how information gets produced, distributed, and used in societies (97).

As we showed earlier, the availability of new data sources for studying science communication has emerged concurrently with concerns about science-related dis- and misinformation spreading online (98). Addressing these challenges requires sophisticated modeling of how falsehoods spread in our societies using the best available data (99). What kinds of data do we have available? How can we mine them meaningfully and in adherence with the best practices and norms of science, including replicability, reproducibility, and ethical research conduct?
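To give a sense of what such modeling involves, here is a toy independent-cascade simulation of our own (the graph and all parameters are invented for illustration): whether a falsehood fizzles or spreads widely depends on whether the per-contact sharing probability pushes the cascade past a critical threshold.

```python
import random

random.seed(2)

def cascade_reach(n=2_000, avg_degree=8, p_share=0.05, seeds=5):
    """Independent-cascade toy model on a random graph: each sharing user
    gets one chance to pass the falsehood to each neighbor with
    probability p_share. Returns the fraction of users reached."""
    neighbors = [[] for _ in range(n)]
    for i in range(n):                      # Erdos-Renyi-style random edges
        for _ in range(avg_degree // 2):
            j = random.randrange(n)
            if j != i:
                neighbors[i].append(j)
                neighbors[j].append(i)
    shared = set(random.sample(range(n), seeds))
    frontier = list(shared)
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbors[u]:
                if v not in shared and random.random() < p_share:
                    shared.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(shared) / n

reach_low = cascade_reach(p_share=0.05)    # subcritical: cascade fizzles
reach_high = cascade_reach(p_share=0.30)   # supercritical: cascade spreads
print(f"reach at p=0.05: {reach_low:.1%}; at p=0.30: {reach_high:.1%}")
```

Real modeling efforts must, of course, estimate such parameters from the very trace data whose limitations this essay describes, which is precisely why data access matters.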

The answers are not as clear as they might seem, and there are no magic bullet solutions. One extreme idea is to put computational work on hold until we can map a clearer pathway forward. This is fundamentally at odds, however, with the urgency of the problems at hand. Leaving academic researchers without access to data about how citizens navigate current information ecologies is unwise in a world in which many Americans at least somewhat regularly get their news from social media. Yet, calls to force platforms to make data and algorithms public are not without caveats, either. Among the downsides of this option are economic disincentives to innovate, the loss of privacy for users when datasets go public, and malevolent actors intentionally “gaming” public algorithms during political campaigns (100).

Instead, we argue for a more collaborative approach. One could imagine, for instance, the creation of “gated” panels of users on various platforms—say, a panel of 500,000 Facebook users—that would be available for academic research. Users in these “research panels” would not be included in proprietary, commercial A/B testing and would, therefore, be unaffected by algorithms that could confound research. Participants could be recruited following the Common Rule and debriefed when appropriate. They could also rotate in and out of the panel to minimize fatigue or other confounds.

Of course, research panels would not solve all the problems outlined in this essay. Proprietary algorithms and commercial A/B testing would continue to confound real-world interactions, such that research results from the panels would lack some validity and generalizability. Still, the panel idea is representative of the type of “good-faith” compromise we need to discuss across industry and academia if we hope to resolve the issues of data control, COIs, and transparency that currently hinder independent research and robust scientific consensus. Whatever compromises may be reached, independent advisory boards would likely be a prerequisite for buy-in among industry and academic partners, who have overlapping interests in harnessing the power of emerging communication platforms within regulatory frameworks that, ideally, optimize both commercial and democratic outcomes. Such advisory boards would need to include independently and equitably appointed members representing a wide range of stakeholder groups.

Finally, a more reliable understanding of changing information ecologies will not automatically improve scientists’ or scientific institutions’ public communication and engagement practices, given well-documented structural and cultural barriers (101–103). Consequently, we argue that the National Academies of Sciences, Engineering, and Medicine (NASEM) should convene dialogues across academia, industry, and other public stakeholders to generate and execute solutions to the challenges we have described, some of which require reflexivity within science itself. Within this effort, NASEM should continue to state with as much frequency and authority as possible that the elephants in the room with respect to industry–academia asymmetries must be addressed. Settling for circumstances in which researchers are unable to provide the scientific community and its institutions a reliable evidence base to drive effective science communication strategies in modern information environments is simply not an option.

Acknowledgments

Author contributions

N.M.K., I.F., and D.A.S. performed research and wrote the paper.

Competing interests

The authors declare no competing interest.

Footnotes

D.A.S. is an organizer of this Special Feature.

This article is a PNAS Direct Submission. J.N.D. is a guest editor invited by the Editorial Board.

Data, Materials, and Software Availability

There are no data underlying this work.

References

  • 1.Brossard D., Scheufele D. A., Science, new media, and the public. Science 339, 40–41 (2013). [DOI] [PubMed] [Google Scholar]
  • 2.Leshner A. I., Public engagement with science. Science 299, 977 (2003). [DOI] [PubMed] [Google Scholar]
  • 3.Brossard D., Scheufele D. A., The chronic growing pains of communicating science online. Science 375, 613–614 (2022). [DOI] [PubMed] [Google Scholar]
  • 4.Kahan D. M., Scheufele D. A., Jamieson K. H., “Introduction” in The Oxford Handbook of the Science of Science Communication, Jamieson K. H., Kahan D. M., Scheufele D. A., Eds. (Oxford University Press, 2017). vol. 1. [Google Scholar]
  • 5.Dimmick J., Feaster J. C., Hoplamazian G. J., News in the interstices: The niches of mobile media in space and time. New Media Soc. 13, 23–39 (2011). [Google Scholar]
  • 6.Bik H. M., Goldstein M. C., An introduction to social media for scientists. PLoS Biol. 11, e1001535 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Cacciatore M. A., Scheufele D. A., Corley E. A., Another (methodological) look at knowledge gaps and the Internet’s potential for closing them. Public Underst. Sci. 23, 376–394 (2014). [DOI] [PubMed] [Google Scholar]
  • 8.Anderson J. T. L., Howell E. L., Xenos M. A., Scheufele D. A., Brossard D., Learning without seeking? Incidental exposure to science news on social media & knowledge of gene editing. J. Sci. Commun. 20, A01 (2021). [Google Scholar]
  • 9.Gierth L., Bromme R., Attacking science on social media: How user comments affect perceived trustworthiness and credibility. Public Underst. Sci. 29, 230–247 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Scheufele D. A., Krause N. M., Freiling I., Brossard D., What we know about effective public engagement on CRISPR and beyond. Proc. Natl. Acad. Sci. U.S.A. 118, e2004835117 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Scheufele D. A., Krause N. M., Freiling I., Brossard D., How not to lose the COVID-19 communication war. Issues Sci. Technol. (2020), https://issues.org/covid-19-communication-war/.
  • 12.Scheufele D. A., Science communication as political communication. Proc. Natl. Acad. Sci. U.S.A. 111, 13585–13592 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Chinn S., Hiaeshutter-Rice D., Chen K., How science influencers polarize supportive and skeptical communities around politicized science: A cross-platform and over-time comparison. Polit. Commun. 41, 627–648 (2023), 10.1080/10584609.2023.2201174. [DOI] [Google Scholar]
  • 14.Fahy D., Lewenstein B., “Scientists in popular culture: The making of celebrities” in Routledge Handbook of Public Communication of Science and Technology, Bucchi M., Trench B., Eds. (Routledge, 2021), pp. 33–52. [Google Scholar]
  • 15.Altay S., Berriche M., Acerbi A., Misinformation on misinformation: Conceptual and methodological challenges. Soc. Media Soc. 9, 20563051221150412 (2023). [Google Scholar]
  • 16.Srivastava J., Media multitasking performance: Role of message relevance and formatting cues in online environments. Comput. Hum. Behav. 29, 888–895 (2013). [Google Scholar]
  • 17.Newman N., Fletcher R., Eddy K., Robertson C. T., Nielsen R. K., Reuters Institute Digital News Report 2023 (Reuters Institute for the Study of Journalism, 2023). [Google Scholar]
  • 18.Druckman J. N., Correcting misperceptions of the other political party does not robustly reduce support for undemocratic practices or partisan violence. Proc. Natl. Acad. Sci. U.S.A. 120, e2308938120 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Chadwick A., The Hybrid Media System: Politics and Power (Oxford University Press, 2017). [Google Scholar]
  • 20.Scheufele D. A., Understanding (perceptions of) emerging information ecologies. J. Commun. Monogr. 24, 141–145 (2022). [Google Scholar]
  • 21.Scheufele D. A., Krause N. M., Freiling I., Misinformed about the “infodemic?” Science’s ongoing struggle with misinformation. J. Appl. Res. Mem. Cogn. 10, 522–526 (2021). [Google Scholar]
  • 22.Scheufele D. A., Scientific misinformation: A perfect storm, missteps, and moving forward. Cell 184, 1402–1406 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Del Vicario M., et al. , The spreading of misinformation online. Proc. Natl. Acad. Sci. U.S.A. 113, 554–559 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Brady W. J., McLoughlin K., Doan T. N., Crockett M. J., How social learning amplifies moral outrage expression in online social networks. Sci. Adv. 7, eabe5641 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Crockett M. J., Moral outrage in the digital age. Nat. Hum. Behav. 1, 769–771 (2017). [DOI] [PubMed] [Google Scholar]
  • 26.Brady W. J., Wills J. A., Jost J. T., Tucker J. A., Van Bavel J. J., Emotion shapes the diffusion of moralized content in social networks. Proc. Natl. Acad. Sci. U.S.A. 114, 7313–7318 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Valenzuela S., Piña M., Ramírez J., Behavioral effects of framing on social media users: How conflict, economic, human interest, and morality frames drive news sharing. J. Commun. 67, 803–826 (2017). [Google Scholar]
  • 28.Pew Research Center, Partisan Conflict and Congressional Outreach (Pew Research Center, 2017). [Google Scholar]
  • 29.Kätsyri J., Kinnunen T., Kusumoto K., Oittinen P., Ravaja N., Negativity bias in media multitasking: The effects of negative social media messages on attention to television news broadcasts. PLoS ONE 11, e0153712 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Ceylan G., Anderson I. A., Wood W., Sharing of misinformation is habitual, not just lazy or biased. Proc. Natl. Acad. Sci. U.S.A. 120, e2216614120 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Haidt J., After Babel: Why the past 10 years of American life have been uniquely stupid. The Atlantic 329, 54–66 (2022). [Google Scholar]
  • 32.Haidt J., Rose-Stockwell T., The dark psychology of social networks. Why it feels like everything is going haywire. The Atlantic 324, 56–60 (2019). [Google Scholar]
  • 33.Milkman K. L., Berger J., The science of sharing and the sharing of science. Proc. Natl. Acad. Sci. U.S.A. 111, 13642–13649 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Yeo S. K., McKasy M., Emotion and humor as misinformation antidotes. Proc. Natl. Acad. Sci. U.S.A. 118, e2002484118 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Kubin E., Puryear C., Schein C., Gray K., Personal experiences bridge moral and political divides better than facts. Proc. Natl. Acad. Sci. U.S.A. 118, e2008389118 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Van Bavel J. J., Reinero D. A., Spring V., Harris E. A., Duke A., Speaking my truth: Why personal experiences can bridge divides but mislead. Proc. Natl. Acad. Sci. U.S.A. 118, e2100280118 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Russell C., Covering controversial science: Improving reporting on science and public policy (Joan Shorenstein Center on the Press, Politics and Public Policy Working Paper Series, 2006).
  • 38.Smith N., Graham T., Mapping the anti-vaccination movement on Facebook. Inf. Commun. Soc. 22, 1310–1327 (2017). [Google Scholar]
  • 39.Breuer J., Bishop L., Kinder-Kurlanda K., The practical and ethical challenges in acquiring and sharing digital trace data: Negotiating public–private partnerships. New Media Soc. 22, 2058–2080 (2020). [Google Scholar]
  • 40.Krause N. M., Scheufele D. A., Freiling I., Brossard D., The trust fallacy: Scientists’ search for public pathologies is unhealthy, unhelpful, and ultimately unscientific. Am. Sci. 109, 226–231 (2021). [Google Scholar]
  • 41.Nauroth P., Gollwitzer M., Kozuchowski H., Bender J., Rothmund T., The effects of social identity threat and social identity affirmation on laypersons’ perception of scientists. Public Underst. Sci. 26, 754–770 (2017). [DOI] [PubMed] [Google Scholar]
  • 42.Pielke R. A. Jr., The Honest Broker: Making Sense of Science in Policy and Politics (Cambridge University Press, New York, NY, 2007). [Google Scholar]
  • 43.Sarewitz D., Science must be seen to bridge the political divide. Nature 493, 7 (2013). [DOI] [PubMed] [Google Scholar]
  • 44.Fahy D., The New Celebrity Scientists: Out of the Lab and into the Limelight (Rowman & Littlefield, 2015). [Google Scholar]
  • 45.Simpson A., Rios K., Is science for atheists? Perceived threat to religious cultural authority explains U.S. Christians’ distrust in secularized science. Public Underst. Sci. 28, 740–758 (2019). [DOI] [PubMed] [Google Scholar]
  • 46.Unsworth A., Voas D., The Dawkins effect? Celebrity scientists, (non)religious publics and changed attitudes to evolution. Public Underst. Sci. 30, 434–454 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Funtowicz S. O., Ravetz J. R., “Three types of risk assessment and the emergence of post-normal science” in Social Theories of Risk, Krimsky S., Golding D., Eds. (Praeger, Westport, CT, 1992), pp. 251–274. [Google Scholar]
  • 48.Jasanoff S., Hurlbut J. B., Saha K., CRISPR democracy: Gene editing and the need for inclusive deliberation. Issues Sci. Technol. 32, 37 (2015). [Google Scholar]
  • 49.Evans J. H., Can the public express their views or say no through public engagement? Environ. Commun. 14, 881–885 (2020). [Google Scholar]
  • 50.Scheufele D. A., Lewenstein B. V., The public and nanotechnology: How citizens make sense of emerging technologies. J. Nanopart. Res. 7, 659–667 (2005). [Google Scholar]
  • 51.Marcus A., Oransky I., The science of this pandemic is moving at dangerous speeds. Wired, 2020. https://www.wired.com/story/the-science-of-this-pandemic-is-moving-at-dangerous-speeds/. Accessed 4 January 2024.
  • 52.Rabin R. C., Gabler E., Two huge Covid-19 studies are retracted after scientists sound alarms. NYTimes, 2020. https://www.nytimes.com/2020/06/04/health/coronavirus-hydroxychloroquine.html. Accessed 6 January 2024.
  • 53.Popper K., Logik der Forschung [The Logic of Scientific Discovery] (Mohr, Tübingen, Germany, 10th ed., 1994). [Google Scholar]
  • 54.Krause N. M., Freiling I., Scheufele D. A., The infodemic ‘infodemic:’ Toward a more nuanced understanding of truth-claims and the need for (not) combatting misinformation. Ann. Am. Acad. Pol. Soc. Sci. 700, 112–123 (2022). [Google Scholar]
  • 55.DeLuca S., Papageorge N., Kalish E., The Unequal Cost of Social Distancing (Johns Hopkins Coronavirus Resource Center, 2020). [Google Scholar]
  • 56.West J. D., Bergstrom C. T., Misinformation in and about science. Proc. Natl. Acad. Sci. U.S.A. 118, e1912444117 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Settle J. E., Frenemies: How Social Media Polarizes America (Cambridge University Press, 2018). [Google Scholar]
  • 58.The Editors, Why Nature supports Joe Biden for US president. Nature 586, 335 (2020). [DOI] [PubMed] [Google Scholar]
  • 59.The Editors, Will cancer care be a winner in the US election? Lancet Oncol. 21, 1253 (2020). [DOI] [PubMed] [Google Scholar]
  • 60.Scientific American, Scientific American endorses Joe Biden. Sci. Am. (2020). [Google Scholar]
  • 61.Zhang F. J., Political endorsement by Nature and trust in scientific expertise during COVID-19. Nat. Hum. Behav. 7, 696–706 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Tokita C. K., Guess A. M., Tarnita C. E., Polarized information ecosystems can reorganize social networks via information cascades. Proc. Natl. Acad. Sci. U.S.A. 118, e2102147118 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Mosleh M., Martel C., Eckles D., Rand D. G., Shared partisanship dramatically increases social tie formation in a Twitter field experiment. Proc. Natl. Acad. Sci. U.S.A. 118, e2022761118 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Donsbach W., Medienwirkung Trotz Selektion: Einflussfaktoren auf die Zuwendung zu Zeitungsinhalten [Media Effects Despite Selection: Influences on Attention to Newspaper Content] (Böhlau, Köln, Germany, 1991). [Google Scholar]
  • 65.Festinger L., A Theory of Cognitive Dissonance (Stanford University Press, Stanford, CA, 1957). [Google Scholar]
  • 66.Cinelli M., De Francisci Morales G., Galeazzi A., Quattrociocchi W., Starnini M., The echo chamber effect on social media. Proc. Natl. Acad. Sci. U.S.A. 118, e2023301118 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Pariser E., The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think (Penguin, New York, NY, 2011). [Google Scholar]
  • 68.How disinformation works—And how to counter it. The Economist, 2024. https://www.economist.com/leaders/2024/05/02/how-disinformation-works-and-how-to-counter-it. Accessed 5 June 2024.
  • 69.Social Science Research Council, To Secure Knowledge: Social Science Partnerships for the Common Good (Social Science Research Council, Brooklyn, NY, 2018). [Google Scholar]
  • 70.Social Science Research Council, Statement from Social Science Research Council President Alondra Nelson on the Social Media and Democracy Research Grants Program (2019).
  • 71.Silverman C., Exclusive: Funders have given Facebook a deadline to share data with researchers or they’re pulling out (BuzzFeed News, 2019). [Google Scholar]
  • 72.U.S. Senate Committee on the Judiciary, Hearing: Big Tech and the online child sexual exploitation crisis (2024).
  73. Jervis R., "The drunkard's search" in Explorations in Political Psychology, Iyengar S., McGuire W. J., Eds. (Duke University Press, Durham, NC, 1993), pp. 338–360.
  74. Nyhan B., et al., Like-minded sources on Facebook are prevalent but not polarizing. Nature 620, 137–144 (2023).
  75. Mackintosh J., Diversity was supposed to make us rich. Not so much. The Wall Street Journal, 2024. https://www.wsj.com/finance/investing/diversity-was-supposed-to-make-us-rich-not-so-much-39da6a23. Accessed 7 July 2024.
  76. Freiling I., Krause N. M., Scheufele D. A., Chen K., The science of open (communication) science: Toward an evidence-driven understanding of quality criteria in communication research. J. Commun. 71, 686–714 (2021).
  77. Matsa K. E., More Americans Are Getting News on TikTok, Bucking the Trend Seen on Most Other Social Media Sites (Pew Research Center, 2023).
  78. Fifield A., TikTok's owner is helping China's campaign of repression in Xinjiang, report finds. The Washington Post, 2019. https://www.washingtonpost.com/world/tiktoks-owner-is-helping-chinas-campaign-of-repression-in-xinjiang-report-finds/2019/11/28/98e8d9e4-119f-11ea-bf62-eadd5d11f559_story.html. Accessed 4 January 2024.
  79. Hawkins I., Chinn S., Populist views of science: How social media, political affiliation, and Alt-Right support affect scientific attitudes in the United States. Inf. Commun. Soc. 27, 520–537 (2024). 10.1080/1369118X.2023.2219724.
  80. Walter D., Ophir Y., Lokmanoglu A. D., Pruden M. L., Vaccine discourse in white nationalist online communication: A mixed-methods computational approach. Soc. Sci. Med. 298, 114859 (2022).
  81. Guess A. M., et al., Reshares on social media amplify political news but do not detectably affect beliefs or opinions. Science 381, 404–408 (2023).
  82. Guess A. M., et al., How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381, 398–404 (2023).
  83. González-Bailón S., et al., Asymmetric ideological segregation in exposure to political news on Facebook. Science 381, 392–398 (2023).
  84. Clegg N., Groundbreaking studies could help answer the thorniest questions about social media and democracy. Meta, 2023. https://about.fb.com/news/2023/07/research-social-media-impact-elections/. Accessed 4 January 2024.
  85. Druckman J. N., Leeper T. J., Learning more from political communication experiments: Pretreatment and its effects. Am. J. Polit. Sci. 56, 875–896 (2012).
  86. Election Research Project, US 2020 Facebook & Instagram election study. Frequently Asked Questions (FAQ) (Medium, 2023).
  87. Abdalla M., Abdalla M., "The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity" in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Association for Computing Machinery, Virtual Event, USA, 2021), pp. 287–297.
  88. Menn J., Nix N., Big Tech funds the very people who are supposed to hold it accountable. The Washington Post, 2023. https://www.washingtonpost.com/technology/2023/12/06/academic-research-meta-google-university-influence/. Accessed 4 January 2024.
  89. Bakshy E., Messing S., Adamic L. A., Exposure to ideologically diverse news and opinion on Facebook. Science 348, 1130–1132 (2015).
  90. Messing S., Westwood S. J., Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Commun. Res. 41, 1042–1063 (2014).
  91. Horwitz J., Does Facebook polarize users? Meta disagrees with partners over research conclusions. The Wall Street Journal, 2023. https://www.wsj.com/articles/does-facebook-polarize-users-meta-disagrees-with-partners-over-research-conclusions-24fde67a. Accessed 8 January 2024.
  92. Kramer A. D. I., Guillory J. E., Hancock J. T., Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. U.S.A. 111, 8788–8790 (2014).
  93. Rajkumar K., Saint-Jacques G., Bojinov I., Brynjolfsson E., Aral S., A causal test of the strength of weak ties. Science 377, 1304–1310 (2022).
  94. Selinger E., Hartzog W., Facebook's emotional contagion study and the ethical problem of co-opted identity in mediated environments where users lack control. Res. Ethics 12, 35–43 (2016).
  95. Verma I. M., Editorial Expression of Concern: Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. U.S.A. 111, 10779 (2014).
  96. National Academies of Sciences, Engineering, and Medicine, Reproducibility and Replicability in Science (The National Academies Press, Washington, DC, 2019), p. 218, 10.17226/25303.
  97. Lazer D., et al., Computational social science. Science 323, 721–723 (2009).
  98. Scheufele D. A., Krause N. M., Science audiences, misinformation, and fake news. Proc. Natl. Acad. Sci. U.S.A. 116, 7662–7669 (2019).
  99. Lazer D., et al., Combating Fake News: An Agenda for Research and Action (Harvard Kennedy School, Shorenstein Center on Media, Politics and Public Policy, 2017), vol. 2.
  100. de Laat P. B., Algorithmic decision-making based on machine learning from big data: Can transparency restore accountability? Philos. Technol. 31, 525–541 (2018).
  101. Bao L., et al., How institutional factors at US land-grant universities impact scientists' public scholarship. Public Underst. Sci. 32, 124–142 (2023).
  102. Calice M. N., et al., Public engagement: Faculty lived experiences and perspectives underscore barriers and a changing culture in academia. PLoS ONE 17, e0269949 (2022).
  103. Calice M. N., et al., A triangulated approach for understanding scientists' perceptions of public engagement with science. Public Underst. Sci. 32, 389–406 (2023).


Data Availability Statement

There are no data underlying this work.

