PLOS One. 2024 Feb 16;19(2):e0297808. doi: 10.1371/journal.pone.0297808

Beyond partisan filters: Can underreported news reduce issue polarization?

Curtis Bram 1,*
Editor: Yongjun Zhang2
PMCID: PMC10871475  PMID: 38363749

Abstract

While many news outlets aim for impartiality, 67% of Americans perceive their news sources as partisan, often presenting only one side of the story. This paper tests whether exposing individuals to news stories their political adversaries focus on can mitigate political polarization. In an experiment involving a real-world political newsletter—sent to participants who had opted to receive news that uncovers media biases—exposure to a specific story about refugee policy led respondents to reassess their positions. This reevaluation changed their stances on the issue and reduced the ideological distinctions they made between Democrats and Republicans. These findings underscore the need for future studies to untangle the specific circumstances where cross-partisan exposure can alter political attitudes.

Introduction

The motto of Fox News reads “Fair and Balanced”, and that of The New York Times is “All the News That’s Fit to Print.” Journalists and news sources often claim to be neutral. However, the public is largely skeptical of these claims—a 2020 Pew Research poll finds that a striking 67% of Americans think their preferred news sources present facts with a partisan slant. These survey responses reveal a widespread awareness of partisan biases in news coverage. In addition to this awareness, researchers find that partisan media can contribute to polarization [14]. If people suspect that their preferred sources are biased, then can revealing coverage biases reduce issue polarization?

When a Republican only watches Fox News, or a Democrat only watches MSNBC, that person receives their news from like-minded sources (AllSides rated those sources as right- and left-leaning, respectively, in 2023). This person is then in a news “echo chamber”: an environment in which people primarily or exclusively consume content from politically aligned sources. The potential problem is obvious: if right-leaning sources highlight all of the mistakes the left is making, and left-leaning sources highlight all of the mistakes the right is making, then both Republicans and Democrats will have completely divergent (and possibly wrong) views of each other. In fact, recent research shows that focusing only on the issues that politicians disagree on causes people to misperceive the viewpoints of Democrats and Republicans, promoting polarization [5]. This finding suggests that prompting news consumers to broaden their focus—meaning to break out of their news echo chambers—can reduce polarization.

There has been research on echo chambers on social networks [6]. Experimental tests of potential echo chamber effects often use simulated information environments, so it is difficult to know whether these findings generalize to a real-world environment [7, p. 138]. To rectify that issue, a component of the 2020 Election Research Project reduced real-world exposure to like-minded sources on Facebook and found that doing so did not affect polarization or other attitudes [7]. That line of research has focused on social media, probably because social networks give consumers tools to easily select their own information environments.

In addition to more recent work focused on social networks, studies dating back to the 1960s emphasize the importance of selective exposure to political information, when individuals opt to consume news that reinforces their preexisting beliefs [8]. This selective exposure goes beyond social media, contributes to polarization [9, 10], and intensifies before elections [11, 12]. People often select information that fits their worldview, thereby sorting into polarized groups [13, 14].

Because the tendency to consume information from like-minded sources extends beyond social media, it is important to explore how exposing news consumers to information from stories across the partisan divide affects polarization. In this paper, I unobtrusively randomized the content of a real-world political newsletter. The experiment was carried out with a highly selected group of participants who had registered to receive information about stories ignored by left and right-leaning sources. By manipulating the content of these newsletters, I aim to assess the impact of exposure to specific stories on both political issue polarization and individual issue stances.

The results indicate that one of the four stories tested encouraged a shift in perspective. This suggests that, at least in the context of short-term exposure, issue polarization is not solely attributable to a lack of awareness about the topics covered by opposing political sides. These results align with studies of online partisan media that report that exposure to opposing media produces minimal effects [15] or that the effects of partisan media may be overstated [16]. Importantly, recent studies have found mixed evidence on the existence of online echo chambers [17]. Furthermore, politics is not a central part of most people’s lives, and the debate continues over the societal effects of changes in the digital media landscape [18, p. 16]. However, these findings are also consistent with recent research demonstrating that transitioning partisan media consumers to a source with opposing bias can lead to significant shifts in attitudes about some issues [1, p. 9].

Given the overall results, a key takeaway from this work is the variation in the circumstances under which examining partisan selective exposure can impact attitudes. Since at least one story may have caused people to change their minds, future research should explore the conditions that determine the extent and manner in which polarized media influences political discourse and outcomes. By identifying and understanding these nuances, we may be better equipped to address the challenges posed by partisan media in contemporary society.

Experimental design

To examine how information about media coverage biases affects people’s attitudes toward political issues, I used real-world treatments delivered through an emailed newsletter to which participants had already subscribed. These treatments were incorporated into Ground News’s weekly “Blindspot Report,” which showcases stories from the preceding week that received minimal attention from one political side or the other. At the time the experiment was conducted, the newsletter was distributed to approximately 115,000 subscribers on Tuesday evenings. All Blindspot Report readers had either self-subscribed or provided their email when signing up to use Ground News’s mobile application, so every participant in this study was a Ground News subscriber who had willingly elected to receive a newsletter focused on illuminating overlooked stories from various partisan viewpoints.

Although there is currently no way to definitively compare Ground News’s subscribers to the population, the company’s approach to news coverage suggests that its subscribers possess an intrinsic interest in exploring narratives beyond conventional partisan lines. Consequently, they could be more receptive to persuasion and might exhibit greater flexibility in their attitudes and beliefs than the general populace. Furthermore, their subscription to a service that reveals overlooked stories indicates a level of engagement with broader political news that may not be as prevalent among the general public. This distinguishing characteristic of the sample—their openness to cross-cutting information—may influence the study results and shape how these respondents react to different types of stories they receive. This sample may be a particularly informative group for exploring questions of persuasion and attitude change in the political sphere. At the same time, the uniqueness of the sample means that it is not possible to generalize the results of this experiment to the population.

At the time the experiment was run, Ground News processed more than 50,000 news articles each day, utilizing proprietary natural language processing algorithms to cluster articles from numerous news outlets into unified news stories to reveal coverage biases. The company rates bias at the source level (for instance, Fox News is classified as right-leaning and the New York Times as left-leaning). To develop these bias ratings, Ground News averages ratings from three independent, nonpartisan media monitoring organizations (AllSides, Ad Fontes Media, and Media Bias Fact Check). The company takes that average rating and maps it onto a seven-point ideological spectrum, with sources ranging from strongly left-leaning to strongly right-leaning.
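The averaging-and-mapping step just described can be sketched in a few lines. This is a hypothetical illustration only: the numeric −3..+3 scale, the bucket labels, and the rounding rule are assumptions for exposition, not Ground News’s actual implementation.

```python
# Illustrative sketch of averaging three monitors' bias ratings and
# mapping the mean onto a seven-point spectrum. Scale and labels are
# assumptions, not Ground News's real method.

SPECTRUM = [
    "strongly left", "left", "lean left", "center",
    "lean right", "right", "strongly right",
]

def aggregate_bias(ratings):
    """Average ratings on a -3..+3 scale (negative = left-leaning)
    and map the mean onto the seven-point spectrum."""
    mean = sum(ratings) / len(ratings)
    # Round to the nearest bucket and shift -3..+3 into index 0..6.
    idx = min(6, max(0, round(mean) + 3))
    return SPECTRUM[idx]

# e.g. three monitors rate a source -3, -2, and -2 (all left of center)
print(aggregate_bias([-3, -2, -2]))  # "left"
```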

To run the experiment, I selected four breaking stories from July 14th, 2021, to July 18th, 2021, that received limited coverage from either conservative- or liberal-leaning media. These issues received most of their attention from partisan sources. To select these stories, I deviated from Ground News’s usual approach to story selection. Since the stories had to be chosen just a day before the first part of the survey was administered, I based the selection on the political importance of the issues at the time the experiment was conducted. Selecting stories in this way may have affected the experimental results. This approach to story selection could be systematized in future work, but was necessary due to the constraints imposed by using the real-world newsletter in the experiment. For example, as demonstrated in Fig 1, while CNN covered the story about the Biden administration’s evaluation of the origins of Covid-19, most of the sources that chose to discuss the story leaned right. One explanation is that left-leaning outlets saw the story as a policy reversal potentially harmful to President Biden and therefore chose to ignore it.

Fig 1. Example of a story that received differential coverage from liberal- and conservative-leaning sources, and which was also included as an experimental treatment.


Reprinted from the original under a CC BY license, with permission from Ground News, original copyright 2023. Please note that the original figure included a photo that was omitted in this paper for copyright reasons.

The stories overlooked by the left included the Biden administration’s decision to return refugees fleeing Cuba by sea (9% left-leaning coverage) and the Biden administration’s assessment that the virus causing the Covid-19 pandemic is equally likely to have originated from a lab leak as from direct contact with animals (13% liberal coverage). Right-leaning sources neglected Republican Governor Spencer Cox’s statement that vaccine misinformation is lethal (10% conservative coverage) and the leak of Kremlin papers revealing Putin’s plot to install Trump in the White House (25% coverage from the right). Fig 1 shows an example of a story from a randomly assigned newsletter that was predominantly ignored by left-leaning media. Due to the design of Ground News’s newsletter, treatment stories encapsulate information about the story itself and the partisan breakdown of coverage.

On Monday, July 19th, subscribers to the Blindspot Report were invited to take a baseline survey. This survey covered all four issues, along with respondents’ ideology, country of origin, and, for those from the United States, partisanship. All respondents read and agreed to an informed consent form before starting each survey. This study received approval from Duke University’s Institutional Review Board on July 8, 2021 (protocol number: 2021–0598) and did not involve any deception. Of the initial 2,372 respondents, 1,861 reported an email address corresponding to a Blindspot Report subscriber. This discrepancy likely arises because respondents self-reported the email address at which they would be contacted for the recontact study, and some entered a different address from the one used to subscribe to Ground News’s content.

This experimental design aligns with previous work advocating that field experiments employ repeated measures designs with samples drawn from known populations [19]. Questions about the four issues remained consistent throughout both survey waves and were structured identically across the issues. The rationale behind this pre-post approach was to enhance precision, and previous work validates that these designs achieve this without biasing respondents [20]. There is little evidence that respondents discern the purpose of experimental studies and alter their behavior accordingly (commonly called demand effects) in political science and economics experiments [21, 22].

Both iterations of the newsletter were sent to all participants on July 20th, 2021. The following day, all respondents from the initial wave were invited by email to participate in a follow-up survey that was substantially identical to the first wave but incorporated additional demographic questions, such as age, sex, race, and education. The final sample comprised the 1,234 individuals who received treatment newsletters and completed both surveys. The average respondent was 44 years old and had a Bachelor’s degree. Among the respondents, 82% identified as white and 74% were men. Of the 921 US-based respondents, 222 (24%) identified themselves as Republicans, 383 (42%) as independents, and 316 (34%) as Democrats. Respondents were evenly distributed across conditions.

Results

This research examined four significant news stories: Cuban refugees arriving by sea, vaccine misinformation, the origins of the Covid-19 pandemic, and Russian election interference. All dependent variables were measured before and after exposure to the news stories, and the change in responses serves as the dependent variable for all analyses. I focus on four attitudes toward each issue, which represent all the measured outcomes in the experiment: the personal importance individuals place on the issue, their stance on the issue, their beliefs about the extent of polarization among politicians on the issue, and their approval of Democratic and Republican politicians’ handling of the issue. I coded changes in issue positions so that a positive change means that the respondent moves his or her position toward the perspective implied by the story. For example, a positive change in position on the vaccine misinformation issue means that the respondent wants the government to exert more effort to combat vaccine misinformation. Of course, describing these changes as positive or negative in this study does not imply anything about the normative benefits or costs of these positions.
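The sign convention for these change scores can be made concrete with a short sketch. The story keys and direction flags below are illustrative assumptions, not the study’s actual codebook.

```python
# Hypothetical sketch of the pre-post change coding described above.
# Direction flags are assumptions: +1 means higher raw post-treatment
# values already point toward the story's implied perspective; -1 means
# the raw scale runs the opposite way and must be flipped.
STORY_DIRECTION = {
    "cuban_refugees": +1,   # positive = more welcoming toward refugees
    "vaccine_misinfo": +1,  # positive = more effort against misinformation
    "covid_origins": -1,    # example of a scale that must be flipped
}

def position_change(pre, post, story):
    """Signed pre-post change, oriented so positive = toward the story."""
    return STORY_DIRECTION[story] * (post - pre)

print(position_change(pre=3, post=5, story="cuban_refugees"))  # 2
print(position_change(pre=4, post=2, story="covid_origins"))   # 2
```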

The Online S1 Appendix (Section 6.1) reports the experimental results for each possible attitude-issue combination. I find little evidence of uniform change in response to these four stories, with no p-value less than a Bonferroni-corrected significance threshold of 0.003125—the correction compensates for the fact that there are 16 total attitude-issue combinations. But these average effects cannot reveal much about how people truly responded to the stories. The reason is that the Blindspot Report highlights stories that partisan media consumers would otherwise miss. Therefore, what matters is whether the stories fall into someone’s Blindspot and are thus likely to challenge people’s perspectives. For example, Republicans and Democrats may change their attitudes in completely different ways when they see a story indicating that President Biden is less welcoming toward Cuban refugees than they likely expected.
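The corrected threshold above is simple arithmetic: a family-wise alpha of 0.05 divided across the 16 attitude-issue combinations.

```python
# Bonferroni-corrected significance threshold for 4 attitudes x 4 issues.
alpha = 0.05
n_tests = 4 * 4   # 16 attitude-issue combinations
threshold = alpha / n_tests
print(threshold)  # 0.003125
```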

To learn about how people respond when they receive a story that challenges their own ideological priors, I pool the issues together and focus on changes in the four measured attitudes, aggregating the four issues so that there are then four total tests. I explore the influence of the type of story received on the change in response for each question type. This analysis accounts for clustering within respondents using robust standard errors that also allow for potential heteroskedasticity.

I categorize respondents based on the type of story they received. First, the baseline category (not receiving a story) corresponds to the over-time change in opinion, because I have pre-post data on respondents’ attitudes for both the stories that they did and did not receive. This categorization is important because attitudes towards each issue may have changed over the days the study was conducted, perhaps because of incidental exposure to stories from other sources. Next, American partisan respondents can receive a “Blindspot” story, such as when Democrats read about Biden’s policy toward Cuban refugees or the origins of the Covid-19 pandemic. On the other side, a Blindspot for Republicans is when they read about vaccine misinformation or Russia’s election interference. Partisan respondents can also receive an “in-partisan” story. Those stories are the reverse of the Blindspot story—Democrats read about vaccine misinformation and Russia while Republicans read about Cuban refugees and the origins of Covid-19. In-partisan stories can be considered analogous to traditional partisan media. When people receive these stories, they are often reinforcing their original beliefs. Finally, American independents and those located outside the U.S. can receive stories.
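The categorization just described can be summarized as a small decision rule. The story keys and group labels below are illustrative shorthand for the categories in the text, not variable names from the replication data.

```python
# Sketch of the respondent-story categorization described above.
# Labels and story keys are illustrative assumptions.

LEFT_BLINDSPOTS = {"cuban_refugees", "covid_origins"}    # ignored by the left
RIGHT_BLINDSPOTS = {"vaccine_misinfo", "russia_papers"}  # ignored by the right

def categorize(partisanship, story, us_based=True):
    """Assign a respondent-story pair to one of the analysis categories."""
    if story is None:
        return "baseline"  # over-time change with no story received
    if not us_based:
        return "non_us_received_story"
    if partisanship == "independent":
        return "independent_received_story"
    if partisanship == "democrat":
        return "blindspot" if story in LEFT_BLINDSPOTS else "in_partisan"
    if partisanship == "republican":
        return "blindspot" if story in RIGHT_BLINDSPOTS else "in_partisan"

print(categorize("democrat", "cuban_refugees"))    # "blindspot"
print(categorize("republican", "cuban_refugees"))  # "in_partisan"
```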

Table 1 shows the results of this analysis considering the type of people and the type of stories they receive. Again, the data are aggregated into long form to reflect the fact that everyone received four stories and answered questions about all four attitudes. All respondents fit into one of the included categories, reflecting their receipt of a story and their own position relative to that story. In examining the table, several findings deserve particular attention. When no story is presented, which serves as the change-over-time baseline, there is a small but statistically significant increase in issue polarization (0.08, p < 0.05) and approval of politicians (0.01, p < 0.05). For Democrats who received a Blindspot story, issue polarization appears to decrease (−0.07, p < 0.05), and there are no appreciable shifts in other attitudes. Independents slightly adjust their issue positions (0.02, p < 0.05) when exposed to a story, suggesting some receptivity to change. Meanwhile, Republicans who encountered in-partisan stories also slightly but significantly adjusted their positions (−0.03, p < 0.05). The metrics on importance and approval mostly remain stable across different treatments. Across all models, the share of variance explained by the experimental treatments is very low. Overall, these data indicate that the type of story can exert targeted, although modest, impacts on select political attitudes.

Table 1. How respondents in each group respond to the aggregated issues.

                                 Issue polarization   Positions        Importance       Approval
Baseline (no story)              0.08*                −0.00            −0.00            0.01*
                                 [0.07; 0.09]         [−0.01; 0.01]    [−0.01; 0.00]    [0.00; 0.02]
Democrat received Blindspot      −0.07*               −0.00            0.01             −0.00
                                 [−0.10; −0.03]       [−0.03; 0.02]    [−0.01; 0.03]    [−0.04; 0.03]
Democrat received in-partisan    0.01                 0.01             0.00             −0.02
                                 [−0.02; 0.04]        [−0.01; 0.03]    [−0.01; 0.01]    [−0.05; 0.02]
Independent received story       −0.00                0.02*            0.01             −0.02
                                 [−0.02; 0.02]        [0.00; 0.04]     [−0.01; 0.02]    [−0.04; 0.01]
Non-U.S. respondent received     −0.00                0.00             −0.00            0.01
                                 [−0.03; 0.02]        [−0.02; 0.02]    [−0.02; 0.01]    [−0.01; 0.04]
Republican received Blindspot    0.04                 0.02             0.01             −0.02
                                 [−0.00; 0.09]        [−0.01; 0.05]    [−0.02; 0.04]    [−0.06; 0.03]
Republican received in-partisan  −0.01                −0.03*           0.02             −0.01
                                 [−0.05; 0.03]        [−0.06; −0.00]   [−0.00; 0.05]    [−0.06; 0.03]
R2                               0.00                 0.00             0.00             0.00
Adj. R2                          0.00                 0.00             −0.00            −0.00
Num. obs.                        4921                 4934             4935             4921
RMSE                             0.28                 0.22             0.17             0.29
N Clusters                       1233                 1234             1234             1233

*Null hypothesis value outside the 95% confidence interval.

These aggregated results indicate that Democrats may feel like there is less issue polarization when they receive a Blindspot story and that Republicans may adjust their positions in response to partisan stories. Because those results are aggregated, they combine all stories. To understand exactly which stories drive these changes, it is important to disaggregate the analysis and examine each story separately. When looking at specific stories, I find that three of the four stories did not change people’s minds on these issues. However, the story about the Biden administration’s policy toward Cuban refugees significantly reduced issue polarization among Democrats and significantly increased Republicans’ favorability toward welcoming these refugees (Fig 2). These findings are substantively consistent when restricting the analysis to the 642 respondents who Ground News confirmed had opened the newsletter by the time the recontact survey request was sent: issue polarization among Democrats receiving the Cuban story (β = −0.16, S.E. = 0.04, P < 0.001); attitudes toward welcoming more Cuban refugees among Republicans receiving the Cuban story (β = 0.08, S.E. = 0.05, P < 0.07).

Fig 2. Change in people’s reported issue polarization (A) and issue positions (B) for the Cuban refugee issue.


Categories are a baseline for the control group that did not receive a story (representing the change in people’s responses to outcome questions across waves), followed by indicators for each possible group. Error bars are 95% confidence intervals; the red line indicates the mean of the baseline category, and error bars that do not cross this line indicate statistically significant deviations from the baseline.

Finally, beyond looking at specific stories, one additional benefit of this design came from the 308 people who completed both the initial and follow-up surveys but provided an email address that did not match an existing Blindspot Report subscriber. These respondents had been assigned to one of the two randomized newsletters, but because their emails matched no current subscriber, they were sent a newsletter featuring none of the stories included in the experiment. As expected, random assignment among the partisans in this untreated group did not significantly affect attitudes.

Discussion

Despite the seemingly high political stakes surrounding Russian election interference, the lab-leak hypothesis, and vaccine misinformation, information on these issues did not appear to sway opinions in this study. However, both Democrats and Republicans reacted to a story about refugee policy. This story differs from the other three in that it describes an unexpected outcome. During the 2020 campaign, the then-candidate Biden had pledged to allow more refugees to settle in the United States [23]. The administration’s apparent decision to impose greater restrictions on these refugees was probably surprising to partisans from both sides. As a result, Democrats seem to report lower levels of observed issue polarization, perceiving their party’s position as less distinct from Republicans’. Conversely, Republicans seem to respond by favoring increased openness to these refugees.

Future work will expand the scope of this research beyond a single week and beyond the four issues studied in this experiment. The manipulation studied in this trial was only done once, and one can imagine different effects were people to consistently receive information about their Blindspots on one or many issues over time. However, even such an experiment would be limited by the fact that most people spend little of their time online engaging with political news; recent work finds that news exposure takes only about 3% of people’s time online [24]. That said, one key aspect of this study’s design lies in its execution among individuals who one would expect to be most receptive to altering their perspectives. All participants in this research had demonstrated their willingness to engage with potentially challenging information outside of the experimental context. Given that each participant had voluntarily opted to receive stories that might cast doubt on their own partisan side, one might anticipate that this group would be especially prone to changing their minds.

A second notable limitation of this experiment is that the results focus exclusively on policy issues. Scholars investigating contemporary politics debate whether polarization has its roots in ideological or identity-based foundations [25, p. 59]. In accordance with this, recent research reveals that warm personal relations between political leaders can reduce partisan hostility, while policy compromise does not have the same effect [26], suggesting that animosity is not primarily driven by policy differences. Similarly, when it comes to online echo chambers, the presence of segregated online groups correlates with negative online interactions [27]. Isolated online groups may be more prevalent among conservatives [28], and the right can have an advantage in social media sharing [29]. But at the same time, banning content can degrade social media platforms [30], and reversing course after relegating certain views to the fringes can affect public opinion [31].

While the Blindspot Report focuses primarily on substantive political stories, it might be assumed that the newsletter would only influence attitudes on policies, mirroring this paper’s focus on issues. However, future research could explore the effects of correcting media Blindspots on attitudes that extend beyond the issues. A grid question asking participants to evaluate the Blindspot Report was included in the recontact wave after all other questions had been answered. On average, respondents felt that the newsletter helped them become aware of others’ biases, understand more about views they disagree with, recognize their own biases, and become more empathetic towards political opponents. These responses encourage further investigation into how policy information can mitigate hostility between partisans.

Conclusion

In conclusion, these results indicate variation in how people respond to different types of stories, a finding that complements recent work attempting to understand how partisan media can influence attitudes [1, 2]. The observed heterogeneity in reactions introduces another layer to the ongoing debate on the role of partisan media in shaping public opinion [3, 4]. The fact that respondents seem to react selectively to certain stories but not others suggests a promising avenue for future research: exploring the kinds of news stories that compel people to reassess their preexisting beliefs. Consistent with this, recent studies have shown that when people are unfamiliar with specific issues, they are more likely to consume and share political information [32]. This suggests that the story about Biden’s refugee policy in the current experiment likely caught respondents off guard, having a significant impact on both issue polarization and policy stances. It is also possible that most of the participants had already heard about election interference, lab leaks, and vaccines, and formed opinions, while the story about Cuban refugees was news to everyone.

Overall, these findings demonstrate that highlighting news stories covered from across the partisan divide can, in certain contexts, reduce polarization and the gaps in issue positions between Democrats and Republicans. Addressing “Blindspots” through this non-confrontational, information-sharing approach may be a promising method to mitigate polarization. Given that covering contentious political issues is unavoidable, this study suggests that those aiming to curb the rise of partisan polarization can do so by emphasizing particularly surprising stories that are overlooked by one political side. Such stories indicate that even for hot-button issues, the partisan divide is not insurmountable. Future studies aiming to reduce polarization could benefit from incorporating surprising political information.

Supporting information

S1 Appendix

(PDF)

Data Availability

Replication data and code are available at: https://doi.org/10.7910/DVN/QLF6MV.

Funding Statement

Duke University provided funding for the incentive for participants to complete the surveys, and I later received a grant from the Institute for Humane Studies to fund the publication of this paper (grant: # IHS017528). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Broockman D. and Kalla J. The manifold effects of partisan media on viewers’ beliefs and attitudes: A field experiment with Fox News viewers. OSF Preprint, 2022.
  • 2. Peterson E. and Kagalwala A. When unfamiliarity breeds contempt: How partisan selective exposure sustains oppositional media hostility. American Political Science Review, 115(2):585–598, 2021. doi: 10.1017/S0003055420001124
  • 3. Martin G. J. and Yurukoglu A. Bias in cable news: Persuasion and polarization. American Economic Review, 107(9):2565–99, 2017. doi: 10.1257/aer.20160812
  • 4. Levendusky M. How Partisan Media Polarize America. University of Chicago Press, 2013.
  • 5. Bram C. The most important election of our lifetime: Focalism and political participation. Political Psychology, 2023.
  • 6. Bessi A., Zollo F., Del Vicario M., Puliga M., Scala A., Caldarelli G., and Quattrociocchi W. Users polarization on Facebook and Youtube. PLoS ONE, 11(8):e0159641, 2016. doi: 10.1371/journal.pone.0159641
  • 7. Nyhan B., Settle J., Thorson E., Wojcieszak M., Barbera P., Chen A. Y., et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature, pages 1–8, 2023.
  • 8. Sears D. O. and Freedman J. L. Selective exposure to information: A critical review. Public Opinion Quarterly, 1967.
  • 9. Stroud N. J. Polarization and partisan selective exposure. Public Opinion Quarterly, 2010.
  • 10. Flaxman S., Goel S., and Rao J. M. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 2016.
  • 11. Peterson E., Goel S., and Iyengar S. Partisan selective exposure in online news consumption: Evidence from the 2016 presidential campaign. Political Science Research and Methods, 9(2):242–258, 2021. doi: 10.1017/psrm.2019.55
  • 12. Allcott H. and Gentzkow M. Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2):211–36, 2017. doi: 10.1257/jep.31.2.211
  • 13. Del Vicario M., Vivaldo G., Bessi A., Zollo F., Scala A., Caldarelli G., et al. Echo chambers: Emotional contagion and group polarization on Facebook. Scientific Reports, 6(1):37825, 2016. doi: 10.1038/srep37825
  • 14. Del Vicario M., Scala A., Caldarelli G., Stanley H. E., and Quattrociocchi W. Modeling confirmation bias and polarization. Scientific Reports, 7(1):40391, 2017. doi: 10.1038/srep40391
  • 15. Guess A. M., Barbera P., Munzert S., and Yang J. The consequences of online partisan media. Proceedings of the National Academy of Sciences, 118(14), 2021.
  • 16. Nelson J. L. and Webster J. G. The myth of partisan selective exposure: A portrait of the online political news audience. Social Media + Society, 2017.
  • 17. Arguedas R., Robertson C., Fletcher R., and Nielsen R. Echo chambers, filter bubbles, and polarisation: A literature review. 2022.
  • 18. Dahlgren P. A critical review of filter bubbles and a comparison with selective exposure. Nordicom Review, 42(1):15–33, 2021. doi: 10.2478/nor-2021-0002
  • 19. Broockman D. E., Kalla J. L., and Sekhon J. S. The design of field experiments with survey outcomes: A framework for selecting more efficient, robust, and ethical designs. Political Analysis, 25(4):435–464, 2017. doi: 10.1017/pan.2017.27
  • 20. Clifford S., Sheagley G., and Piston S. Increasing precision without altering treatment effects: Repeated measures designs in survey experiments. American Political Science Review, 115(3):1048–1065, 2021. doi: 10.1017/S0003055421000241
  • 21. Mummolo J. and Peterson E. Demand effects in survey experiments: An empirical assessment. American Political Science Review, 113(2):517–529, 2019. doi: 10.1017/S0003055418000837
  • 22. de Quidt J., Haushofer J., and Roth C. Measuring and bounding experimenter demand. American Economic Review, 108(11):3266–3302, 2018. doi: 10.1257/aer.20171330
  • 23. Bram C. Expectations for policy change and participation. Public Opinion Quarterly, 2023.
  • 24. Wojcieszak M., von Hohenberg B. C., Casas A., Menchen-Trevino E., de Leeuw S., Goncalves A., et al. Null effects of news exposure: A test of the (un)desirable effects of a ‘news vacation’ and ‘news binging’. Humanities and Social Sciences Communications, 9(1):1–10, 2022.
  • 25. Abramowitz A. The Disappearing Center: Engaged Citizens, Polarization, and American Democracy. Yale University Press, 2010.
  • 26. Huddy L. and Yair O. Reducing affective polarization: Warm group relations or policy compromise? Political Psychology, 2020.
  • 27. Quattrociocchi A., Etta G., Avalle M., Cinelli M., and Quattrociocchi W. Reliability of news and toxicity in Twitter conversations. In International Conference on Social Informatics, pages 245–256. Springer, 2022.
  • 28. Gonzalez-Bailon S., Lazer D., Barbera P., Zhang M., Allcott H., Brown T., et al. Asymmetric ideological segregation in exposure to political news on facebook. Science, 381 (6656):392–398, 2023. doi: 10.1126/science.ade7138 [DOI] [PubMed] [Google Scholar]
  • 29. Gonzalez-Bailon S., d’Andrea V., Freelon D., and De Domenico M. The advantage of the right in social media news sharing. PNAS nexus, 1(3):pgac137, 2022. doi: 10.1093/pnasnexus/pgac137 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Cinelli M., Etta G., Avalle M., Quattrociocchi A., Di Marco N., Valensise C., et al. Conspiracy theories and social media platforms. Current Opinion in Psychology, page 101407, 2022. [DOI] [PubMed] [Google Scholar]
  • 31. Bram C. When a conspiracy theory goes mainstream, people feel more positive toward conspiracy theorists. Research & Politics, 8(4):20531680211067640, 2021. [Google Scholar]
  • 32. Santoro L. R., Assaf E., Bond R. M., Cranmer S. J., Kaizar E. E., and Sivakoff D. J. Exploring the direct and indirect effects of elite influence on public opinion. Plos one, 16(11):e0257335, 2021. doi: 10.1371/journal.pone.0257335 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Francesco Pierri

4 Sep 2023

PONE-D-23-17089: Beyond Partisan Filters: Can Underreported News Reduce Issue Polarization? (PLOS ONE)

Dear Dr. Bram,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 19 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Francesco Pierri, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. Thank you for stating the following financial disclosure: 

"The author(s) received no specific funding for this work."

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." If this statement is not correct you must amend it as needed. 

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

4. Please note that in order to use the direct billing option the corresponding author must be affiliated with the chosen institute. Please either amend your manuscript to change the affiliation or corresponding author, or email us at plosone@plos.org with a request to remove this option.

5. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

6. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

7. Please ensure that you include a title page within your main document. You should list all authors and all affiliations as per our author instructions and clearly indicate the corresponding author.

8. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript. 

9. We note that Figure 1 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

(1) You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. 

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

(2) If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. 

If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

10. We note that Figure 1 & Supporting Figures S2 to S5 include an image of a participant in the study.

As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. 

Please amend the methods section and ethics statement of the manuscript to explicitly state that the participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”. 

If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.

11. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 

Additional Editor Comments:

Despite one negative review, I believe there is still room to improve the quality of the manuscript following the reviewers' comments and to meet the journal's standards of high quality.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The study examines the effects of comprehensive information about coverage biases on mitigating the impact of partisan media on polarization. To do so, the author seeks attitudinal changes towards four breaking news stories typical of those overlooked by partisan media sources. More in detail, the experiment aimed to examine biases in understanding party positions using political stories (treatments) embedded within an emailed newsletter that respondents were neither compelled nor prompted to read. This newsletter clusters articles from diverse outlets into single stories to underscore coverage biases. Bias is rated at the source level using an average rating from 3 independent non-partisan media monitoring organizations.

The author analyzed four breaking stories from July 14th, 2021, to July 18th, 2021, that received limited coverage from conservative or liberal-leaning media. On July 19th, 2,372 subscribers to the Blindspot Report were invited to take a survey that included questions about all four issues, their ideology, country of origin, partisanship, and whether they were from the US. Of these, 1,861 reported an email address matching a Blindspot Report subscriber. Results show that only one of these stories may have mitigated polarization and influenced issue positions, underscoring how crucial it is to explore the conditions that determine the dynamics behind polarization from polarized media.

Despite the attractive experimental design, which included a newsletter analysis, the study lacks the quality that a PLOS paper deserves, leading me to reject it. From an overall perspective, the proposed longitudinal analysis covers only 4 days in 2021, which is too narrow a window for studying users' news diets in relation to their political stance. Furthermore, the introduction could have covered more adequately many aspects described in the study (e.g., echo chambers and polarization in the online ecosystem), and the description of the experimental design lacks important technical details, leaving the paper open to question and not reproducible. Below, you can find the major and minor changes I propose.

I hope these observations will encourage you to improve the work's quality and emphasize its potential better.

Major Changes

1. The introduction would benefit from a description of polarisation and how it relates to traditional and social media. I suggest looking at the works of Quattrociocchi, Etta, Cinelli, Lazer, De Domenico

2. The description of the newsletter's algorithm needs more technical detail, and, at the same time, none of the previously cited studies used such a newsletter.

3. The lack of this information undermines the soundness of the experiment. Please provide a better explanation of how this algorithm works and, if possible, include other studies that have already used this newsletter.

4. Is bias rated on a discrete or continuous scale? Please provide a more detailed explanation of this rating.

"Ground News' content suggests that its subscribers have an intrinsic interest in exploring narratives beyond conventional partisan lines." Is this sentence supported by their statement? Please provide a link to it.

5. For transparency purposes, the paper would benefit from a distribution of the biases of the newsletter on a broad timespan to quantitatively assess the objectiveness of the company itself

6. "Consequently, they could be more open to persuasion and exhibit more flexibility in their attitudes and beliefs than others." The lack of cited studies makes this sentence just an opinion of the author. I suggest citing studies evidencing how the lack of a political leaning leads to greater openness to persuasion. Furthermore, the part "might exhibit more flexibility in their attitudes and beliefs than others" is too general. What does having a flexible attitude or belief mean? Please also provide studies that compare those with a "flexible attitude and belief" to those who do not have one.

7. "I selected four breaking stories from July 14th 2021 to July 18th 2021 that received limited coverage from either conservative or liberal-leaning media." What does limited mean? Is there a coverage threshold or measure that objectively defines a story as preferred by a specific political side?

8. "Of the initial 2,372 respondents, 1,861 reported an email address matching a Blindspot Report subscriber." I do not get the math here. If subscribers to the Blindspot Report were invited to take the survey, how can only 1861 of them be subscribers? They should have been 2372. Please, provide a better description of this.

9. Was the survey composed of open questions or closed ones? In the latter's case, what were the questions' interval ranges?

"In conclusion, these findings demonstrate that dismantling information echo chambers may, in certain contexts, reduce polarization and the gaps in issue positions between Democrats and Republicans." This sentence does not properly engage with the concept of echo chambers. Here, the study does not account for the topology of the echo chambers. Therefore, no statement can be made about echo chambers, only about polarization itself.

Minor Changes

1. The motto of Fox News reads, Fair and Balanced," should have an opening quote instead of a comma

2. The New York Times proclaims, All the News That's Fit to Print." should have an opening quote instead of a comma. Moreover, the period at the end of the quote should be moved after the closing quote

3. "This raises the question: Can" should not have a capital C

4. "[…] either conservative- or liberal-leaning media." should not have a dash after conservative

5. "[…] were from the United States, their partisanship." should have the citing number before the period

6. The sentence "But ATEs are not the only useful way to analyze the data." should be removed since it looks too journalistic. The concept and motivations are already described in the following sentences. Just reframe them to make the reading smoother

7. Page 12, "blindspot": please correct the opening quotation mark

Reviewer #2: This manuscript evaluates the results of a field study to assess whether receiving information about issue bias may reduce political polarization and influence issue position. In the experiment, self-selected participants were exposed to coverage bias information on four underreported news topics in a newsletter, and their issue position and polarization stance were measured using a pre/post repeated measures survey design. I find this to be an important and compelling study, which in its current form has some clarity issues and technical issues with how the hypotheses are set up and the results are discussed.

On the whole, I find this to be an interesting research question and a compelling study design. Many lab studies have shown the impacts of specific news interventions on issues like misinformation and polarization, but fewer are able to replicate them in the field. This is thus a valuable contribution as a field experiment, which the authors rightfully add to the list of field studies where the effect is harder to demonstrate than in a more controlled setting.

Additionally, I find that the focus on people who are already willing to have their minds changed is particularly interesting. As the authors state, this audience may be more malleable than the average population, which makes this a relevant population to target where we might expect significant effects.

The work is well-positioned within the extant literature. The authors may wish to acknowledge that the manipulation, being a one-time, small change, may not have had a large impact on such important outcomes as issue polarization. For a related discussion, I find Wojcieszak’s “Null effects of news exposure: a test of the (un)desirable effects of a ‘news vacation’ and ‘news binging’” to be a good account of how news constitutes a small proportion of screen time and effects may thus be more challenging to measure in the field.

While I find this paper to be well presented and intelligible, there were a few areas I think should be clearer. As an improvement, I would suggest that the author make their hypotheses clearer — on page 7, they refer to “16 potential hypotheses under investigation,” but these are never explicitly outlined in the setup of the paper. The lack of clear hypotheses also makes it challenging to interpret the second set of analyses presented from page 12 onwards — are these additional, interesting exploratory analyses, or are they new contributions of this work to the literature? For additional clarity, the significant p-values used in Tables 1–4 should be aligned with the p-values discussed in the text (and p-values <0.1 should not be highlighted, in my opinion). Also, the p-value thresholds used in Table 5 should be stated in the table. There are also parts where the language feels too informal, such as a paragraph which begins with “But” on page 12 and uses of contractions (e.g. “doesn’t”). There are also missing quotation marks in the first sentence. While the experiment is well described on page 6 and Figure 1 aids comprehension, the setup is sufficiently complex that a reader would benefit from an additional figure or table to clarify which participants were exposed to which treatments.

I do not think the statistical analysis is set up in such a way as to make the findings most generalizable and interpretable. The author sets themselves up to assess 16 potential hypotheses using Bonferroni-adjusted p-values. The resulting threshold is rather stringent, and so the authors report results under both the Bonferroni-adjusted criteria and traditional 0.05 criteria. Overall, this makes the results challenging to interpret and detracts from the contribution of this paper. It is then unclear whether we should take this study as a) a study that continues a trend of null findings among field studies on news and their impact on attitudes, or b) a study with modest results that should be assessed and discussed as a contribution to the literature. Currently, the results section reads as a little bit of both, which makes it challenging to assess a clear contribution.

I believe one way to mitigate this would be to lead with an aggregated hypothesis and analysis that better sets up the broader question this paper is trying to answer. The broad question is whether exposure to information about the biased coverage of a news story can change perceptions of issue importance, issue position, issue polarization, and issue approval difference. Currently, the central hypotheses are split into 16 parts: 4 per issue x 4 per outcome, which significantly reduces the Bonferroni-adjusted p-value under which the author would accept a significant result. It is not clear to me why the author would lead with the issue-specific findings when they are looking for an overall trend. I might instead consider foregrounding an analysis more similar to that from Table 5 as the main contribution, which has the outcome variable aggregated across all four issues. This would reduce the number of hypotheses that need to be tested and adjusted for. The specific story breakdowns would then serve more as an exploratory analysis of how this effect might look different for different issues. Again, I think a clear presentation of the hypotheses would also make the contributions of the analyses easier to assess.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Yongjun Zhang

11 Dec 2023

PONE-D-23-17089R1: Beyond Partisan Filters: Can Underreported News Reduce Issue Polarization? (PLOS ONE)

Dear Dr. Bram,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Both reviewers are positive about the revision, but it still needs some revisions. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 25 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Yongjun Zhang

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: I thank the author for incorporating the comments from the reviewers. I find the manuscript much improved, with a few remaining areas for improvement. Overall, I think the paper and findings are both clearer and cleaner, which is fantastic. The new framing around the results makes this a neat and targeted jumping-off point for future research, and I admire the author for having run a field experiment at scale. I also thought the discussion was significantly strengthened and better helped me understand how to interpret these findings in context.

The analysis is much clearer and I think the de-emphasis of the ATE analyses serves to hone the central message of the paper. My one remaining concern, which I initially missed in the first revision, is that the R^2 and adjusted R^2 appear to be near-zero for all models in Table 1. While the R^2 is of course not the only way to assess goodness-of-fit, and does not impact the coefficient effect significance, such a low R^2 value is quite concerning to me. I am wondering if the author could explain why they believe the R^2 value to be so low? Perhaps they could try to account for demographic variables as an additional control? This can be added to the appendix if they do not lead to a substantially different conclusion.

While overall I like the changes in the introduction and discussion, recent studies about echo chambers/filter bubbles have found mixed evidence that people truly live in online echo chambers. (e.g. “Echo chambers, filter bubbles, and polarisation: a literature review”, by Arguedas et al. 2022, or Dahlgren et al., 2021, for a comparison between filter bubbles and selective exposure theories, “A critical review of filter bubbles and a comparison with selective exposure.”) Though I think it is fine to motivate the work through the echo chamber lens, I think more hedging/critical language when talking about echo chambers, or just a nod to the fact that these theories are contested would serve the paper well. Alternatively (or in parallel), strengthening the selective partisan exposure piece of the motivation (lines 31-36) would also address these concerns.

A small, personal nitpick is that in the paragraph lines 195-208, the irreferential word “This” is used five times to start a new sentence. This practice generally makes arguments harder to follow (e.g. the difference between starting the current sentence with “This makes arguments hard to follow” vs “This practice makes arguments harder to follow.”)

Reviewer #3: I carefully read the resubmitted version and the response memo to the editors and reviewers. The revised manuscript provides a clear statement of its motivation, research design, and interpretation of the results. I believe that the author handled most of the reviewers’ comments well. Since I did not review the original manuscript, I provide two additional comments based on the resubmitted version.

First, I think the author should be more cautious about the selection of news articles serving as instruments. Reviewer 1 provided some comments on the newsletter’s algorithm and the selection of news articles. I agree with these comments. After looking at the response letter, I doubt that the author responded to these comments well. The author should provide details on how the newsletter rated the news stories using “natural language process”. This is crucial to the paper because the ideological spectrum is the basis for designing the experimental instruments. Also, the author argued that “the selection of stories was ultimately based on my judgement about issue salience at the time, a point which I am now clearer about in the paper”. I would argue that this selection criterion could also affect the experimental results, because a different set of news stories could possibly lead to different results. I would suggest that a clearer and more detailed statement of the limitations of the selection of news articles is necessary.

Second, some statements in the article are not convincing. For instance, the author stated “one can imagine much larger effects were people to consistently receive information about their Blindspots on one or more issues over time”. This statement should be grounded in previous research concluding that repeated exposure to biased or selected information has larger effects on attitudes. Otherwise, the statement is not convincing.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.


Author response to Decision Letter 1


8 Jan 2024

Dear Editors and Reviewers,

Thank you for the opportunity to revise and resubmit my manuscript and for the detailed comments.

Turning directly to the comments from Reviewer #2, I completely agree that the low R^2 is a concern when interpreting these results. As suggested by this reviewer, the Appendix now includes an additional section which incorporates demographic controls into the main analysis. I do not find that this makes a meaningful difference in the share of variation explained. I now flag the low R^2 numbers in the main text to ensure readers are aware of this issue.

I also agree that my discussion of echo chambers and selective exposure did not incorporate important nuance in research on these topics. The citations suggested by this reviewer were helpful and I now incorporate both in the introduction. Finally, I have corrected the overuse of the word “This” in the paragraph mentioned.

Turning to the comments from Reviewer #3, I agree that using my judgement to select the stories may have affected the results, and that a more systematic way of selecting stories would be a natural next step for this experimental design. I have now clearly noted this in the manuscript so that readers are aware of this important limitation. Ground News’s algorithms are proprietary, a point which I also now note in the manuscript.

Finally, I have gone through the manuscript again and have attempted to soften the language this reviewer found unconvincing. I revised the speculation about effects in a hypothetical longer-term study to say that effects may be different, not necessarily “much larger” as I had before.

Once again, thank you all for your time and attention and I am happy to answer any questions. I look forward to hearing from you.

Sincerely,

Curtis Bram

Assistant Professor

UT Dallas

Decision Letter 2

Yongjun Zhang

15 Jan 2024

Beyond Partisan Filters: Can Underreported News Reduce Issue Polarization?

PONE-D-23-17089R2

Dear Dr. Bram,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Yongjun Zhang

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

After reviewing your paper and response memo, I believe this paper meets PLOS One's publication criteria and can make a great contribution to the current literature. 

Reviewers' comments:

Acceptance letter

Yongjun Zhang

7 Feb 2024

PONE-D-23-17089R2

PLOS ONE

Dear Dr. Bram,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Yongjun Zhang

Academic Editor

PLOS ONE

