PLOS ONE. 2024 Oct 9;19(10):e0310216. doi: 10.1371/journal.pone.0310216

The illusion of information adequacy

Hunter Gehlbach 1,*, Carly D Robinson 2, Angus Fletcher 3,*
Editor: Gal Harpaz
PMCID: PMC11463766  PMID: 39383156

Abstract

How individuals navigate perspectives and attitudes that diverge from their own affects an array of interpersonal outcomes, from the health of marriages to the unfolding of international conflicts. The finesse with which people negotiate these differing perceptions depends critically upon their tacit assumptions—e.g., in the bias of naïve realism people assume that their subjective construal of a situation represents objective truth. The present study adds an important assumption to this list of biases: the illusion of information adequacy. Specifically, because individuals rarely pause to consider what information they may be missing, they assume that the cross-section of relevant information to which they are privy is sufficient to adequately understand the situation. Participants in our preregistered study (N = 1261) responded to a hypothetical scenario in which control participants received full information and treatment participants received approximately half of that same information. We found that treatment participants assumed that they possessed comparably adequate information and presumed that they were just as competent to make thoughtful decisions based on that information. Participants’ decisions were heavily influenced by which cross-section of information they received. Finally, participants believed that most other people would make a similar decision to the one they made. We discuss the implications in the context of naïve realism and other biases that shape how people navigate differences of perspective.

Introduction

You don’t know what you don’t know.

        --Socrates

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

        --Donald Rumsfeld

A core challenge in navigating the social world is how to negotiate differences of perception, attitude, and understanding [1]. Whether the context entails romantic couples, bosses and employees, or heads of state brokering multi-lateral decisions, everyone experiences instances in which different actors maintain divergent perspectives despite exposure to the same information.

The tacit assumptions that people bring to situations powerfully influence the resolution of these differences of perspective as well as the ongoing relationships between the parties. For example, psychologists have shown evidence for a default belief that one’s personal, subjective views represent objective understandings of reality [2]. This notion of naïve realism [3, 4] represents one of many biases that exemplifies how our baseline assumptions—in this case, the assumption that one sees objective reality—can wreak havoc on attempts to navigate different perspectives. As naïve realists, individuals assume that rational others will agree with their reactions, behaviors, and opinions. When others disagree, presumably these other people (a) have been exposed to different information, (b) are unwilling or unable to logically proceed from objective evidence to a reasonable conclusion, or (c) are biased [4].

Naïve realism studies often focus on the divergent construals of prominent conflicts such as the abortion debate [3], affirmative action [5], the Israeli-Palestinian relationship [6], and other well-known current events [7]. Thanks to our neural architecture, perceivers conflate their subjective interpretation of the situation with objective reality (e.g., presuming one’s opinion to be a consensus fact). In short, our brain’s default setting makes it hard to view others’ construals as reasonable [8].

We propose that an equally important tributary feeding the river of misunderstanding is the illusion of information adequacy—people tacitly assume that they have adequate information to understand a situation and make decisions accordingly. Yet, individuals often have no way of knowing what they don’t know. For less prominent interpersonal issues, many misunderstandings arise in contexts where information may be less accessible to each party. From Socrates to Rumsfeld, people often acknowledge that there is much that they do not know, including a meta-awareness of “unknown unknowns.” We argue that another default setting—comparable to naïve realists’ assumptions that they see objective reality—is that people fail to account for the unknown unknowns. Accordingly, they navigate their social worlds confidently assuming that they possess adequate information. They form opinions, reify values, and behave in ways that suggest that they have sufficient information to make rational decisions—often without pausing to wonder how much they do not know.

For example, many drivers have pulled up behind a first car at a stop sign only to get annoyed when that car fails to proceed when traffic lulls at the intersection. Drivers of these second cars may assume they possess ample information to justify honking. Yet, as soon as a mother pushing her stroller across the intersection emerges from beyond their field of vision, it becomes clear that they lacked crucial information that the first driver possessed. This example highlights the distinction from naïve realism. In the driving example, both parties construe the situation identically—nobody should drive over pedestrians pushing baby strollers. Rather, it is the second driver’s tacit assumption that they have “enough” information that precipitates the misunderstanding.

While seemingly trivial, this everyday example epitomizes a pervasive phenomenon. People constantly judge others’ actions and beliefs because they assume that they possess enough relevant information to make fair evaluations. As a result, perceivers misunderstand others’ attitudes, opinions, and beliefs even though these perceivers might share the same perspective were they privy to the same information.

To demonstrate the illusion of adequate information, we conducted two preregistered experiments in which participants recommended whether to merge two schools or keep them separate—fully documented in our preprint (https://osf.io/preprints/psyarxiv/fdu3m). Given the similar results of both studies, this article focuses on the second, replication experiment because of its larger, more diverse sample, enhancements to our measures, and improved research design.

Our primary goal was (a) to experimentally test whether perceivers assumed that they possessed adequate information—i.e., that they would view the cross-section of information to which they were privy as being sufficiently relevant, important, adequate in quantity, etc. In addition, we expected that participants would: (b) feel competent to make decisions, (c) be strongly influenced by the cross-section of information that they saw, and (d) assume that most others would make similar decisions to the one they made. We also anticipated that, (e) if participants were later exposed to the full array of information, they would continue to endorse their original position [9]. Results provide broad support for our specific hypotheses that, in the context of this experiment, participants maintain the illusion that they have adequate information, and this illusion impacts several downstream outcomes related to decision-making.

Materials and methods

The procedures for both studies were approved by the Johns Hopkins University Human Subjects Committee, and all participants provided written informed consent prior to participation by clicking a button that indicated their consent. Data collection occurred from June 7th to 8th, 2023. All survey measures and the three articles about the potential merging of schools that formed our main experimental manipulation (see the Intervention tab) can be found in our codebook: https://osf.io/pgezc. The data and STATA code for the study are available from https://osf.io/k37dg/.

Participants

We recruited an initial convenience sample of 1501 participants within the United States using the online platform, Prolific. We excluded those who failed an attention check question prior to entering the study (n = 240). Our final sample (N = 1261) had 252 participants in the control group, 503 in the pro-merge condition, and 506 in the pro-separate condition.

The majority male sample (59%) was predominantly White (71%); others identified as Asian/Pacific Islander (9%), Black (11%), or Other (3%), and 8% identified as Hispanic. Participants’ mean age was 39.8 years (sd = 13.6), with a median education level of 3 years of college. Politically, 651 participants identified as liberals, 254 as moderates, and 356 as conservatives.

Procedures

Participants read an article, “Our School Water is Disappearing,” that described a school located in a region where the local aquifer was drying up. Consequently, the school faced a decision to risk staying put (and hope for more rain in the future) or to merge with another school (entailing other risks). We assigned participants to one of three conditions. The control group’s version of the article presented information about seven features of the situation: three arguments described benefits of merging, three identified benefits of remaining separate, and one was neutral. The pro-merge treatment participants’ article presented the three arguments describing the benefits of merging and the neutral piece of information; the pro-separate participants’ article presented three arguments about the benefits of remaining separate and the neutral information. After reading the article and responding to initial questions, we randomly sub-divided each treatment group in half to either: (a) respond to a set of survey questions (regarding their perceptions of information adequacy, decision-making competence, consensus, and other exploratory measures) or (b) to read a second article that exposed them to the remaining arguments to merge or maintain separation of the two schools (i.e., providing them equivalent information to the control group) so that they could update their recommendations as they saw fit. See Fig 1 for details of the study design.

Fig 1. Procedural flow for participants in different conditions.

The additional randomization within each treatment arm of the study allowed us to assess participants’ beliefs about the information that they had (treatments 1a and 2a) and how those beliefs potentially changed after reading the second article (treatments 1b and 2b) without biasing respondents—i.e., had we asked participants how adequate their information was after the first article and then showed them a second article with countervailing information, they may have felt as though the experimenters were trying to trick them. The italicized words in the figure correspond to the names of the variables described in the measures section.

Measures

Four distinct measures comprised the primary outcomes of interest in our five prespecified hypotheses. Our 5-item information adequacy scale (α = .79) assessed the extent to which participants felt as though the information they read in the article was sufficient in quantity, relevant, believable, important, and trustworthy enough to provide an “adequate” understanding of the situation the school was facing.

The 5-item decision-making competence scale (α = .76) measured participants’ perceptions that they could make a competent recommendation regarding the fate of the school based on the article that they read.

The decision to merge (M = .55) outcome was a binary choice. Participants responded to, “What decision would you recommend that the school board make?” Response options were “Merge Prairie View into Landon” or “Keep the schools separate.” For treatment groups 2a and 2b, this measure was asked a second time after these participants were exposed to the remaining arguments in a second article to test our 5th hypothesis.

Our consensus outcome was a single item (M = .65, sd = .16) designed to assess the false-consensus effect [10]. On a 0-to-100 slider, participants responded to, “About what percentage of people do you think would make the same decision as you did?”

To conduct exploratory analyses that could provide convergent data regarding our key outcome measures, several other measures were added. On 0–100 slider bar scales, all participants were asked to rate their confidence in their decision using a single item, “How confident are you that this is the smartest action for the school board to take?”

Treatment groups 1a, 2a, and the control group also responded to the following three items on 0–100 slider bars:

  • “To what extent do you feel that you understand enough of the key details of the situation to make a good decision?”

  • “To what extent do you feel as though you still have questions about important details of Prairie View’s situation?”

  • “How curious would you be to read a second article to learn more about this school’s dilemma?”

For treatment participants who were instead routed to read a second article with the arguments they had not previously seen (i.e., groups 1b and 2b), we reassessed their recommendations using our decision to merge, consensus, and confidence measures. These participants then completed an information comparison scale (α = .88) which asked participants to compare the information between the first and second articles on criteria like relevance, trustworthiness, and importance. Finally, we reassessed these treatment participants’ perceptions with the key details and questions items described above.

Results

Preliminary analyses

In line with our preregistration, we conducted a multinomial logistic regression to assess whether participant demographics predicted condition assignment. We found no evidence to that effect, LRχ2(14) = 11.26, p = .665, pseudo R2 = 0.005. Thus, we did not employ any covariates in any of the models we report (i.e., there were no imbalances in condition assignment to correct for). We present the descriptive statistics and correlations for our primary variables of interest in Table 1.

Table 1. Descriptive statistics and correlations.

Mean SD N 1 2 3 4 5 6 7 8 9 10 11 12 13
1) Information adequacy .69 .14 1242 --
2) Decision-making competence .81 .12 714 .47*** --
3) Decision to merge .55 714 .17*** .04 --
4) False consensus .65 .16 1183 .34*** .27*** .16*** --
5) Confidence in recommendation .70 .21 1183 .41*** .33*** .15*** .56*** --
6) Key details .64 .23 714 .64*** .37*** .11** .44*** .54*** --
7) Lingering questions .49 .30 714 .35*** .17*** .07 .19*** .25*** .38*** --
8) Curiosity .70 .27 714 .14*** .17*** -.04 .05 .02 .02 -.02 --
9) Post: Decision to merge .52 466 .31*** .04 -.00 --
10) Post: False consensus .62 .17 460 .05 .49*** .22*** .07 --
11) Post: Confidence in recommendation .66 .21 460 .02 .31*** .59*** .08 .56*** --
12) Post: Information comparison .82 .24 459 -.03 -.12* -.17*** -.02 .02 -.07 --
13) Post: Key Details .68 .22 459 .03 .34*** .45*** .06 .41*** .58*** .01 --
14) Post: Lingering questions .51 .30 459 .02 .14** .15** .04 .21*** .25*** -.04 .30***

Note

* p < .05

**p < .01

***p < .001.

Pre-specified hypotheses

First, we wanted to assess whether participants perceived themselves as having adequate information about the situation even though the treatment groups received only half the information of the control group. Specifically, we tested whether the combined treatment groups 1a and 2a (who were routed directly to survey questions immediately after making their initial recommendations) would perceive the adequacy of the information that they received as comparable to the control group. Our OLS model supported this prespecified hypothesis, bPooled = 0.00, 95% CI [-0.021, 0.022], adjusted R2 = -.001. In addition, following Lakens et al.’s guidance [11], we conducted a two-sample t-test for mean equivalence (α = .01) to assess whether the pooled treatment group and the control group were statistically equivalent. The test indicated that they were: the treatment effect was statistically equivalent to zero. Descriptively, the raw mean responses were closer to the “quite” (relevant, believable, trustworthy, etc.) than the “somewhat” response option for all three groups.
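The equivalence test referenced above can be sketched as two one-sided tests (TOST). The following Python fragment is an illustrative sketch only—the authors’ actual analyses used STATA, and the equivalence bounds (±0.05) and toy data here are our assumptions, not values from the preregistration:

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high, alpha=0.01):
    """Two one-sided tests (TOST) for mean equivalence.

    Equivalence is declared when both one-sided tests reject at alpha,
    i.e., the mean difference is demonstrably inside [low, high]."""
    diff = np.mean(x) - np.mean(y)
    n1, n2 = len(x), len(y)
    df = n1 + n2 - 2
    # Pooled standard error of the mean difference (Student's t, equal variances)
    sp2 = ((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1)) / df
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    p_tost = max(p_lower, p_upper)
    return diff, p_tost, p_tost < alpha

# Toy adequacy scores (0-1 scale) with identical means (~.69), echoing the
# reported descriptives; group sizes loosely mirror the study's design
treatment = np.repeat([0.55, 0.83], 500)  # n = 1000
control = np.repeat([0.55, 0.83], 125)    # n = 250
diff, p_tost, equivalent = tost_two_sample(treatment, control, -0.05, 0.05)
```

With identical group means, both one-sided tests reject and the groups are declared equivalent—the pattern the paper reports for information adequacy.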

Next, we tested whether the pooled treatment groups 1a and 2a would perceive their ability to make a competent decision to be as high as the control group’s perceived ability. Our OLS model predicting participants’ scores on our decision-making competence scale also supported this prespecified hypothesis, bPooled = 0.00, 95% CI [-0.018, 0.019], adjusted R2 = -.001. The same two-sample equivalence test (α = .01) allowed us to reject the hypothesis that the treatment effect fell outside the equivalence bounds; the pooled treatment and control groups were statistically equivalent in their perceived decision-making competence [11]. In addition to the consistency of the means between the groups (MControl = .81; MTreatment Pro-Merge = .81; and MTreatment Pro-Separate = .80), it is worth noting the relatively high values of the means—on average, participants’ responses were modestly higher than the qualitative values corresponding to “quite” objective, careful, fair, etc. in evaluating the information and making a decision.

For our third prespecified hypothesis, we fit an OLS regression model to test for differences in the initial recommended decisions between our experimental conditions. Congruent with our hypothesis, we found that “pro-merge” treatment participants (groups 1a and 1b) were significantly more likely to recommend that the schools merge bPro-merge = 0.33, 95% CI [0.27, 0.40]; and the “pro-separation” participants (treatment groups 2a and 2b) were significantly more likely to recommend that the schools remain separate bPro-separate = -0.32, 95% CI [-0.38, -0.26], as compared to the control group adjusted R2 = .34. See Fig 2.

Fig 2. Mean proportions of participants from each condition recommending merging and 95% CIs.

Then we examined whether each group would presume that a majority of people would make the same recommendation that they had made—i.e., display the false consensus effect. To evaluate this prespecified hypothesis, we conducted three single-sample t-tests (for the control group, the pooled treatment group of 1a and 1b, and the pooled treatment group of 2a and 2b) to see whether each mean was significantly higher than 50%. We found that participants in each condition believed that the majority of other people would reach the same recommendation that they did; MControl = .62, t(241) = 5200, 95% CI [.60, .63]; MPro-merge = .68, t(477) = 6500, 95% CI [.66, .69]; and MPro-separate = .65, t(462) = 6800, 95% CI [.63, .66].
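Each of these false consensus tests is a one-sample t-test against 50%. A minimal Python illustration (the consensus ratings below are simulated to approximate the reported pro-separate mean and SD; the study's own analyses used STATA on the real data at https://osf.io/k37dg/):

```python
import numpy as np
from scipy import stats

# Simulated 0-1 consensus ratings (mean ~.65, sd ~.16), mirroring the
# reported descriptives for one condition
rng = np.random.default_rng(42)
consensus = np.clip(rng.normal(0.65, 0.16, size=463), 0, 1)

# Is the mean rating significantly above 50% (a false consensus effect)?
t_stat, p_value = stats.ttest_1samp(consensus, popmean=0.50,
                                    alternative='greater')
```

A mean reliably above the 50% benchmark indicates that participants believed a majority of others would echo their own recommendation.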

Finally, we predicted that treatment groups 1b and 2b (who, after making their initial recommendations, read a second article providing the other half of the information that the control group received) would endorse their initial recommendation in significantly higher proportions than the control group (55% of whom recommended merging). In other words, we anticipated that the original information these treatment groups received—despite its partial nature—would help participants form opinions that would be hard to reverse, even in the face of learning compelling information to the contrary. Our data did not support this hypothesis. The majority of the participants in these two conditions did adhere to their original recommendation (MPro-merge = .64, t(235) = 1600, 95% CI [.57, .70]; MPro-separate = .68, t(229) = 1600, 95% CI [.62, .74]) after learning the additional information from the second article. However, overall, these two groups endorsed the recommendation to merge the two schools at comparable rates to the control group. See Fig 3.

Fig 3. Treatment groups’ mean endorsements of merging recommendation before (T1) and after (T2) reading additional information with 95% CIs.

Exploratory findings

To further our understanding of the illusion of adequate information and the robustness of our prespecified findings, we included a series of additional measures for exploratory purposes. As one example, each group also reported how confident they were about the recommendation that they provided. If people frequently fail to consider what they do not know about a situation, we anticipated that participants’ confidence about their decisions would be largely unaffected by the amount of information they possessed.

As shown in S1 Fig, despite having half as much information, treatment groups 1a, 1b, 2a, and 2b all reported greater mean levels of confidence in their initial recommendations than the control group. Pooling the means of all the treatment groups (MPooledTreatment = .71, sd = .21) reveals that treatment participants actually reported significantly greater initial confidence than their control counterparts (MControl = .65, sd = .21; t(1181) = 4.22, p < .001, d = -0.30, 95% CI [-.45, -.16]). Participants in treatment groups 1b and 2b re-rated their confidence after exposure to the additional information, allowing for a within-subjects test of their confidence. A paired-sample t-test revealed that these participants became significantly less confident from their initial ratings (MInitialPooled = .71, sd = .22) to their final ratings (MFinalPooled = .67, sd = .21); t(459) = 5.031, p < .001, d = 0.17, 95% CI [.06, .28]. In other words, across two different analyses we found that those with less information manifested greater confidence in their recommendations.
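The within-subjects comparison is a paired t-test on the two confidence ratings. A hedged sketch with simulated paired ratings (the ~.04 mean drop is an assumption echoing the reported .71 to .67 shift; these are not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Initial confidence ratings on a 0-1 scale (mean ~.71, sd ~.22)
initial = np.clip(rng.normal(0.71, 0.22, size=460), 0, 1)
# Final ratings drop by ~.04 on average after reading the second article
final = np.clip(initial - rng.normal(0.04, 0.18, size=460), 0, 1)

# Paired t-test: did confidence decline from initial to final ratings?
t_stat, p_value = stats.ttest_rel(initial, final, alternative='greater')
```

Because the same participants supply both ratings, the paired design removes between-person variability and isolates the change produced by the additional information.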

A second set of exploratory analyses further assessed how participants’ perceptions of whether they had adequate information to make a decision changed after exposure to new information. S2 Fig shows participants’ mean ratings on the 0–100 visual analog scales across all five branches of the study regarding whether they understood enough key details to make a decision (Panel A) and whether they still had questions about the situation (Panel B). The means for each group hovered around two-thirds and half, respectively. These results provide additional, descriptive support for our core hypothesis that individuals assume they have adequate information to make recommendations even when some of them (the control group and treatment groups 1b and 2b) received twice as much information as others (treatment groups 1a and 2a) and despite all groups acknowledging that they still have some lingering questions.

Discussion

Anecdotally, friction between parties holding divergent perceptions seems particularly prevalent in today’s world—clashes over vaccines, abortion rights, who holds legitimate claims to which lands, and climate change surface in the news regularly. Conflicts on social media, between passengers and flight attendants, and among parents and referees at children’s sporting events seem equally common. Correspondingly, the need to ascertain the various causes of poor perspective taking feels particularly acute at present.

Naïve realism offers one clear explanation of how our default assumptions can derail our attempts to understand others’ perspectives [3]. We complement this line of research by proposing that a second major contributor to misunderstanding the perspectives of others is the illusion of adequate information.

Extending previous results (https://osf.io/preprints/psyarxiv/fdu3m), this study provides convergent evidence that people presume that they possess adequate information—even when they lack half the relevant information or are missing an important point of view. Furthermore, they assume a moderately high level of competence to make a fair, careful evaluation of the information in reaching their decisions. In turn, their specific cross-section of information strongly influences their recommendations. Finally, congruent with the second tenet of naïve realism [4] and the false consensus effect [10], our participants assumed that most other people would reach the same recommendation that they did.

Contrary to our expectations, although most of the treatment participants who ultimately read the second article and received the full array of information did stick to their original recommendation, the overall final recommendations from those groups became indistinguishable from the control group. In contrast to the findings of Lord et al. [9], this result supports the idea that sharing or pooling of information may lead to agreement [4]. We suspect that prior attitudes regarding the topic in question may account for the contrasting results here. Lord et al. [9] focused on capital punishment—a well-known issue for which many hold strong prior opinions. We asked participants about a local, fictitious scenario—for which they were unlikely to hold strong prior opinions given the hypothetical nature of the scenario. Thus, changing the minds of our participants with new information was likely much easier than for participants in the Lord et al. [9] study.

We interpret our results as complementing the theory of naïve realism by illuminating an important, additional psychological bias that fuels people’s assumption that they perceive objective reality and potentially undermines their motivation to understand others’ perspectives [12]. Specifically, because people assume they have adequate information, they enter judgment and decision-making processes with less humility and more confidence than they might if they were worrying whether they knew the whole story or not.

How might greater doubt or humility facilitate the understanding and appreciation of others’ perspectives? We suspect that in many real-world situations individuals rarely pause to consider how complete their picture of a situation is before passing judgment. Comedian George Carlin’s observation—that, relative to our driving, slower drivers are idiots and faster drivers are maniacs—is often invoked in discussions of naïve realism [e.g., 13]. Perhaps the mere act of wondering, “do I have enough information?” could infuse humility and a willingness to allow more benefit of the doubt. Was the driver who just zoomed past en route to the hospital with a family emergency? Is the painfully slow driver transporting a fragile family heirloom? These types of questions might dissolve road rage (George Carlin’s or our own) before it begins.

The illusion of adequate information may inform other theories as well. People may commit the fundamental attribution error more to the extent that they assume their knowledge about the situation is adequate [14]. Perhaps part of the reason stereotyping is a common approach to perceiving others [15] is because well-developed stereotypes allow for the illusion that perceivers have sufficient information about a target person. When predicting their own future emotional states, individuals may perform poorly because they assume that they have adequate information about their future circumstances; in reality, researchers have shown that people often fail to account for key details in their affective forecasting efforts [16, 17].

In addition to these theoretical implications, our findings may suggest practical strategies for improving people’s social perspective taking capacities. Teaching individuals to pause and question how much information they know that they know about a situation—and, importantly, how much they might not know—could be a helpful disposition to cultivate. A concrete version of this strategy might entail listing relevant, lingering questions about a situation prior to passing judgment. Interventions in which people strategize how to learn more information about a situation or another party’s beliefs might also be worthy of investigation.

Some readers may worry that our results seem so obvious as to be trivial. Our treatment participants had no way of knowing that they were deprived of a whole slate of arguments; naturally they would assume that they had adequate information. Others may worry that we stacked the deck by presenting the pro-merge participants with almost exclusively pro-merge arguments (and vice-versa for pro-separate participants). This concern, as well as the hypothetical scenario that may have seemed unimportant to our online participants, represent important limitations. At the same time, we suspect these features of our experiment represent exactly how this phenomenon unfolds in many real-world situations. People frequently have no way of knowing the extent to which the information they hold is complete or missing key elements. Relatedly, given polarized political and social media ecosystems, individuals are also regularly exposed to extremely unrepresentative cross-sections of information. Given strong motivations for cognitive efficiency [12, 18], people may not naturally want to expend extra effort considering what may not be known or how representative a sample of information is. Thus, our manipulation may serve as a reasonably prototypic illustration of how this bias unfolds in real world settings.

To be sure, this bias warrants more investigation. Future research investigating the generalizability of this phenomenon across a range of issues—including topics where people have prior knowledge and beliefs—is an important first step. We conceptualized “adequate” information broadly—asking participants to evaluate relevance, quantity, importance, trustworthiness, and credibility. Other studies that define the construct more narrowly—perhaps examining only the quantity of information provided—would provide additional insights into this phenomenon. Assuming similar evidence is found across issues and in real-world settings, testing interventions to mitigate this bias and its downstream effects will be another important contribution to this research agenda.

Although people may not know what they do not know, perhaps there is wisdom in assuming that some relevant information is missing. In a world of prodigious polarization and dubious information, this humility—and corresponding curiosity about what information is lacking—may help us better take the perspective of others before we pass judgment on them.

Supporting information

S1 Fig. Initial confidence in decision by condition and 95% CIs.

Comparison of the four different treatment groups versus the control group in their initial responses to the question, “How confident are you that your recommendation is the smartest action for the school board to take?”.

(TIF)

S2 Fig

Panel A. Perceived Key Details Received by Condition and 95% CIs. Comparison of the four different treatment groups versus the control group in their responses to the question, “Sometimes people are satisfied that they have all the information that they need to make a decision. Other times, they feel that they need more information. After reading the article, to what extent do you feel that you understand enough of the key details of the situation to make a good decision?” Panel B. Perceived Lingering Questions by Condition and 95% CIs. Comparison of the four different treatment groups versus the control group in their responses to the question, “To what extent do you feel as though you still have questions about important details of Prairie View’s situation?”

(ZIP)


Acknowledgments

The authors are grateful for the helpful thoughts and feedback provided by Andrew Ward.

Data Availability

All relevant data for this study are publicly available from the OSF repository (http://doi.org/10.17605/OSF.IO/Q2UFB).

Funding Statement

Dr. Hunter Gehlbach received start-up funds from Johns Hopkins University School of Education. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  1. Gehlbach H, Mu N. How we understand others: A theory of how social perspective taking unfolds. Review of General Psychology. 2023;27(3):282–302. doi: 10.1177/10892680231152595
  2. Ross L. From the fundamental attribution error to the truly fundamental attribution error and beyond: My research journey. Perspectives on Psychological Science. 2018;13(6):750–69. doi: 10.1177/1745691618769855
  3. Robinson RJ, Keltner D, Ward A, Ross L. Actual versus assumed differences in construal: ’Naive realism’ in intergroup perception and conflict. Journal of Personality and Social Psychology. 1995;68(3):404–17. doi: 10.1037/0022-3514.68.3.404
  4. Ross L, Ward A. Naive realism in everyday life: Implications for social conflict and misunderstanding. In: Reed ES, Turiel E, editors. Values and knowledge. The Jean Piaget Symposium Series. Mahwah: Lawrence Erlbaum Associates; 1996. p. 103–35.
  5. Sherman DK, Nelson LD, Ross LD. Naïve realism and affirmative action: Adversaries are more similar than they think. Basic and Applied Social Psychology. 2003;25(4):275–89. doi: 10.1207/S15324834BASP2504_2
  6. Maoz I, Ward A, Katz M, Ross L. Reactive devaluation of an "Israeli" vs. "Palestinian" peace proposal. Journal of Conflict Resolution. 2002;46(4):515–46. doi: 10.1177/0022002702046004003
  7. Liberman V, Minson JA, Bryan CJ, Ross L. Naïve realism and capturing the “wisdom of dyads”. Journal of Experimental Social Psychology. 2011. doi: 10.1016/j.jesp.2011.10.016
  8. Lieberman MD. Seeing minds, matter, and meaning: The CEEing model of pre-reflective subjective construal. Psychological Review. 2022;129(4):830–72. doi: 10.1037/rev0000362
  9. Lord CG, Ross L, Lepper MR. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology. 1979;37(11):2098–109. doi: 10.1037/0022-3514.37.11.2098
  10. Ross L, Greene D, House P. The false consensus effect: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology. 1977;13(3):279–301.
  11. Lakens D, Scheel AM, Isager PM. Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science. 2018;1(2):259–69. doi: 10.1177/2515245918770963
  12. Gehlbach H, Vriesema CC. Meta-bias: A practical theory of motivated thinking. Educational Psychology Review. 2019;31:65–85. doi: 10.1007/s10648-018-9454-6
  13. Gilovich T, Ross L. The wisest one in the room: How you can benefit from social psychology’s most powerful insights. New York: Free Press; 2015. 307 p.
  14. Ross L, Amabile TM, Steinmetz JL. Social roles, social control, and biases in social-perception processes. Journal of Personality and Social Psychology. 1977;35(7):485–94. doi: 10.1037/0022-3514.35.7.485
  15. Ames DR. Inside the mind-reader’s toolkit: Projection and stereotyping in mental state inference. Journal of Personality and Social Psychology. 2004;87(3):340–53. doi: 10.1037/0022-3514.87.3.340
  16. Gilbert DT, Wilson TD. Prospection: Experiencing the future. Science. 2007;317(5843):1351–4. doi: 10.1126/science.1144161
  17. Gilbert DT, Wilson TD. Why the brain talks to itself: Sources of error in emotional prediction. Philosophical Transactions of the Royal Society B. 2009;364:1335–41. doi: 10.1098/rstb.2008.0305
  18. Fiske ST. Social cognition. In: Tesser A, editor. Advanced social psychology. New York: McGraw-Hill; 1995. p. 145–94.

Decision Letter 0

Gal Harpaz

17 Jun 2024

PONE-D-24-13838

The illusion of information adequacy

PLOS ONE

Dear Dr. Gehlbach,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear Author

Greetings

Thank you for considering publishing in our journal. I read your article and found it interesting and valuable. It was sent to a reviewer for evaluation.

Having received a detailed response from the reviewer, I recommend that you incorporate adjustments and responses in the revised version of the article.

I recommend paying special attention to the comments about each part of the article, and attaching a reference to each comment with an explanation of how it was taken into account when submitting the revised version.

Best regards

Gal Harpaz PhD.

==============================

Please submit your revised manuscript by Aug 01 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Gal Harpaz, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. 

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

4. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript reports on a preregistered online survey study using data gained with Prolific. The authors investigated the psychological basis of the epistemic phenomenon of naive realism, in particular how a lack of information affects judgments in light of people's general impression of having enough information at their disposal. The authors varied the amount of information their participants received, with those receiving less information being confident of knowing enough about the issue. The data support the claim that people imagine others to arrive at similar conclusions while they were in fact strongly influenced in their decision by the selection of information they were presented with.

The study offers an insight into decision-making under limited information availability and resulting biases. I have multiple issues with the manuscript:

- The authors claim that their study revolves around a "local, hypothetical scenario—for which they could not have held previous beliefs." I do not share the assumption that participants read the scenario without any preexisting biases. Any merger is associated with several consequences, such as lay-offs, but also cost reductions. Nevertheless, this limitation apparently has not affected judgments substantially, but should still be discussed.

- In the case of equivalence t-tests, the authors should mention the exact name of the test they used, since multiple implementations are available.

- It is unclear from the manuscript why some treatment groups completed different measures than others. This is rather confusingly presented in Figure 1 and the authors do not explain why the names of some measures are italicized.

- Although the PLOS ONE data sharing guidelines specify that data must be shared with reviewers unless a reason is given, the authors only plan on making the data available after acceptance without any justification. This violates the requirements of the journal and would need to be fulfilled.

The Rumsfeld quote is very illustrative.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Oct 9;19(10):e0310216. doi: 10.1371/journal.pone.0310216.r002

Author response to Decision Letter 0


15 Jul 2024

Please see the appended "Response to Reviewers" letter.

Attachment

Submitted filename: InformationAdequacy_ResponseToReviews(7-12-24).docx

pone.0310216.s003.docx (29.5KB, docx)

Decision Letter 1

Gal Harpaz

22 Jul 2024

PONE-D-24-13838R1

The illusion of information adequacy

PLOS ONE

Dear Dr. Gehlbach,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear Author

Thank you for resubmitting the article, we have received the additional reviewer's comments back, and now we have some final issues that require your attention.

Please pay attention to the four comments, we will wait for a revised version and ask that you address each of the issues the reviewer commented on.

==============================

Please submit your revised manuscript by Sep 05 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Gal Harpaz, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Oct 9;19(10):e0310216. doi: 10.1371/journal.pone.0310216.r004

Author response to Decision Letter 1


15 Aug 2024

I have changed the titles of the three documents inquired about to exactly match the request in the email. Please note that the email asks, "Please pay attention to the four comments" but then only lists three bulleted items (requesting changes to the file names that were uploaded previously). If there is a fourth issue we need to address, please let us know.

Thanks,

Hunter

7/25/24

Please note my responses below to the latest email you sent on 7/24/24...

1. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in your cover letter; we will change the online submission form on your behalf.

I have noted in the response to reviewers' letter that I have done the best I can in the submission system, have been as accurate as possible, and need your specific guidance on how to proceed if you want something different.

2. Please note that your Data Availability Statement is currently missing the repository name. If your manuscript is accepted for publication, you will be asked to provide these details on a very short timeline. We therefore suggest that you provide this information now, though we will not hold up the peer review process if you are unable.

I'm not sure what is being asked for here. In our data availability statement in the submission system we state: "Our data and code are posted at: https://osf.io/k37dg/". Our response to reviewers contains this URL too. In the manuscript, you will see this on p. 6. I have added a DOI number and that the name of the hosting site is "Open Science Framework" just in case that was what you wanted?

3. Please change the highlighted text in your manuscript with black text.

I have looked through the submitted manuscript and do not see any highlighted text. I have changed all font to black in case that was what you wanted?

Please note that I am happy to make whatever changes you want but need more specificity. If there are any final requests from your end please list all of them--I know your time is as valuable as mine is.

Thank you,

Hunter

Attachment

Submitted filename: ResponseToReviewers.docx

pone.0310216.s004.docx (29.5KB, docx)

Decision Letter 2

Gal Harpaz

27 Aug 2024

The illusion of information adequacy

PONE-D-24-13838R2

Dear Dr. Gehlbach,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Gal Harpaz, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Dear Authors,

Thank you for resubmitting the article. It is interesting, important, and very relevant for the present time. Regarding editorial matters, you will be contacted if necessary.

Best regards


Acceptance letter

Gal Harpaz

13 Sep 2024

PONE-D-24-13838R2

PLOS ONE

Dear Dr. Gehlbach,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Gal Harpaz

Academic Editor

PLOS ONE
