PLOS ONE. 2022 May 4;17(5):e0267360. doi: 10.1371/journal.pone.0267360

Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research

Charlotte E Dean 1,*, Shazia Akhtar 1, Tim M Gale 1, Karen Irvine 1, Dominique Grohmann 1, Keith R Laws 1
Editor: José C Perales2
PMCID: PMC9067702  PMID: 35507572

Abstract

Background

Research into paranormal beliefs and cognitive functioning has expanded considerably since the last review almost 30 years ago, prompting the need for a comprehensive review. The current systematic review aims to identify the reported associations between paranormal beliefs and cognitive functioning, and to assess study quality.

Method

We searched four databases (Scopus, ScienceDirect, SpringerLink, and OpenGrey) from inception until May 2021. Inclusion criteria comprised papers published in English that contained original data assessing paranormal beliefs and cognitive function in healthy adult samples. Study quality and risk of bias was assessed using the Appraisal tool for Cross-Sectional Studies (AXIS) and results were synthesised through narrative review. The review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was preregistered as part of a larger registration on the Open Science Framework (https://osf.io/uzm5v).

Results

From 475 identified studies, 71 (n = 20,993) met our inclusion criteria. Studies were subsequently divided into the following six categories: perceptual and cognitive biases (k = 19, n = 3,397), reasoning (k = 17, n = 9,661), intelligence, critical thinking, and academic ability (k = 12, n = 2,657), thinking style (k = 13, n = 4,100), executive function and memory (k = 6, n = 810), and other cognitive functions (k = 4, n = 368). Study quality was rated as good-to-strong for 75% of studies and appears to be improving across time. Nonetheless, we identified areas of methodological weakness including: the lack of preregistration, discussion of limitations, a-priori justification of sample size, assessment of nonrespondents, and the failure to adjust for multiple testing. Over 60% of studies have recruited undergraduates and 30% exclusively psychology undergraduates, which raises doubt about external validity. Our narrative synthesis indicates high heterogeneity of study findings. The most consistent associations emerge for paranormal beliefs with increased intuitive thinking and confirmatory bias, and reduced conditional reasoning ability and perception of randomness.

Conclusions

Although study quality is good, areas of methodological weakness exist. In addressing these methodological issues, we propose that authors engage with preregistration of data collection and analysis procedures. At a conceptual level, we argue poorer cognitive performance across seemingly disparate cognitive domains might reflect the influence of an over-arching executive dysfunction.

Introduction

The term “paranormal” typically refers to phenomena, such as psychokinesis, hauntings, and clairvoyance, which contradict the basic limiting principles of current scientific understanding [1]. Surveys consistently indicate paranormal beliefs are prevalent within the general population. For example, a representative survey of British adults conducted by the market-research company BMG Research [2] found that a third of their sample believed in paranormal phenomena, and a further 21% were ‘unsure’. Of those who either believed in the paranormal or were unsure, 40% indicated they had seen or felt the presence of a supernatural entity. Similarly, Pechey and Halligan [3] found 30% of participants held at least one strong paranormal belief, and 79% held at least one paranormal belief at any strength (weak, moderate, or strong belief). Comparable levels of belief have been documented across various cultures over recent decades [4–7].

The most frequently used scales to measure paranormal beliefs include Tobacyk’s Paranormal Belief Scale in both original (PBS) [8] and revised form (RPBS) [9], and the Australian Sheep-Goat Scale (ASGS) [10]. Despite widespread use, some concerns exist about both the content and the factor structures of these measures [11–13]. Nonetheless, both the RPBS and ASGS have demonstrated excellent internal reliability, with Cronbach’s alpha values around .93 for the RPBS [14–16], and around .95 for the ASGS [17, 18].

Scores on paranormal belief measures have been linked to various personal and demographic characteristics. For example, higher belief scores have been noted for individuals high in extraversion and neuroticism [19–21], while lower belief scores have been seen for those with higher levels of education [22–24]. Paranormal belief levels also appear to vary across academic disciplines, with those engaged in hard (or natural) sciences, medicine, and psychology showing significantly lower paranormal belief scores than those in education, theology, or artistic disciplines [25, 26]. Higher levels of paranormal beliefs have been documented in women and younger individuals [27–32], though these sex and age effects are inconsistently reported [33] and have generated substantial debate [34–36].

Paranormal beliefs and cognitive function

The association between cognitive functioning and paranormal beliefs has been researched over several decades. Such functions include memory, attention, language, and executive function (the umbrella term used to describe set-shifting ability, inhibitory control, and working memory updating; for a full description of executive function, see Miyake et al.’s work [37]).

An individual’s belief system is also important for cognitive function. Religious and spiritual beliefs have been associated with slower cognitive decline in older adults [38, 39], but have also shown an inverse relationship with memory performance [40] and intelligence [41, 42]. Similarly, so-called “epistemically unwarranted beliefs” [19], which include belief in conspiracy theories, have been linked with lower educational attainment and reduced analytical thinking [43, 44]. Conspiracist beliefs are likewise associated with increased illusory pattern perception [45, 46], decreased need for cognition and cognitive reflection [47–49], biases against confirmatory and disconfirmatory evidence [50], and hindsight bias (for discussions on this topic see [51–53]).

The last published review to examine the relationships between paranormal beliefs and various aspects of cognition was conducted by Irwin in 1993 [53]. That non-systematic narrative review of 43 studies is now almost 30 years old and may have introduced bias by “…citing null results only when these form a substantial proportion of the available data on a given relationship” (p.6). At the time of his review, Irwin [53] concluded that, owing to the variable findings, support for the cognitive deficits hypothesis remained uncertain.

Research has grown considerably since Irwin’s [53] review and an updated and systematic review is timely. The current review has two key aims: first, to provide the first assessment of study quality [54] in this area and second, to systematically review and summarise key associations between paranormal beliefs and a range of cognitive functions.

Method

This review was conducted within the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [55] (see S2 Appendix for PRISMA checklist). The systematic review was preregistered at the Open Science Framework (OSF; https://osf.io/uzm5v) as part of a larger study (also assessing the relationships between paranormal beliefs and schizotypal personality traits). Data used for the descriptive and inferential analyses presented in the results section are available at the OSF preregistration. One author (CED) conducted the search strategy, article eligibility assessment, and data extraction.

Search strategy

A systematic literature review was chosen for this area owing to its strength as a method to synthesise relevant evidence from large bodies of research [56, 57]. Our searches included both peer-reviewed articles published in scholarly journals and “grey literature” (concerning unpublished works such as doctoral theses).

We searched the electronic databases Scopus, ScienceDirect, SpringerLink, and OpenGrey from inception to May 2021. Our search terms were: (1) “paranormal belief” AND cogni*, (2) “paranormal belief” AND thinking, and (3) “paranormal belief” AND (memory OR “executive function”). For databases that did not permit wildcard Boolean operators (ScienceDirect), the first of the above search terms was amended and entered as: “paranormal belief” AND (cognition OR cognitive), to best replicate the effect of the wildcard. Following exclusion of duplicate articles across databases, titles and abstracts were assessed to identify studies relevant to the review. Full-text assessment of eligible studies was performed to determine final inclusion. Full-text copies were sought for five studies but could not be retrieved. Finally, we hand-searched reference lists for each included article to identify any additional relevant articles. The PRISMA flow diagram presented in Fig 1 illustrates the full screening and selection process. The PRISMA checklist for abstracts is presented in S1 Appendix, and the full PRISMA checklist is presented in S2 Appendix.

Fig 1. PRISMA flow diagram.


Inclusion/Exclusion criteria

Studies were eligible for inclusion if they were: published in the English language, conducted with a healthy adult sample (age 18 or over) and presented original data involving both a measure of paranormal belief and a measure of cognitive function. As cognitive functions have been shown to peak at different ages (for a detailed discussion on this topic, see [58]), we excluded samples that included children and adolescents under the age of 18 as some cognitive functions are still developing in these younger individuals.

Data extraction

We used a detailed data extraction form to collate the following information from included studies: sample sizes and demographic details (including sex, age and education), the measures of self-rated paranormal belief, the aspect of cognition assessed, the tests of cognitive functions used, and findings relating to the relationship between paranormal beliefs and cognitive function. We categorised eligible outcome measures broadly to include both global cognitive function and domain-specific cognitive functions. Any measure of cognitive function was eligible for inclusion (e.g., neuropsychological tests, self-report measures). Results for both paranormal beliefs and cognitive functioning could be reported as an overall test score that provides a composite measure, subscale scores that provide domain-specific measures, or a combination of the two. When multiple cognitive outcomes were investigated, we included all measures. To assess the strength of the relationships between paranormal beliefs and various cognitive functions, we calculated the number of positive, negative, or null findings reported by each study included in the review. Measures of paranormal belief were examined to determine the extent to which established questionnaires have been used.

In line with our preregistered protocol, we synthesised evidence narratively. Meta-analyses could not be undertaken because of the heterogeneity of study designs and outcome measures. We did, however, develop summary tables that include information relating to: sample size, gender composition, mean sample age, cognitive domain, outcome measure, and key findings. Given the range of outcome measures, we attempted to categorise the included studies by common cognitive domains. As the review took an explorative approach, and did not specify domains of interest, categorisation took place after full-text evaluation of included studies.

Results

Electronic and hand searches identified 902 papers, of which 475 were unique. Most articles (k = 391) were excluded from the review following title and abstract screening, leaving 84 eligible for full-text evaluation. We removed 13 studies that included participants under the age of 18 (see S1 Table for details of these studies). Seventy-one papers met our inclusion criteria (see Fig 1), which included 70 published between 1980 and 2020 and one unpublished doctoral thesis [59].

Assessment of study quality and risk of bias

The preregistration for this review specified a bespoke series of questions to assess study quality, but we subsequently opted for a better-established and validated measure, the Appraisal tool for Cross-Sectional Studies (AXIS) [60]. Of the 20 AXIS items, seven assess reporting quality (items 1, 4, 10, 11, 12, 16 and 18), seven relate to study design (items 2, 3, 5, 8, 17, 19 and 20), and six to possible biases (items 6, 7, 9, 13, 14 and 15). Two authors (DG and CED) independently rated each study; the two sets of ratings had almost-perfect agreement (93%), with Kappa = .84.
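The agreement statistics above (raw percentage agreement and Cohen's kappa) can be reproduced from two raters' item-level judgements. A minimal sketch, using hypothetical 'Yes'/'No' ratings rather than the authors' data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgements from two raters on four items
r1 = ["Y", "Y", "N", "N"]
r2 = ["Y", "Y", "N", "Y"]
print(cohens_kappa(r1, r2))  # 0.5: 75% raw agreement, 50% expected by chance
```

Because kappa discounts agreement expected by chance, it is always at or below the raw percentage, as with the 93% versus .84 figures reported here.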

Following previous research [61], we classified AXIS quality scores according to the number of "Yes" responses across the 20 items for each study: poor quality for scores below 50%, fair quality for scores of 50% to 69%, good quality for scores of 70% to 79%, and strong quality for scores of 80% and higher. Three in four studies were rated as either ‘strong’ (26/71: 37%) or ‘good’ (27/71: 39%). By contrast, 17/71 (24%) were rated as ‘fair’ and only 1/71 (1%) was rated as ‘poor’. The mean quality rating across all 71 studies was in the ‘good’ range; however, individual AXIS items are not weighted, so this total score provides a general, but limited, classification that should be interpreted with some caution. The number of papers meeting each AXIS criterion (‘Yes’) is presented in Table 1. The number of papers meeting the criteria for each AXIS domain (reporting quality, study design quality, and potential biases) is presented in Figs 2–4 respectively.
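The quality banding just described is a simple threshold mapping from the percentage of "Yes" responses to a label. As a sketch (the function name is ours, not part of the AXIS tool):

```python
def axis_quality(yes_count, n_items=20):
    """Classify a study's AXIS score by the percentage of 'Yes' responses,
    following the thresholds from [61]: <50% poor, 50-69% fair,
    70-79% good, 80%+ strong."""
    pct = 100 * yes_count / n_items
    if pct < 50:
        return "poor"
    if pct < 70:
        return "fair"
    if pct < 80:
        return "good"
    return "strong"

print(axis_quality(17))  # 85% of items met -> 'strong'
```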

Table 1. Total number of “yes”, “no” and “unsure” responses for each AXIS item.

AXIS Item Yes No Unsure
Introduction
1 Were the aims/objectives of the study clear? 71 0 0
Methods
2 Was the study design appropriate for the stated aim(s)? 71 0 0
3 Was the sample size justified? 5 66 0
4 Was the target/reference population clearly defined? (Is it clear who the research was about?) 68 3 0
5 Was the sample frame taken from an appropriate population base so that it closely represented the target/reference population under investigation? 22 49 0
6 Was the selection process likely to select subjects/participants that were representative of the target/reference population under investigation? 31 29 11
7 Were measures undertaken to address and categorise non-responders? 19 0 52
8 Were the risk factor and outcome variables measured appropriate to the aims of the study? 71 0 0
9 Were the risk factor and outcome variables measured correctly using instruments/measurements that had been trialled, piloted or published previously? 65 6 0
10 Is it clear what was used to determine statistical significance and/or precision estimates? (e.g. p-values, confidence intervals) 68 3 0
11 Were the methods (including statistical methods) sufficiently described to enable them to be repeated? 69 2 0
Results
12 Were the basic data adequately described? 66 5 0
13 Does the response rate raise concerns about non-response bias? 7 12 52
14 If appropriate, was information about non-responders described? 1 18 52
15 Were the results internally consistent? 71 0 0
16 Were the results presented for all the analyses described in the methods? 71 0 0
Discussion
17 Were the authors’ discussions and conclusions justified by the results? 71 0 0
18 Were the limitations of the study discussed? 42 29 0
Other
19 Were there any funding sources or conflicts of interest that may affect the authors’ interpretation of the results? 0 14 57
20 Was ethical approval or consent of participants attained? 37 0 34

Fig 2. AXIS reporting quality summary for the 71 papers included in the review.

Fig 3. AXIS study design quality summary for the 71 papers included in the review.

Fig 4. AXIS possible biases summary for the 71 papers included in the review.

All studies scored positively for items concerning: clear objectives, appropriate study design, appropriate measurement of outcome variables, internal consistency of presented results, and appropriate conclusions justified by the results. Study quality correlated with year of publication (r = .64, p < .001), and appears to be improving with time (see Fig 5). Nonetheless, three main areas for study quality improvement were highlighted throughout the AXIS assessment: sample size justification, nonrespondents, and discussion of limitations.

Fig 5. AXIS study quality (maximum = 20) by year of publication.


Sample size justification, sample representativeness and open science

Only 5 of 71 (7%) papers included a-priori power analyses to justify their sample sizes. Although power analyses are rarely conducted in this research area, the mean sample size is large at 211 (median = 124), suggesting that both simple correlational and between-subject comparisons are well-powered to detect large (.99 and .98), moderate (.94 and .88), and potentially small (.72 and .72) effect sizes (large, moderate, and small effects being 0.7, 0.5, and 0.2 respectively [62]). Despite this, many studies have assessed multiple outcomes and/or multiple metrics derived from the same tests, and so a simple power analysis can mislead. As a rough metric, we calculated the number of p-values presented in the results section of each of the 71 papers. This revealed a mean of 43 p-values per study (median = 30), ranging from 1 [63] to over 200 [64]. So, despite relatively large samples, the possibility of Type I errors remains high, especially when studies fail to adjust alpha levels for such extensive multiple testing. Only 12/71 studies employed some correction; eleven used a Bonferroni correction [15, 25, 64–72], and one used the Newman–Keuls adjustment [73]. Studies that adjusted alpha levels tended to report more p-values than those that did not (means 57 vs. 40). In short, adjustment was made in fewer than one in five studies, most of them published recently.
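The multiple-testing concern can be made concrete: with m tests each run at a nominal alpha, the family-wise chance of at least one false positive grows rapidly, and the Bonferroni correction shrinks the per-test threshold accordingly. A sketch (treating the tests as independent, which is an assumption; p-values within a study are typically correlated):

```python
def familywise_error_rate(alpha, m):
    """Probability of at least one Type I error across m independent
    tests, each conducted at significance level alpha."""
    return 1 - (1 - alpha) ** m

def bonferroni_alpha(alpha, m):
    """Bonferroni-adjusted per-test significance threshold."""
    return alpha / m

# With the mean of 43 p-values per study reported above:
print(round(familywise_error_rate(0.05, 43), 2))  # ~0.89
print(round(bonferroni_alpha(0.05, 43), 5))       # 0.00116
```

Under these assumptions, an uncorrected study reporting 43 tests has roughly a nine-in-ten chance of at least one spurious "significant" result.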

Despite good-to-strong quality ratings, some core features of open science practice, including preregistration, have yet to be embraced in this literature. Admittedly, we are assessing forty years of research and preregistration is a relatively recent innovation in psychology. Nonetheless, the Open Science Framework (OSF) began hosting preregistrations in 2013, so potentially up to half of the 71 studies could have preregistered, yet only 2 (<3%) did so [71, 74], both published in 2020. Preregistration is a fundamental issue in this area of research. First, studies are characterised by large numbers of analyses, often involving multiple outcome measures and/or multiple metrics derived from smaller numbers of tests. We have also seen that up to one-third of studies (25/71) assessed relationships between cognitive function and paranormal test subscale scores (often with few items). This approach, consciously or unconsciously, increases the likelihood of reporting bias and HARKing (hypothesising after results are known), often perhaps with little chance of, or interest in, replicating such findings (see Laws [75] for a discussion). Second, the preregistration of future studies will help to assess whether null results remain unpublished. Third, preregistration would identify both the primary outcome and the sample size required to achieve an acceptable level of statistical power. Ironically, the lack of attention to preregistration and sample size justification contrasts with research on paranormal phenomena, where study registration and a-priori power calculations have been employed for many years [76].

Representativeness

Another issue concerns the sampling frame and its representativeness. Almost two-thirds of all samples are undergraduates (45/71: 63%) and, of those, 21 (30%) consisted wholly, or in the majority, of psychology undergraduates. Only one-third of samples comprised non-undergraduates (15/71: 21%), mixed undergraduate and general population samples (8/71: 11%), or other non-undergraduate samples (2/71: 3%). One non-undergraduate study, by Blackmore in 1997 [77], was a national newspaper-based study (Daily Telegraph) that recruited an exceptionally large sample (n = 6238). If we exclude this outlier, then 60% of all participants in the 70 remaining studies came from completely (k = 41) or majority undergraduate (k = 5) samples, with 16 involving only psychology undergraduates. The non-undergraduate samples included visitors to a paranormal fair [29, 66], members of the Society for Psychical Research [78], Mechanical Turk participants [79], and participants recruited via Crowdflower, a crowdsourcing website [64, 80, 81]. So, even the non-undergraduate samples may not represent the wider population (see Stroebe et al. [82] for a discussion). Studies testing undergraduates and non-undergraduates did not differ in mean sample size (196 vs 215, excluding Blackmore [77]; t(68) = .29, p = .78, d = .08) or in quality ratings (14.73 vs 15.19; t(69) = -.90, p = .37, d = .23). The profile of sampling is nonetheless pertinent because paranormal beliefs are inversely related to educational levels [22–24], and those studying sciences, medicine, and psychology exhibit lower levels of paranormal beliefs [25, 26]. Such samples are unrepresentative and may bias findings because they are likely to combine lower levels of paranormal beliefs with higher cognitive functioning than occurs in the general population.

In addition to samples comprising more highly educated university students, most participants are female (>60%). This aspect of sampling matters for at least two reasons. First, some authors have documented greater levels of paranormal beliefs in women [27–32]. Indeed, the last literature review, by Irwin in 1993 [53], stated that “the endorsement of most, but certainly not all, paranormal beliefs is stronger among women than among men” (p.8). Second, gender (and age) effects are not consistently reported [33] and have generated substantial debate [34–36]. This debate largely stems from differences between psychological test theories (see Dean et al. 2021 [83] for a discussion). Classical test theory, used to develop common paranormal belief measures such as the RPBS, does not test for the presence of differential item functioning (DIF). DIF arises when individuals with the same latent trait level (e.g., paranormal belief), but from different groups, have an unequal probability of giving a particular response. By contrast, modern test theory, including Rasch scaling, can produce unbiased interval measures focused on the hierarchical properties of questionnaire items. This has led to revisions of older paranormal belief measures using modern test theory, creating scales that capture fluctuations in levels of belief rather than differences in item functioning [84, 85]. When the problematic items are removed from scales such as the RPBS and ASGS, paranormal belief scores are no longer associated with sex, although small differences remain for age [84, 85]. While these effect sizes are small (e.g., 0.15 [84], identified by Cohen [62] as a small effect size), they are more likely to reflect a true and meaningful fluctuation in paranormal belief levels than findings reported using scales developed through classical test theory.

Nonrespondents

Most studies (52/71) failed to state whether measures were undertaken to address and categorise nonrespondents. As such, response rates and the risk of nonresponse bias could not be calculated. Nonresponse bias arises when respondents differ from nonrespondents beyond sampling error, and may reduce external validity [86, 87]. Survey-based approaches are at greater risk of nonresponse bias owing to their high nonresponse rates, with self-administered online surveys suffering higher nonresponse rates than face-to-face methods [88]. Most studies have been conducted in face-to-face settings (k = 59); however, the past few years have seen a rise in online data capture (k = 12). Compared to face-to-face studies, online studies were rated more highly on study quality (16.50 vs 14.49: t(69) = -3.87, p < .001, d = 1.32) and had larger mean sample sizes (482 vs 155: t(11.83) = -3.12, p = .008, d = -1.69, equal variances not assumed), but also reported larger numbers of statistical comparisons (96.42 vs 31.58: t(12) = -3.47, p = .005, d = 1.33, equal variances not assumed).
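Several comparisons above are reported with "equal variances not assumed", i.e., Welch's t-test. Its statistic and Welch-Satterthwaite degrees of freedom can be sketched as follows (illustrative samples, not the review's data):

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for two independent
    samples, without assuming equal variances."""
    n1, n2 = len(x), len(y)
    v1, v2 = variance(x), variance(y)   # sample variances (n-1 denominator)
    se2 = v1 / n1 + v2 / n2             # squared standard error of the difference
    t = (mean(x) - mean(y)) / se2 ** 0.5
    # Welch-Satterthwaite approximation: df is usually non-integer
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(t, df)  # t ≈ -1.897, df ≈ 5.88
```

The fractional degrees of freedom (e.g., t(11.83) above) are the signature of this approximation; scipy.stats.ttest_ind(x, y, equal_var=False) implements the same test and additionally returns the p-value.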

Of the 19 papers that did provide nonresponse rates, seven had response rates below 70%, raising concerns about potential nonresponse bias [89]. Only one of the 19 papers [90] presented any information about nonrespondents, reporting that they had marginally lower educational attainment than respondents. Similar findings for nonrespondents have been reported in other research areas [91–94]. Finally, we note that online studies more often keep records of nonrespondents. Guidance on reporting nonresponse in online survey-type studies has been developed (e.g., the Checklist for Reporting Results of Internet E-Surveys; CHERRIES [95]) and should routinely be followed.

Limitations

Surprisingly, over 40% of the included papers (29 of 71) did not discuss study limitations. Such discussion forms a fundamental part of scientific discourse and is crucial for genuine scientific progress, allowing readers to contextualise research findings [96]. The failure to discuss limitations might be viewed partly as a failure of the peer review process [97], but responsibility ultimately resides with authors. Detailing limitations allows other researchers to consider methodological improvements, identifies gaps in the literature, and has an ethical element in aiding research transparency. The inclusion of limitations not only increases research quality but also points to directions for future research and, crucially, replications.

Quality summary

Of the 71 studies published since 1980, three-quarters were rated as ‘good’ or ‘strong’ in quality, and only one received a ‘poor’ rating. Quality ratings also indicate continuous improvement across four decades of research. Despite the high levels of study quality and evidence of improvement, we identified areas of methodological weakness: justifying sample size, providing more detail about nonrespondents, and discussing study limitations.

One issue of note is the sampling, where almost two in three studies have relied on exclusively undergraduate samples (46/71: 65%), with many being psychology undergraduates. Future recruitment needs to move beyond the highly educated and address the bias towards female participants. Despite recruiting large samples, studies use large numbers of analyses, with a mean of 43 p-values reported in results sections, and rarely report appropriate adjustment of significance levels (12/71: 17%). These methodological issues are compounded by the fact that so few studies pre-register their primary hypotheses and analyses in advance (2/71: 3%).

Cognitive functioning

The 71 studies were grouped into six sections: (1) perceptual and cognitive biases, (2) reasoning, (3) intelligence, critical thinking, and academic performance, (4) thinking style, (5) executive function, and (6) other cognitive functions. Whenever possible, categories were classified according to the focus identified by the authors in each study. Such classifications are necessarily a simplification and not intended to provide a definitive organisation. Moreover, many studies could receive multiple classifications owing to the breadth of testing conducted (see S9 Table). In this context, S9 Table shows that two in three (48/71) studies might be classified as assessing executive function.

Articles presented in the first section (perceptual and cognitive biases) included scenarios aimed at measuring cognitive biases towards confirmatory evidence, and the impact of visually degraded stimuli on biases in perceptual decision-making. Examples of tasks used in the second section (reasoning) include the mental dice task [63] aimed at measuring probabilistic reasoning, and the Reasoning Tasks Questionnaire (RTQ) [98] to assess both probabilistic and conditional reasoning. Studies in the third category (intelligence, critical thinking, and academic performance) included published measures such as the Watson-Glaser Critical Thinking Appraisal (WGCTA) [99] and variations of Raven’s matrices (e.g., the Advanced Progressive Matrices Test [100]; Raven’s Progressive Matrices [101]), and measures of academic achievement such as grade point average. In the fourth section (thinking style), papers used measures such as the Rational Experiential Inventory (REI) [102] and the Cognitive Reflection Test [103], aimed at assessing intuitive and analytical thinking. Studies in the fifth section (executive function and memory) included tasks such as the Deese-Roediger-McDermott task (DRM) [104] and the Wisconsin Card Sorting Test [105, 106]. The final cognitive section (other cognitive functions) included tasks to measure indirect semantic priming (using prime-target word pairs) and implicit sequence learning.

Perceptual and cognitive biases

Nineteen articles (n = 3,397) assessed perceptual and cognitive biases. Perceptual decision-making with high visual noise stimuli has produced inconsistent findings (k = 7). For example, in 2014 Simmonds-Moore [67] found believers made more misidentifications of degraded black and white images of objects and animals (e.g., shark, umbrella), despite having faster response latencies than sceptics (suggesting a potential speed-accuracy trade-off, with believers favouring speed over accuracy). By contrast, Van Elk [66] found sceptics mis-categorised degraded black and white images of face stimuli as houses more frequently than believers. The findings from both studies, however, contradict those from Blackmore and Moore’s 1994 study [107], which reported no difference between believers and sceptics in the accurate identification of degraded monochrome images.

Two studies assessed perceptual decision-making relating to faces within degraded and artifact stimuli. Using black and grey images of faces and “nonfaces” (scrambled eyes-nose-mouth configurations), Krummenacher and colleagues [73] found believers made significantly more Type I errors than sceptics, favouring “false alarms” over “misses” (i.e., believers had a lower response criterion when classifying images as faces, with a bias towards “yes” responses). Similarly, Riekki et al. [108] presented participants with 98 artifact face pictures (containing a face-like area where eyes and a mouth could be perceived, e.g., a tree trunk) and 87 theme-matched non-face pictures (e.g., a tree trunk with no face-like areas). Believers rated the non-face pictures as more face-like and assigned more extreme positive and negative emotions to non-faces than sceptics.

A study conducted by Caputo [109] employed the strange-face illusion paradigm, in which pairs of participants are instructed to gaze into each other’s eyes for 10 minutes in a dimly-lit room. This paradigm induces the experience of seeing face-related illusions and is assessed on a self-report measure (Strange Face Questionnaire; SFQ [110]). No association was found for paranormal beliefs and the experience of strange-face illusions. A final study of perceptual decision-making conducted by Van Elk [111] used point-light-walker displays (an animated-point-set of 12 points, representing a human walking on a treadmill), randomly scrambling the location of each individual dot across the display; and participants had to detect if a human agent was present. Paranormal believers were more prone to illusory agency detection than sceptics, being biased towards ‘yes’ responses when no agent was present.

Cognitive biases have been assessed in 11 papers. These include reports of significant associations between paranormal belief and the illusion of control or differences in causation judgements [65, 112–114] and risk perception [115]. Two studies, however, report no significant relationships [29, 116]. Further work shows that paranormal beliefs correlated positively with biases towards anthropomorphism, dualism, teleology, and mentalising, although beliefs were not predicted by mentalising in regression analyses [15].

Proneness to jump to conclusions was assessed by Irwin and colleagues [68] using a computerised task [117]. Participants were informed of the proportions of beads in two jars (e.g., 70 black and 30 red beads in jar one, but 30 black and 70 red beads in jar two), then shown a sequence of beads drawn one at a time from one of the jars and asked to identify whether the beads were drawn from jar one or jar two, and to indicate when they were certain. Those who require fewer draws before being certain of their decision are identified as being prone to “jump to conclusions”. A significant negative correlation emerged for jumping to conclusions, but only with the Traditional Religious Beliefs (TRB) subscale of the Rasch-devised RPBS [85]. A significant positive correlation was also found between TRB scores and self-report indices of jumping to conclusions as measured with the Cognitive Biases Questionnaire [118, 119] (e.g., “imagine you hear that a friend is having a party and you have not been invited”, 1 = little or no inclination to jump to a premature conclusion, 2 = inclination to make a cautious inference, 3 = inclination to jump to a dramatic inference).
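The statistical logic of the beads task can be illustrated with a simple Bayesian update over the 70:30 jar proportions described above (a minimal sketch; the function name and bead coding are ours):

```python
def jar_posterior(draws, prior=0.5, p_black_jar1=0.7):
    """Posterior probability that beads come from jar one (70 black : 30 red),
    updated after each draw ('B' = black, 'R' = red)."""
    p_black_jar2 = 1 - p_black_jar1  # jar two mirrors jar one (30 black : 70 red)
    posterior = prior
    history = []
    for bead in draws:
        like1 = p_black_jar1 if bead == 'B' else 1 - p_black_jar1
        like2 = p_black_jar2 if bead == 'B' else 1 - p_black_jar2
        numerator = like1 * posterior
        posterior = numerator / (numerator + like2 * (1 - posterior))
        history.append(round(posterior, 3))
    return history

# After one black bead the posterior for jar one is only 0.7; declaring
# certainty at this point is what the task scores as a "jump to conclusions".
print(jar_posterior('B'))    # [0.7]
print(jar_posterior('BBR'))  # [0.7, 0.845, 0.7] -- a red bead undoes one black
```

The sketch shows why requiring few draws is treated as a reasoning bias: the evidence after one or two beads is still far from decisive.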

Prike et al. [64] assessed proneness to jumping to conclusions using both a neutral (beads task) and an emotional draws-to-decision task (where participants decide whether positive or negative words are more likely a description of “Person A” or “Person B”–for a full description see Dudley et al.’s work [120]). Participants also saw a series of 24 scenarios to assess bias towards confirmatory and disconfirmatory evidence, as well as liberal acceptance. Each scenario consisted of three statements presented one at a time, e.g., (a) “Eric often carries binoculars with him”, (b) “Eric always has an unpredictable schedule”, (c) “Eric tries to solve mysteries”. Participants rated the likelihood of the same four response options after each statement, e.g., (a) “Eric is a private detective”, (b) “Eric is a bird expert”, (c) “Eric is a stalker”, (d) “Eric is an astronaut”. Each scenario presented an absurd interpretation (implausible for all three statements), a neutral lure, an emotional lure, and a true interpretation (less or equally as plausible as the lure options after the first statement but became the most plausible by the third statement). Paranormal beliefs were related to both disconfirmatory and confirmatory biases, but not to jumping-to-conclusions. Liberal acceptance predicted belief in the paranormal, but not after controlling for delusion proneness (as measured by the Peters et al. Delusions Inventory; PDI [121]). Lesaffre et al. [122] exposed participants to a magic performance and asked whether it was accomplished through: (1) paranormal, psychic, or supernatural powers, (2) ordinary magic trickery, or (3) religious miracles. Confirmation bias (i.e., explaining the magic performance in terms of paranormal powers) was associated with higher levels of paranormal beliefs.
Barberia and colleagues [123] demonstrated that educating participants about confirmatory bias reduced scores on the Precognition subscale of the RPBS (but did not reduce global belief scores).

Summary

The studies assessing perceptual and cognitive biases are somewhat inconsistent regarding perceptual decision-making errors in response to degraded or ambiguous stimuli. Of the studies exploring perceptual decision-making, four suggest an inverse relationship between paranormal belief and perceptual decision-making accuracy, two found no relationship, and one reported more perceptual decision-making errors from sceptics. Results show greater consistency when perceptual decision-making tasks involve identifying a human face/agent (rather than inanimate objects or animals), with believers making significantly more false-positive misidentifications than sceptics. In the 11 studies exploring cognitive biases, paranormal believers show a consistent bias towards both confirmatory and disconfirmatory evidence. The evidence that paranormal belief links to the tendency to “jump to conclusions” is weaker, but only two studies present findings related to this outcome.

Reasoning

Seventeen papers have focussed on reasoning ability (n = 9,661), with the majority (12/17) reporting significant inverse relationships between paranormal beliefs and probabilistic reasoning. Perception of randomness and the conjunction fallacy have also been associated with paranormal beliefs on tasks with both neutral and paranormal content [69, 80, 124–128].

In 2007, Dagnall et al. [126] presented 17 reasoning problems across four categories: perception of randomness, base rate, conjunction fallacy, and probability. Perception of randomness problems required participants to determine the likelihood of obtaining particular strings (e.g., “Imagine a coin was tossed six times. Which pattern of results do you think is most likely? (a) HHHHHH, (b) HHHTTT, (c) HTHHTT, (d) all equally likely”). Performance on these problems significantly predicted paranormal belief, with believers making more errors than sceptics. No significant differences or predictive effects emerged for the three other problem categories. In a later study, Dagnall and colleagues [127] presented 20 reasoning problems across five categories: perception of randomness, base rate, conjunction fallacy, paranormal conjunction fallacy, and probability. The authors again reported perception of randomness to be the sole predictor of paranormal beliefs, with high belief associated with fewer correct responses. While these papers report no effects in relation to the conjunction fallacy, Rogers et al. [128] demonstrated a significant main effect of paranormal belief on conjunction errors, with believers making more errors than sceptics. In later studies, both Prike et al. [80] and Rogers et al. [129] reported an association between paranormal belief and the conjunction fallacy, but this association was only significant for scenarios with confirmatory outcomes in the latter study.
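The coin-toss problem turns on the fact that every specific sequence of six fair tosses is equally likely, so option (d) is correct. A quick enumeration (a minimal sketch; the variable names are ours) confirms this:

```python
from fractions import Fraction
from itertools import product

# All 2**6 = 64 outcomes of six fair coin tosses, each equally likely
outcomes = set(product('HT', repeat=6))
p_each = Fraction(1, len(outcomes))

# Each offered sequence, however "patterned" it looks, is one of the 64
# equally likely outcomes and so has probability exactly 1/64
for seq in ('HHHHHH', 'HHHTTT', 'HTHHTT'):
    assert tuple(seq) in outcomes

print(len(outcomes), p_each)  # 64 1/64
```

The intuition the task probes is exactly the representativeness error: “HTHHTT” looks more random than “HHHHHH”, but both are single outcomes with identical probability.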

Probabilistic reasoning ability has been consistently associated with paranormal beliefs across five studies. In one paper [130], participants received a probabilistic reasoning test battery comprising six tasks. For example, one task was a variant of the birthday paradox (from Blackmore and Troscianko [97]), in which participants were asked: “How many people would you need to have at a party to have a 50:50 chance that two of them will have the same birthday (regardless of year of birth)?” Possible answers for this task were 22 (correct), 43, or 98. Significant positive correlations emerged between paranormal beliefs and errors on three of the six tasks (dice sequences, dice throws, and sample size estimates). In the second study [63], participants received written descriptions of two hypothetical events, throwing 10 dice once to get 10 sixes and throwing one die 10 times to get 10 successive sixes, and had to identify whether one event was more probable or both equally probable. The authors reported that 64% of believers and 80% of sceptics correctly identified that both events were equally probable. Brugger et al. [131] assessed differences in repetition avoidance between believers and sceptics on a mental dice task (where participants imagined throwing a die and had to write down the number they imagined being on top of the die), finding significantly fewer repetitions in believers than sceptics. Similarly, Bressan et al. [132] used a probabilistic reasoning questionnaire with problems concerning the comprehension of sampling issues, sensitivity to sample size, representativeness bias (as applied to sample size or random sequences), and the generation of random sequences.
Believers made more probabilistic errors on two of the four generation-of-random-sequences problems: (1) a simulated coin-toss problem, in which participants were asked to fill in 66 empty cells by writing ‘H’ (heads) or ‘T’ (tails) randomly, producing a sequence indistinguishable from that of an actually tossed coin, and (2) an adapted version of Brugger et al.’s [131] mental dice task. Finally, Blackmore [77] asked participants whether a list of 10 statements (as might be produced by a psychic, e.g., “there is someone called Jack in my family”) were true for them, and to estimate the number of these statements that might be true for a stranger in the street. The number of ‘true’ statements was greater for believers than sceptics (significantly so on five of the ten questions); however, no significant differences emerged when estimating the number of statements true for a stranger.
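The equiprobability of the two dice events in [63] is straightforward to verify: each amounts to ten independent chances of 1/6. A minimal check (the variable names are ours):

```python
from fractions import Fraction

# Event A: throw 10 dice at once and obtain 10 sixes -- ten independent dice
p_ten_dice_once = Fraction(1, 6) ** 10

# Event B: throw one die 10 times and obtain 10 successive sixes
# -- ten independent throws of the same die
p_one_die_ten_times = Fraction(1, 6) ** 10

# Independence makes the two events identical in probability: 1 / 6**10
assert p_ten_dice_once == p_one_die_ten_times
print(p_ten_dice_once)  # 1/60466176
```

Whether the ten 1/6 chances come from ten dice or ten throws of one die makes no difference to the joint probability, which is the insight the task tests.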

The final four papers in this section found non-significant correlations between paranormal belief and probabilistic reasoning, but significant correlations with conditional reasoning tasks. Using the Reasoning Tasks Questionnaire (RTQ) [97], one study [4] found neither probabilistic reasoning nor neutral conditional reasoning was associated with paranormal beliefs. However, conditional reasoning was associated with paranormal beliefs when conditional reasoning tasks contained paranormal content rather than neutral content, with believers making fewer errors on these tasks. The second paper [133] measured reasoning using a test that combined probabilistic reasoning questions (seven in total, four of which were derived from the RTQ), conditional reasoning questions with abstract content (e.g., “if C is true, then D will be observed. D is observed. Therefore, C is true: True or False?”), and conditional reasoning questions with paranormal content (e.g., “if people are aware of hidden objects, then clairvoyance exists. People are aware of hidden objects. Therefore, clairvoyance does exist: True or False?”). Overall, paranormal beliefs correlated negatively with reasoning ability and conditional reasoning ability, but not with probabilistic reasoning ability. When comparing the two types of conditional reasoning questions, the authors reported no difference between the correlations for paranormal beliefs and either the abstract or paranormal conditions. Following a similar format, Wierzbicki [134] assessed reasoning ability using 16 conditional reasoning statements with either parapsychological or abstract content, finding that paranormal belief scores and number of reasoning errors correlated positively. The final paper in this section [78] employed 32 conditional reasoning statements and found participants with strong paranormal beliefs made more reasoning errors than those with weak paranormal beliefs.
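The abstract conditional above (“if C is true, then D will be observed; D is observed; therefore, C is true”) is the classic fallacy of affirming the consequent, and its invalidity can be shown by exhaustive truth-table checking (a small sketch; the helper names are ours):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# The argument is valid only if the conclusion (C) is true in every
# truth assignment where both premises ('if C then D' and 'D') are true.
counterexamples = [
    (c, d)
    for c, d in product([True, False], repeat=2)
    if implies(c, d) and d and not c
]

# C = False, D = True satisfies both premises but not the conclusion,
# so the correct answer to the test item is 'False'.
print(counterexamples)  # [(False, True)]
```

The same check applies to the paranormal-content version: observing that “people are aware of hidden objects” does not license the conclusion that clairvoyance exists, since the antecedent could be false while the consequent holds for other reasons.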

Summary

In general, evidence suggests paranormal beliefs are associated with poorer reasoning; however, this line of research is characterised by inconsistent findings. Two studies report that the perception of randomness is a significant predictor of paranormal belief and provide some evidence of replicability [126, 127]. Despite this, evidence regarding the association between paranormal belief and the conjunction fallacy is conflicting, with two studies [126, 127] reporting no effect, and three [80, 128, 129] reporting significant associations. This may be due, in part, to the different statistical techniques used within each study, as those reporting no effect [126, 127] used multiple regression analyses with all probabilistic tasks entered as predictor variables, while studies reporting significant associations [80, 128, 129] only included conjunction fallacy tasks in their predictive models. Similar inconsistency emerges for probabilistic reasoning, with nearly equal numbers of studies reporting significant and nonsignificant associations with paranormal beliefs.

Intelligence, critical thinking, and academic performance

Twelve studies explored intelligence, critical thinking, and academic performance (n = 2,657). Seven papers focused on critical thinking ability, with two finding significant reductions in paranormal belief following a course in critical thinking [70, 135]. Alcock and Otis’ 1980 study [136] employed the Watson-Glaser Critical Thinking Appraisal (WGCTA) [137], finding significantly higher levels of critical thinking ability in sceptics than believers. In 1998, Morgan and Morgan [138] conducted a similar study, measuring critical thinking using a revised version of the WGCTA [98] and finding significant negative correlations between critical thinking ability and three subscales of the PBS (Superstition, Traditional Religious Belief, and Spiritualism). No significant correlation between paranormal belief and critical thinking emerged in the remaining three papers [139–141]. One did, however, report significant negative correlations between reasoning ability (measured using the Wiener Matrizen-Test [142]) and three subscales of the PBS: Traditional Paranormal Beliefs, Traditional Religiosity, and Superstition [139].

The links between paranormal beliefs and academic achievement or general intelligence are both mixed and weak. Two papers report significant negative correlations: one between overall paranormal belief scores and mean academic grade [25], and one between grade point average and the Witchcraft and Superstition subscales of the PBS [143]. Turning to intelligence, Betsch et al. [71] found a significant inverse relationship between IQ and paranormal beliefs, but only when controlling for sex, supporting similar findings from Smith et al.’s 1998 study [144], which reported a significant negative correlation between paranormal beliefs and intelligence (using the Advanced Progressive Matrices Test, Set 1 [100]). Nevertheless, two studies found no association between paranormal beliefs and intelligence. Royalty [141] used the information subtest of the Wechsler Adult Intelligence Scale [145] as an estimate of full-scale IQ, and the vocabulary subtest of the Multidimensional Aptitude Battery [146] as a measure of verbal intelligence. Stuart-Hamilton et al. [147] found no relationship with fluid intelligence using Raven’s Progressive Matrices [101]; however, this sample was older (mean age of 71).

Summary

Conflicting findings emerge from studies of intelligence, critical thinking, and academic performance, with an almost equal number of significant and non-significant associations with paranormal beliefs. Some of this heterogeneity, however, appears to reflect whether studies used crystallised or fluid intelligence tasks and the age of the sample (e.g., Stuart-Hamilton et al. [147] failed to find a relationship between fluid IQ and paranormal beliefs in an older sample, but Smith et al. [144] found a significant negative association in a younger sample). The precise relationship of paranormal belief with intelligence requires further investigation, both by considering the age of the sample and by assessing relationships with fluid and crystallised intelligence separately.

Thinking style

Thirteen studies (n = 4,100) examined aspects of thinking style. One consistent finding is a significant association between paranormal belief and an intuitive thinking style, which is characterised as being quick and guided by emotion [148–152]. A further study [153] also reports a significant partial correlation after controlling for sample type (online versus face-to-face recruitment), owing to significantly higher levels of paranormal beliefs and intuitive thinking, and significantly lower rational/analytical thinking, in the online sample versus the face-to-face sample.

Contradictory findings, however, have emerged concerning paranormal beliefs and an analytical thinking style, which is thought to be more effortful and driven by logic. A positive relationship emerged in two studies [149, 150], while two others [72, 152] found no relationship between paranormal beliefs and analytical thinking as assessed by the Rational Experiential Inventory (REI [102]). Four further studies report significant negative relationships between paranormal beliefs and analytical thinking using various measures: two [81, 154] used different versions of the Cognitive Reflection Test [103]; one [90] used the Rational Experiential Multimodal Inventory [155]; and one [153] used both the Argument Evaluation Test [156] and the Actively Open-Minded Thinking scale [156, 157]. A further study reported a significant negative relationship between paranormal beliefs and analytical thinking but could not replicate the finding [74].

The final two papers in this section document relationships between paranormal belief and other cognitive styles. Gianotti et al. [158] presented participants with 80 word-pairs (40 semantically indirectly related, 40 semantically unrelated) and asked them to produce a third noun semantically related to both words. Believers showed increased verbal creativity, making significantly more rare associations than sceptics for unrelated word-pairs, but not for indirectly related word-pairs. Hergovich [159] used the Gestaltwahrnehmungstest [160] to assess degree of field dependence, presenting participants with figures in which they needed to find an embedded figure in the form of a house, and reported a significant positive relationship between paranormal beliefs and field dependence.

Summary

Eight papers report positive associations between an intuitive thinking style and paranormal belief (although it should be noted that one study reported only a partial correlation after controlling for sample type). By contrast, evidence concerning an analytical thinking style is inconsistent, with reports of a negative relationship with belief (k = 4), a positive relationship (k = 2), and no relationship (k = 2). An additional study did report a negative relationship between analytical thinking and paranormal belief, but this was not replicated in a follow-up study. The final two studies in this section suggest positive relationships between paranormal belief and both verbal creativity and field dependence.

Executive function and memory

Six studies (n = 810) assessed memory or executive function. Turning first to memory, the findings are inconsistent. One study [161] showed paranormal belief predicted false memory responses on a questionnaire-based measure, and two others [59, 78] reported associations between belief and behavioural measures of false memories but failed to replicate this in additional samples. Dudley’s 1999 study [162] had participants complete the Paranormal Belief Scale either while rehearsing a five-digit number or with no concurrent task, and found significantly higher paranormal belief scores in the group whose working memory was restricted (by the rehearsal task). However, a recent study by Gray and Gallo [79] failed to find any differences in working memory, episodic memory, or autobiographical memory for believers and sceptics.

Further inconsistencies can be seen when exploring relationships between paranormal belief and inhibitory control, with Lindeman et al. [163] noting more errors from believers than sceptics on the Wisconsin Card Sorting Test [105, 106], but not on the Stroop task [164]. Wain and Spinella [165] explored executive function using a self-report measure and found a negative correlation between paranormal belief and executive functioning, with negative correlations between belief and both inhibition and organisation.

Summary

The studies in this section report inconsistent links between paranormal belief and memory. While three of four memory studies report links between paranormal beliefs and an increased tendency to create false memories, two of these studies failed to replicate the finding. Two studies assessing executive functioning both suggest poorer performance is associated with belief but may interact with the measure of executive functioning.

Other cognitive functions

Finally, four papers (n = 368) explored other aspects of cognitive function not covered by the categories already described. Pizzagalli et al. [166] tested the association between indirect semantic priming and paranormal beliefs using 240 prime-target word pairs, with target words either directly related, indirectly related, or unrelated to the prime word. Compared to sceptics, believers had shorter reaction times when indirectly related target words were presented in the left visual field, suggesting a faster appreciation of distant semantic associations, which the authors view as evidence of disordered thought. The final three papers did not find any significant relationships between paranormal beliefs and implicit sequence learning [167], cognitive complexity [88], or central monitoring efficiency [168].

General discussion

This systematic review provides the first evidence synthesis of the associations between paranormal beliefs and cognitive function since the early ‘90s [53] and the first assessment of study quality. The review identified 71 studies involving 20,993 participants. While most studies achieve good-to-strong quality ratings, specific areas of methodological weakness warrant further attention. First, studies often employ large numbers of measures, metrics, and analyses, with no clearly identified primary outcome or adjustment of probability levels; these factors necessarily constrain any firm conclusions because of the high probability of Type I errors. Second, information about nonrespondents was either unreported or reported with insufficient detail to permit an assessment of potential nonresponse bias. Finally, up to a third of studies failed to discuss study limitations.

The cognitive deficits hypothesis is apparent in most papers (55/71), and a simple vote count shows that two-in-three studies (46/71) document that paranormal beliefs are associated with poorer cognitive performance. The most consistent findings across the six cognitive domains emerged between paranormal belief and an intuitive thinking style, with all eight studies confirming a positive association. Consistent findings also emerged for a bias towards confirmatory and disconfirmatory outcomes, as well as for poorer conditional reasoning ability and perception of randomness, though fewer studies were conducted in these areas. The two studies assessing executive functioning identified a negative association with paranormal belief but showed some inconsistency depending upon the type of executive test used. Associations with all other aspects of cognitive functioning (perceptual decision-making, jumping to conclusions and repetition avoidance, the conjunction fallacy, probabilistic reasoning, critical thinking ability, intelligence, analytical thinking style, and memory) have proven inconsistent, with nearly equal numbers of significant and null findings.

Various measurement issues, however, need to be considered. One concerns the large number of paranormal belief measures employed and their varied psychometric properties. The studies reviewed employed 26 different tests of paranormal belief: the most common were the RPBS and its Rasch-scaled variant, while 13 studies used bespoke tests created by the authors. Such variability most likely contributes to heterogeneity across studies and potentially undermines the reliability of reported associations between cognitive functions and paranormal beliefs. For a full summary of the scales used in each study, see S8 Table.

Not only does the range of cognitive measures used within each cognitive domain contribute to heterogeneity across studies, but so does the reliability of such measures. As Hedge et al. [169] note, studies of individual differences in cognition and brain function often employ cognitive tasks that are well-established in experimental research. Such tasks may not be directly adaptable to correlational research, however, for the very reason that they elicit robust experimental effects: they are specifically designed and selected for low between-participant variability. Most studies presented here are correlational and use a combination of established experimental tasks (e.g., the WCST, Raven’s Matrices, the Cognitive Reflection Test, the Embedded Figures Test) and questionnaire-based methods to assess cognition. This may undermine the reliability of reported associations between cognitive functions and paranormal beliefs if studies use experimentally derived cognitive tasks that are sub-optimal for correlational designs. Hedge et al. [169] offer several suggestions to overcome this, such as the use of alternative statistical techniques (e.g., structural equation modelling), factoring reliability into a-priori power calculations to reduce the risk of bias towards a null effect, or using within-subjects designs when the primary goal of the study is to examine associations between measures rather than focusing on individual differences per se. The largely correlational approach of the studies reviewed here also suffers from the standard limitations of questionnaire studies and correlational designs. Although regression approaches can be powerful, they cannot establish causality without the use of longitudinal methods. This correlational approach also means that moderators and mediators of the relationship between paranormal beliefs and cognition remain underspecified.

Future directions–the fluid-executive model

The general trend of the current review accords with the cognitive deficits hypothesis approach described by Irwin almost 30 years ago [53], at least insofar as around 60% of published studies document paranormal beliefs to be associated with poorer cognitive performance. Nonetheless, the cognitive deficits hypothesis does not provide an entirely satisfying account of why paranormal believers and sceptics perform differently on such a wide variety of cognitive tasks. This has some key implications: first, people who believe in the paranormal seemingly have a disparate array of cognitive deficits; are these assumed to have occurred independently of each other, or do believers somehow accumulate various cognitive deficits? Another implication is that such an array of cognitive deficits is largely atheoretical, with various researchers pursuing seemingly independent lines of research linking cognitive function to paranormal beliefs with little attention to integration. Hence a somewhat underspecified model pervades the literature, with often limited justification for the specific role played by cognitive function in paranormal beliefs, or how and why such an array of deficits is identifiable in paranormal believers. Given the almost complete lack of preregistration, accompanied by the large numbers of statistical analyses often conducted without correction, we also cannot exclude concerns about potential publication bias, false positives, and selection bias. Empirical studies presenting significant or favourable findings are, of course, more likely to be published [170]; and crucially, psychologists tend to rate studies as having better quality when they conform to prior expectations. Hergovich et al. [171] demonstrated this bias by presenting psychologists (none of whom believed in astrology) with descriptions of parapsychological studies, finding that they gave higher quality ratings to studies disproving astrological hypotheses.
Participants were less likely to complete the study if they received an abstract confirming astrological hypotheses, with an attrition rate of 38.90%. These issues underscore the importance of pre-registered replications of key findings (see Laws [172] for a discussion). To our knowledge, potential publication bias has not been extensively assessed: a previous meta-analysis of psychokinesis studies indicated the presence of publication bias [173], but this claim has been challenged [174]. Finally, questions also arise about whether poorer performance by believers on any cognitive ability tests even merits the descriptor ‘deficit’; the relationship has recently been reframed more neutrally as the cognitive differences hypothesis [79]. The term ‘deficit’ typically implies a permanent lack or loss of cognitive function; however, little to no research has examined the consistency of cognitive performance in paranormal believers across time, or established whether poorer cognitive performance is more trait than state dependent. While paranormal beliefs appear to be largely trait-like, they may have a state component [175].

While current studies do not necessarily endorse Irwin’s 1993 [53] comment that “…the believer in the paranormal is held variously to be illogical, irrational, credulous, uncritical, and foolish” (p.16), they converge on an underlying non-specific cognitive deficit or collection of deficits. Typically, when an array of cognitive deficits/differences is documented, researchers would want to know whether specific areas of cognitive weakness emerge. Currently, no cognitive area suggests a specific deficit profile in paranormal believers. Although not directly tested, paranormal believers might display heterogeneous cognitive profiles that link to different paranormal belief components. Nonetheless, it is hard to see why or how specific types of paranormal belief content would link to different cognitive deficits.

One possibility is that the failure of any specific area of cognitive dysfunction to emerge (amongst perceptual and cognitive biases, reasoning, intelligence, critical thinking and academic performance, thinking style, and executive functioning) may point to a common shared underlying cognitive component. One feasible interpretation is that many of the tasks across the domains reviewed here do in fact share a common cognitive ability: higher-order executive functions (planning, reasoning and problem-solving, impulse control, initiation, abstract reasoning, and mental flexibility), which in turn may be related to aspects of fluid intelligence [176].

Human functional brain imaging identifies strikingly similar patterns of prefrontal cortex activity in response to cognitive challenges across various seemingly different domains, including increased perceptual difficulty (high vs low noise degradation), response conflict, working memory, episodic and semantic memory, problem solving, and task novelty [177–179]. This demand-general activity underlies our ability to engage in flexible thought and problem-solving [177] and is closely linked to fluid intelligence [180]. We propose that the broad cognitive-deficit profile linked to paranormal beliefs may overlap with functions of the multiple-demand (MD) system. Part of the function of the MD system concerns its role in the separation and assembly of task components, which may account for its link with fluid intelligence. In this context, we suggest that each of the cognitive domains linked to paranormal beliefs may be subserved by this MD system housed in the fronto-parietal cortex. The section on executive function is self-evidently linked with the frontal system. The section on intelligence similarly highlights links between paranormal beliefs and fluid IQ measures such as Raven’s Matrices [100, 101]. Studies further show the same MD system is recruited when confronted with perceptually difficult tasks (such as those outlined in the section on perceptual and cognitive biases for degraded visual input) [66, 67, 107, 108]. Aside from supporting our problem-solving ability, fluid intelligence and various aspects of executive functioning (e.g., working memory) underpin our ability to reason and to see relations among items, including both inductive and deductive logical reasoning. The section on reasoning shows paranormal beliefs are related to conditional and probabilistic reasoning [69, 77, 80, 124–134].
Thus, many of the cognitive deficit-paranormal belief associations may be reframed as the product of a single underlying fluid intelligence-executive component. Going forward, such a model suggests potential avenues of research. One prediction would be that groups of believers and sceptics matched for fluid IQ would be less likely to differ on a range of cognitive tasks.

Limitations of the present review

The current review is the first to assess the quality of studies examining cognitive function and paranormal beliefs. We found study quality to be good-to-strong, with interrater reliability on AXIS ratings being almost perfect (93%). Individual AXIS items, however, are not weighted, and simple comparisons between specific studies based on total summed quality scores should be regarded with caution [181–183]. Thus, two studies with the same total quality score, achieved across different items, might not be comparable because some items may bear more heavily on quality than others. Hence, we have focused on specific domains of strength or weakness across studies.

We acknowledge substantial limitations regarding the classification of studies into six areas of cognitive function: (1) perceptual and cognitive biases, (2) reasoning, (3) intelligence, critical thinking, and academic performance, (4) thinking style, (5) executive function, and (6) other cognitive functions. S9 Table shows that many of the studies could be re-classified and indeed, two-thirds (48/71) could be re-classified as assessing executive functioning. The latter is consistent with our proposal that a substantial proportion of the published studies may be documenting a relationship between paranormal beliefs and higher-level executive function/fluid intelligence.

Our preregistered protocol had an exclusion criterion concerning samples with individuals aged less than 18, and this led to our excluding 11 datasets (see S1 Table for a complete list and details; Aarnio & Lindeman [26], Saher & Lindeman [184], and Lindeman & Aarnio [185] were overlapping or identical samples). A key reason for exclusion was that age impacts both cognitive functions and paranormal beliefs. Certain cognitive functions, for example executive functions, take until late adolescence or early adulthood to mature [186]. Additionally, younger individuals show higher levels of paranormal beliefs [187; for a discussion see Irwin’s review, 53]. While the exclusion of these studies is a potential limitation, their exclusion does not change our key findings or conclusions drawn from this review. In the same context, our lack of an upper age limit exclusion criterion could also be considered a limitation. Sixteen papers (23%) reviewed here included participants aged 65+ (though 25/71 (36%) studies did not report on the age range of participants). While some cognitive functions do not mature until late adolescence or early adulthood, measurable changes in cognitive function occur with normal aging. Performance on certain cognitive tasks has been shown to decline with age, such as those requiring executive functioning (including decision-making, working memory and inhibitory control), visuoperceptual judgement, and fluid intelligence [188, 189]. Such cognitive declines have been associated with age-related reductions of white matter connections in brain regions including the prefrontal cortex [190, 191].

Finally, one limitation is that we were unable to conduct a meta-analysis because of the large variability in outcome measures within and between studies, which makes it challenging to determine the precise outcome being tested. In parallel, the large number of analyses per study means that conclusions from our systematic review regarding findings for specific cognitive domains must be interpreted with some caution.

Conclusions

Our systematic review identified 71 studies spanning: perceptual and cognitive biases, reasoning, intelligence, critical thinking, and academic performance, thinking styles, and executive function. However, the tasks employed to assess performance in each domain often appear to require higher-order executive functions and fluid intelligence. We therefore propose a new, more parsimonious fluid-executive account for future research to consider. Methodological quality is generally good; however, we highlight specific theoretical and methodological weaknesses within the research area. In particular, we recommend future studies preregister their study design and proposed analyses prior to data collection, and address both the heterogeneity issues linked to paranormal belief measures and the reliability of cognitive tasks. We hope these methodological recommendations, alongside the fluid-executive account, will help to further progress our understanding of the relationship between paranormal beliefs and cognitive function.

Supporting information

S1 Appendix. PRISMA abstract checklist.

(DOCX)

S2 Appendix. PRISMA checklist.

(DOCX)

S1 Table. Papers excluded from the review (participants < 18).

Note: Ts = Thinking Style, CPb = Cognitive and Perceptual Biases, O = Other Cognitive Functions, REI = Rational and Experiential Inventory (Epstein et al., 1996), SJQ = Scenario Judgements Questionnaire (Rogers et al., 2016; Rogers et al., 2011), IPO-RT = Inventory of Personality Organization (Lenzenweger et al., 2001), RT = reality testing, ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), ESP = extrasensory perception, LAD = life after death, PK = psychokinesis, NAP = new age philosophy, TPB = traditional paranormal beliefs, RPBS = Revised Paranormal Belief Scale (Tobacyk, 2004; Lange et al., 2000), CKCS = Core Knowledge Confusions scale (Lindeman & Aarnio, 2007; Lindeman et al., 2008), CRT = Cognitive Reflection Test (Frederick, 2005), BRC = base-rate conflict, BRN = base-rate neutral, SREIT = Self-Report Emotional Intelligence Test (Schutte et al., 1998), WCQ = Ways of Coping Questionnaire (Folkman & Lazarus, 1988), IBI = Irrational Beliefs Inventory (Koopmans et al., 1994).

(DOCX)

S2 Table. Studies included in the systematic review concerning perceptual and cognitive biases.

Note: / = information not reported, P = perceptual biases, C = cognitive biases, bl = believers, sc = sceptics, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ESP = extrasensory perception, BADE = bias against disconfirmatory evidence, BACE = bias against confirmatory evidence, TRB = traditional religious beliefs, ELF = extraordinary lifeforms, PRI = Personal Risk Inventory (Hockey et al., 2000), SFQ = Strange-Face Questionnaire (Caputo, 2015), IDAQ = Individual Differences in Anthropomorphism Quotient (Waytz et al., 2010), DS = Dualism Scale (Stanovich, 1989), EQ = Empathy Quotient (Baron-Cohen & Wheelwright, 2004).

(DOCX)

S3 Table. Studies included in the systematic review concerning reasoning.

Note: / = information not reported, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ESP = extrasensory perception, PK = psychokinesis, LAD = life after death, NAP = new age philosophy, DR = deductive reasoning, RTQ = Reasoning Task Questionnaire (Blackmore & Troscianko, 1985), ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), RPBS = Revised Paranormal Belief Scale (Tobacyk, 2004), MMU-N = Manchester Metropolitan University New (Dagnall et al., 2010).

(DOCX)

S4 Table. Studies included in the systematic review concerning intelligence, critical thinking, and academic performance.

Note: / = information not reported, C = cognitive ability, I = intelligence, m = males, f = females, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ATS = Assessment of Thinking Skills (Wesp & Montgomery, 1998), WGCTA-S = Watson-Glaser Critical Thinking Appraisal Form S (Watson & Glaser, 1994), WGCTA = Watson-Glaser Critical Thinking Appraisal (Watson & Glaser, 2002; Watson & Glaser, 1980; Watson & Glaser, 1964), RPM = Raven’s Progressive Matrices (Raven et al., 2000), RPM Rasch Model = Raven’s Progressive Matrices Rasch Model (Rasch, 1960), MHVT = Mill Hill Vocabulary Test (Raven et al., 1998), CCTT = Cornell Critical Thinking Test (Ennis & Millman, 1985), WMT = Wiener Matrizen Test (Formann & Piswanger, 1979), APM = Advanced Progressive Matrices (Raven, 1976), WAIS-IS = Wechsler Adult Intelligence Scale Information Subtest (Wechsler, 1955), GPA = Grade Point Average.

(DOCX)

S5 Table. Studies included in the systematic review concerning thinking style.

Note: / = information not reported, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, AOT = Actively Open-Minded Thinking Scale (Stanovich et al., 2016; Stanovich, 1999), CRT = Cognitive Reflection Test (Frederick, 2005), CRT-2 = Cognitive Reflection Test-2 (Thompson & Oppenheimer, 2016), REI = Rational-Experiential Inventory (Pacini & Epstein, 1999), WST = WordSum Test (Huang & Hauser, 1998), RI = Rational/Experiential Inventory (Norris & Epstein, 2011), IPSI-SF = Information-Processing Style Inventory Short Form (Naito et al., 2004), FIS = Faith in Intuition Scale (Pacini & Epstein 1999), NFC = Need for Cognition scale (Cacioppo et al., 1984), AET = Argument Evaluation Test (Stanovich & West, 1997), 10-Item REI = 10-Item Rational-Experiential Inventory (Epstein et al., 1996), GWT = Gestaltwahrnehmungs Test (Hergovich & Hörndler, 1994), EFT = Embedded Figures Test (Witkin et al., 1971).

(DOCX)

S6 Table. Studies included in the systematic review concerning executive function and memory.

Note: / = information not reported, M = memory, EF = executive function, bl = believers, sc = sceptics, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, DRM = Deese-Roediger-McDermott (Roediger & McDermott, 1995), CRT = Criterial Recollection Task (Gallo, 2013), IIT = Imagination Inflation Task (Garry et al., 1996), RSPAN = Reading-Span Task (Daneman & Carpenter, 1980), OSPAN = Operation Span Task (Turner & Engle, 1989), SILS = Shipley Institute of Living Scale (Zachary, 1986), AET = Argument Evaluation Task (Stanovich & West, 1997), RAT = Remote Associations Test (Mednick, 1962), WCST = Wisconsin Card Sorting Test (Berg, 1948; Grant & Berg, 1948), EFI = Executive Function Index (Spinella, 2005), ANP = anomalous natural phenomena, TRB = traditional religious beliefs, NCQ = News Coverage Questionnaire (Wilson & French, 2006), ASGS = Australian Sheep-Goat Scale (Thalbourne 1995; Thalbourne & Delin, 1993), AEI = Anomalous Experiences Inventory (Kumar et al., 1994).

(DOCX)

S7 Table. Studies included in the review concerning other cognitive functions.

Note: / = information not reported, bl = believers, sc = sceptics, f = females, m = males, ISL = implicit sequence learning, ISP = implicit semantic priming, VF = visual field, LVF = left visual field, RVF = right visual field, CME = central monitoring efficiency, RE = reasoning errors, CC = cognitive complexity, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, SPQ-B = Schizotypal Personality Questionnaire Brief (Raine & Benishay, 1995), RCRG = Role Construct Repertory Grid (Kelly, 1955).

(DOCX)

S8 Table. Measures of paranormal beliefs used in the 71 studies included in the review.

Note: † = papers that provided reliability statistics for their novel scales, ‡ = used a translated version of the original scale, * = Musch & Ehrenberg (2002) developed a novel scale that was later named the BPS and was used in two subsequent studies. RPBS = Revised Paranormal Belief Scale (Tobacyk 1988; 2004), ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), PBS = Paranormal Belief Scale (Tobacyk & Milford, 1982), Rasch RPBS = Rasch devised Revised Paranormal Belief Scale (Lange et al., 2000), BPS-O = Belief in the Paranormal Scale (Original; Jones et al., 1977), BPS = Belief in the Paranormal Scale (Musch & Ehrenberg, 2002), MMU-N = Manchester Metropolitan University New (see Dagnall et al., 2010), MMU-PS = Manchester Metropolitan University Paranormal Scale (see Dagnall et al., 2010), SSUB = Survey of Scientifically Unsubstantiated Beliefs (Irwin & Marks, 2013), OS = Occultism Scale (Böttinger, 1976), PS = Paranormal Scale (Orenstein, 2002), AEI = Anomalous Experiences Inventory (Gallagher et al., 1994; includes a ‘belief’ subscale).

(DOCX)

S9 Table. Alternate categorisations for studies included in the review.

Note: = original category, ✓ = alternate category.

(DOCX)

Data Availability

All data files relating to the quality assessment are available from the OSF repository (https://osf.io/7bthg/). Data relating to the 71 reviewed studies can be found within the paper's Supporting Information files.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.Broad CD. The relevance of psychical research to philosophy. Philosophy. 1949. Oct;24(91):291–309. [Google Scholar]
  • 2.BMG Research. BMG Halloween Poll: A third of Brits believe in ghosts, spirits, or other types of paranormal activity [Internet]. Birmingham (UK): BMG Research; 2017. Oct [cited 2021 Oct]. Available from: https://www.bmgresearch.co.uk/bmg-halloween-poll-third-brits-believe-ghosts-spirits-types-paranormal-activity/ [Google Scholar]
  • 3.Pechey R, Halligan P. The prevalence of delusion-like beliefs relative to sociocultural beliefs in the general population. Psychopathology. 2011;44(2):106–15. doi: 10.1159/000319788 [DOI] [PubMed] [Google Scholar]
  • 4.Pérez Navarro JM, Martínez Guerra X. Personality, cognition, and morbidity in the understanding of paranormal belief. PsyCh journal. 2020. Feb;9(1):118–31. doi: 10.1002/pchj.295 [DOI] [PubMed] [Google Scholar]
  • 5.Eder E, Turic K, Milasowszky N, Van Adzin K, Hergovich A. The relationships between paranormal belief, creationism, intelligent design and evolution at secondary schools in Vienna (Austria). Science & Education. 2011. May;20(5):517–34. [Google Scholar]
  • 6.Göritz AS, Schumacher J. The WWW as a research medium: An illustrative survey on paranormal belief. Perceptual and Motor Skills. 2000. Jun;90(3_suppl):1195–206. doi: 10.2466/pms.2000.90.3c.1195 [DOI] [PubMed] [Google Scholar]
  • 7.Clarke D. Belief in the paranormal: A New Zealand survey. Journal of the Society for Psychical Research. 1991. Apr. [Google Scholar]
  • 8.Tobacyk J, Milford G. Belief in paranormal phenomena: assessment instrument development and implications for personality functioning. Journal of personality and social psychology. 1983. May;44(5):1029. [Google Scholar]
  • 9.Tobacyk JJ. A revised paranormal belief scale. The International Journal of Transpersonal Studies. 2004;23(23):94–8. [Google Scholar]
  • 10.Thalbourne MA, Delin PS. A new instrument for measuring the sheep-goat variable: its psychometric properties and factor structure. Journal of the Society for Psychical Research. 1993. Jul. [Google Scholar]
  • 11.Drinkwater K, Denovan A, Dagnall N, Parker A. The Australian sheep-goat scale: an evaluation of factor structure and convergent validity. Frontiers in psychology. 2018. Aug 28;9:1594. doi: 10.3389/fpsyg.2018.01594 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Drinkwater K, Denovan A, Dagnall N, Parker A. An assessment of the dimensionality and factorial structure of the revised paranormal belief scale. Frontiers in Psychology. 2017. Sep 26;8:1693. doi: 10.3389/fpsyg.2017.01693 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Wiseman R, Watt C. Measuring superstitious belief: Why lucky charms matter. Personality and individual differences. 2004. Dec 1;37(8):1533–41. [Google Scholar]
  • 14.Pennycook G, Cheyne JA, Barr N, Koehler DJ, Fugelsang JA. On the reception and detection of pseudo-profound bullshit. Judgment and Decision making. 2015. Nov 1;10(6):549–63. [Google Scholar]
  • 15.Willard AK, Norenzayan A. Cognitive biases explain religious belief, paranormal belief, and belief in life’s purpose. Cognition. 2013. Nov 1;129(2):379–91. doi: 10.1016/j.cognition.2013.07.016 [DOI] [PubMed] [Google Scholar]
  • 16.Lindeman M. Biases in intuitive reasoning and belief in complementary and alternative medicine. Psychology and Health. 2011. Mar 1;26(3):371–82. doi: 10.1080/08870440903440707 [DOI] [PubMed] [Google Scholar]
  • 17.Brotherton R, French CC, Pickering AD. Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in psychology. 2013. May 21;4:279. doi: 10.3389/fpsyg.2013.00279 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Cardeña E, Marcusson-Clavertz D, Wasmuth J. Hypnotizability and dissociation as predictors of performance in a precognition task: A pilot study. Journal of Parapsychology. 2009. Mar 1;73(1). [Google Scholar]
  • 19.Lobato E, Mendoza J, Sims V, Chin M. Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Applied Cognitive Psychology. 2014. Sep;28(5):617–25. [Google Scholar]
  • 20.Williams E, Francis LJ, Robbins M. Personality and paranormal belief: A study among adolescents. Pastoral Psychology. 2007. Sep 1;56(1):9–14. [Google Scholar]
  • 21.Peltzer K. Paranormal beliefs and personality among black South African students. Social behavior and personality. 2002. May 20;30(4):391. [Google Scholar]
  • 22.Bader CD, Baker JO, Molle A. Countervailing forces: Religiosity and paranormal belief in Italy. Journal for the Scientific Study of Religion. 2012. Dec;51(4):705–20. [Google Scholar]
  • 23.Van den Bulck J, Custers K. Belief in complementary and alternative medicine is related to age and paranormal beliefs in adults. European Journal of Public Health. 2010. Apr 1;20(2):227–30. doi: 10.1093/eurpub/ckp104 [DOI] [PubMed] [Google Scholar]
  • 24.Sparks G, Miller W. Investigating the relationship between exposure to television programs that depict paranormal phenomena and beliefs in the paranormal. Communication Monographs. 2001. Mar 1;68(1):98–113. [Google Scholar]
  • 25.Andrews RA, Tyson P. The superstitious scholar: Paranormal belief within a student population and its relationship to academic ability and discipline. Journal of Applied Research in Higher Education. 2019. Jul 1. [Google Scholar]
  • 26.Aarnio K, Lindeman M. Paranormal beliefs, education, and thinking styles. Personality and individual differences. 2005. Nov 1;39(7):1227–36. [Google Scholar]
  • 27.Ward SJ, King LA. Examining the roles of intuition and gender in magical beliefs. Journal of Research in Personality. 2020. Jun 1;86:103956. [Google Scholar]
  • 28.Rogers P, Fisk JE, Lowrie E. Paranormal belief, thinking style preference and susceptibility to confirmatory conjunction errors. Consciousness and Cognition. 2018. Oct 1;65:182–96. doi: 10.1016/j.concog.2018.07.013 [DOI] [PubMed] [Google Scholar]
  • 29.van Elk M. The self-attribution bias and paranormal beliefs. Consciousness and cognition. 2017. Mar 1;49:313–21. doi: 10.1016/j.concog.2017.02.001 [DOI] [PubMed] [Google Scholar]
  • 30.Rogers P, Qualter P, Wood D. The impact of event vividness, event severity, and prior paranormal belief on attributions towards a depicted remarkable coincidence experience: Two studies examining the misattribution hypothesis. British Journal of Psychology. 2016. Nov;107(4):710–51. doi: 10.1111/bjop.12173 [DOI] [PubMed] [Google Scholar]
  • 31.Voracek M. Who wants to believe? Associations between digit ratio (2D: 4D) and paranormal and superstitious beliefs. Personality and Individual Differences. 2009. Jul 1;47(2):105–9. [Google Scholar]
  • 32.Watt C, Watson S, Wilson L. Cognitive and psychological mediators of anxiety: Evidence from a study of paranormal belief and perceived childhood control. Personality and Individual Differences. 2007. Jan 1;42(2):335–43. [Google Scholar]
  • 33.Lange R, Irwin HJ, Houran J. Objective measurement of paranormal belief: A rebuttal to Vitulli. Psychological Reports. 2001. Jun;88(3):641–4. doi: 10.2466/pr0.2001.88.3.641 [DOI] [PubMed] [Google Scholar]
  • 34.Vitulli WF, Tipton SM, Rowe JL. Beliefs in the paranormal: Age and sex differences among elderly persons and undergraduate students. Psychological Reports. 1999. Dec;85(3):847–55. doi: 10.2466/pr0.1999.85.3.847 [DOI] [PubMed] [Google Scholar]
  • 35.Irwin HJ. Age and sex differences in paranormal beliefs: a response to Vitulli, Tipton, and Rowe (1999). Psychological Reports. 2000. Apr;86(2):595–6. doi: 10.2466/pr0.2000.86.2.595 [DOI] [PubMed] [Google Scholar]
  • 36.Vitulli WF. Rejoinder to Irwin’s (2000) “Age and Sex Differences in Paranormal Beliefs: A Response to Vitulli, Tipton, and Rowe (1999)”. Psychological reports. 2000. Oct;87(2):699–700. doi: 10.2466/pr0.2000.87.2.699 [DOI] [PubMed] [Google Scholar]
  • 37.Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive psychology. 2000. Aug 1;41(1):49–100. doi: 10.1006/cogp.1999.0734 [DOI] [PubMed] [Google Scholar]
  • 38.Hosseini S, Chaurasia A, Oremus M. The effect of religion and spirituality on cognitive function: A systematic review. The Gerontologist. 2019. Mar 14;59(2):e76–85. doi: 10.1093/geront/gnx024 [DOI] [PubMed] [Google Scholar]
  • 39.Kaufman Y, Anaki D, Binns M, Freedman M. Cognitive decline in Alzheimer disease: Impact of spirituality, religiosity, and QOL. Neurology. 2007. May 1;68(18):1509–14. doi: 10.1212/01.wnl.0000260697.66617.59 [DOI] [PubMed] [Google Scholar]
  • 40.Kraal AZ, Sharifian N, Zaheed AB, Sol K, Zahodne LB. Dimensions of religious involvement represent positive pathways in cognitive aging. Research on aging. 2019. Oct;41(9):868–90. doi: 10.1177/0164027519862745 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Ritchie SJ, Gow AJ, Deary IJ. Religiosity is negatively associated with later-life intelligence, but not with age-related cognitive decline. Intelligence. 2014. Sep 1;46:9–17. doi: 10.1016/j.intell.2014.04.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Zuckerman M, Silberman J, Hall JA. The relation between intelligence and religiosity: A meta-analysis and some proposed explanations. Personality and social psychology review. 2013. Nov;17(4):325–54. doi: 10.1177/1088868313497266 [DOI] [PubMed] [Google Scholar]
  • 43.Georgiou N, Delfabbro P, Balzan R. Conspiracy beliefs in the general population: The importance of psychopathology, cognitive style and educational attainment. Personality and Individual Differences. 2019. Dec 1;151:109521. [Google Scholar]
  • 44.Mikušková EB. Conspiracy beliefs of future teachers. Current Psychology. 2018. Sep;37(3):692–701. [Google Scholar]
  • 45.van der Wal RC, Sutton RM, Lange J, Braga JP. Suspicious binds: Conspiracy thinking and tenuous perceptions of causal connections between co‐occurring and spuriously correlated events. European Journal of Social Psychology. 2018. Dec;48(7):970–89. doi: 10.1002/ejsp.2507 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.van Prooijen JW, Douglas KM, De Inocencio C. Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. European journal of social psychology. 2018. Apr;48(3):320–35. doi: 10.1002/ejsp.2331 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Su Y, Lee DK, Xiao X, Li W, Shu W. Who endorses conspiracy theories? A moderated mediation model of Chinese and international social media use, media skepticism, need for cognition, and COVID-19 conspiracy theory endorsement in China. Computers in Human Behavior. 2021. Jul 1;120:106760. doi: 10.1016/j.chb.2021.106760 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Denovan A, Dagnall N, Drinkwater K, Parker A, Neave N. Conspiracist beliefs, intuitive thinking, and schizotypal facets: a further evaluation. Applied Cognitive Psychology. 2020. Nov;34(6):1394–405. [Google Scholar]
  • 49.Alper S, Bayrak F, Yilmaz O. Psychological correlates of COVID-19 conspiracy beliefs and preventive measures: Evidence from Turkey. Current psychology. 2020. Jun 29:1–0. doi: 10.1007/s12144-020-00903-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Georgiou N, Delfabbro P, Balzan R. Conspiracy-Beliefs and Receptivity to Disconfirmatory Information: A Study Using the BADE Task. SAGE Open. 2021. Mar;11(1):21582440211006131. [Google Scholar]
  • 51.Lamberty PK, Hellmann JH, Oeberst A. The winner knew it all? Conspiracy beliefs and hindsight perspective after the 2016 US general election. Personality and Individual Differences. 2018. Mar 1;123:236–40. [Google Scholar]
  • 52.Collins L. Bullspotting: finding facts in the age of misinformation. Prometheus Books; 2012. Oct 30. [Google Scholar]
  • 53.Irwin HJ. Belief in the paranormal: A review of the empirical literature. Journal of the american society for Psychical research. 1993. Jan 1;87(1):1–39. [Google Scholar]
  • 54.Briner RB, Denyer D. Systematic review and evidence synthesis as a practice and scholarship tool. Handbook of evidence-based management: Companies, classrooms and research. 2012. Nov:112–29. [Google Scholar]
  • 55.Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Bmj. 2021. Mar 29;372. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Mallett R, Hagen-Zanker J, Slater R, Duvendack M. The benefits and challenges of using systematic reviews in international development research. Journal of development effectiveness. 2012. Sep 1;4(3):445–55. [Google Scholar]
  • 57.Harari MB, Parola HR, Hartwell CJ, Riegelman A. Literature searches in systematic reviews and meta-analyses: A review, evaluation, and recommendations. Journal of Vocational Behavior. 2020. Apr 1;118:103377. [Google Scholar]
  • 58.Hartshorne JK, Germine LT. When does cognitive functioning peak? The asynchronous rise and fall of different cognitive abilities across the life span. Psychological science. 2015. Apr;26(4):433–43. doi: 10.1177/0956797614567339 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Greening EK. The relationship between false memory and paranormal belief [dissertation]. Hertfordshire, UK: University of Hertfordshire; 2002. [Google Scholar]
  • 60.Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ open. 2016. Dec 1;6(12):e011458. doi: 10.1136/bmjopen-2016-011458 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Lannoy S, Duka T, Carbia C, Billieux J, Fontesse S, Dormal V, et al. Emotional processes in binge drinking: A systematic review and perspective. Clinical Psychology Review. 2021. Jan 13:101971. doi: 10.1016/j.cpr.2021.101971 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Cohen J. Statistical power analysis for the social sciences. 2nd ed. New York: Lawrence Erlbaum Associates; 1988. [Google Scholar]
  • 63.Brugger P, Regard M, Landis T. Belief in extrasensory perception and illusory control: A replication. Journal of Psychology. 1991. Jul 1;125(4):501–2. [Google Scholar]
  • 64.Prike T, Arnold MM, Williamson P. The relationship between anomalistic belief and biases of evidence integration and jumping to conclusions. Acta psychologica. 2018. Oct 1;190:217–27. doi: 10.1016/j.actpsy.2018.08.006 [DOI] [PubMed] [Google Scholar]
  • 65.Schienle A, Vaitl D, Stark R. Covariation bias and paranormal belief. Psychological reports. 1996. Feb;78(1):291–305. doi: 10.2466/pr0.1996.78.1.291 [DOI] [PubMed] [Google Scholar]
  • 66.Van Elk M. Perceptual biases in relation to paranormal and conspiracy beliefs. PloS one. 2015. Jun 26;10(6):e0130422. doi: 10.1371/journal.pone.0130422 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Simmonds-Moore C. Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena. Consciousness and cognition. 2014. Aug 1;28:30–46. doi: 10.1016/j.concog.2014.06.004 [DOI] [PubMed] [Google Scholar]
  • 68.Irwin HJ, Drinkwater K, Dagnall N. Are believers in the paranormal inclined to jump to conclusions?. Australian Journal of Parapsychology. 2014. Jun;14(1):69–82. [Google Scholar]
  • 69.Denovan A, Dagnall N, Drinkwater K, Parker A. Latent profile analysis of schizotypy and paranormal belief: Associations with probabilistic reasoning performance. Frontiers in psychology. 2018. Jan 26;9:35. doi: 10.3389/fpsyg.2018.00035 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Wilson JA. Reducing pseudoscientific and paranormal beliefs in university students through a course in science and critical thinking. Science & Education. 2018. Mar;27(1):183–210. [Google Scholar]
  • 71.Betsch T, Aßmann L, Glöckner A. Paranormal beliefs and individual differences: story seeking without reasoned review. Heliyon. 2020. Jun 1;6(6):e04259. doi: 10.1016/j.heliyon.2020.e04259 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Irwin HJ. Thinking style and the making of a paranormal disbelief. Journal of the Society for Psychical Research. 2015. Jul 1;79(920):129–39. [Google Scholar]
  • 73.Krummenacher P, Mohr C, Haker H, Brugger P. Dopamine, paranormal belief, and the detection of meaningful stimuli. Journal of Cognitive Neuroscience. 2010. Aug 1;22(8):1670–81. doi: 10.1162/jocn.2009.21313 [DOI] [PubMed] [Google Scholar]
  • 74.Ballová Mikušková E, Čavojová V. The effect of analytic cognitive style on credulity. Frontiers in psychology. 2020. Oct 15;11:2682. doi: 10.3389/fpsyg.2020.584424 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Laws KR. Negativland-a home for all findings in psychology. BMC psychology. 2013. Dec;1(1):1–8. doi: 10.1186/2050-7283-1-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Cardeña E. The experimental evidence for parapsychological phenomena: A review. American Psychologist. 2018. Jul;73(5):663–77. doi: 10.1037/amp0000236 [DOI] [PubMed] [Google Scholar]
  • 77.Blackmore SJ. Probability misjudgment and belief in the paranormal: A newspaper survey. British Journal of Psychology. 1997. Nov;88(4):683–9. [Google Scholar]
  • 78.Lawrence E, Peters E. Reasoning in believers in the paranormal. The Journal of nervous and mental disease. 2004. Nov 1;192(11):727–33. doi: 10.1097/01.nmd.0000144691.22135.d0 [DOI] [PubMed] [Google Scholar]
  • 79.Gray SJ, Gallo DA. Paranormal psychic believers and skeptics: a large-scale test of the cognitive differences hypothesis. Memory & cognition. 2016. Feb;44(2):242–61. doi: 10.3758/s13421-015-0563-x [DOI] [PubMed] [Google Scholar]
  • 80.Prike T, Arnold MM, Williamson P. Psychics, aliens, or experience? Using the Anomalistic Belief Scale to examine the relationship between type of belief and probabilistic reasoning. Consciousness and cognition. 2017. Aug 1;53:151–64. doi: 10.1016/j.concog.2017.06.003 [DOI] [PubMed] [Google Scholar]
  • 81.Ståhl T, Van Prooijen JW. Epistemic rationality: Skepticism toward unfounded beliefs requires sufficient cognitive ability and motivation to be rational. Personality and Individual Differences. 2018. Feb 1;122:155–63. [Google Scholar]
  • 82.Stroebe W, Gadenne V, Nijstad BA. Do our psychological laws apply only to college students?: External validity revisited. Basic and Applied Social Psychology. 2018. Nov 2;40(6):384–95. [Google Scholar]
  • 83.Dean CE, Akhtar S, Gale TM, Irvine K, Wiseman R, Laws KR. Development of the Paranormal and Supernatural Beliefs Scale using classical and modern test theory. BMC Psychology. 2021. Dec;9(1):1–20. doi: 10.1186/s40359-020-00492-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Lange R, Thalbourne MA. Rasch scaling paranormal belief and experience: Structure and semantics of Thalbourne’s Australian Sheep-Goat Scale. Psychological Reports. 2002. Dec;91(3_suppl):1065–73. doi: 10.2466/pr0.2002.91.3f.1065 [DOI] [PubMed] [Google Scholar]
  • 85.Lange R, Irwin HJ, Houran J. Top-down purification of Tobacyk’s revised paranormal belief scale. Personality and individual differences. 2000. Jul 1;29(1):131–56. [Google Scholar]
  • 86.Werner S, Praxedes M, Kim HG. The reporting of nonresponse analyses in survey research. Organizational Research Methods. 2007. Apr;10(2):287–95. [Google Scholar]
  • 87.Hawkins DF. Estimation of nonresponse bias. Sociological Methods & Research. 1975. May;3(4):461–88. [Google Scholar]
  • 88.Tobacyk J. Cognitive complexity and paranormal beliefs. Psychological Reports. 1983. Feb;52(1):101–2. [Google Scholar]
  • 89.Prince M. Epidemiology. In Wright P, Stern J, Phelan M, editors. Core Psychiatry. Elsevier Ltd; 2012. p. 115–29. [Google Scholar]
  • 90.Lindeman M, Svedholm‐Häkkinen AM. Does poor understanding of physical world predict religious and paranormal beliefs?. Applied Cognitive Psychology. 2016. Sep;30(5):736–42. [Google Scholar]
  • 91.Kontto J, Tolonen H, Salonen AH. What are we missing? The profile of non-respondents in the Finnish gambling 2015 survey. Scandinavian journal of public health. 2020. Feb;48(1):80–7. doi: 10.1177/1403494819849283 [DOI] [PubMed] [Google Scholar]
  • 92.Tolonen H, Dobson A, Kulathinal S, WHO Monica Project. Effect on trend estimates of the difference between survey respondents and non-respondents: results from 27 populations in the WHO MONICA Project. European journal of epidemiology. 2005. Nov 1;20(11):887–98. doi: 10.1007/s10654-005-2672-5 [DOI] [PubMed] [Google Scholar]
  • 93.Dalecki MG, Whitehead JC, Blomquist GC. Sample non-response bias and aggregate benefits in contingent valuation: an examination of early, late and non-respondents. Journal of Environmental Management. 1993. Jun 1;38(2):133–43. [Google Scholar]
  • 94.Gannon MJ, Nothern JC, Carroll SJ. Characteristics of nonrespondents among workers. Journal of Applied Psychology. 1971. Dec;55(6):586–8. [Google Scholar]
  • 95.Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of medical Internet research. 2004. Sep 29;6(3):e132. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Ioannidis JP. Limitations are not properly acknowledged in the scientific literature. Journal of clinical epidemiology. 2007. Apr 1;60(4):324–9. doi: 10.1016/j.jclinepi.2006.09.011 [DOI] [PubMed] [Google Scholar]
  • 97.Horton R. The hidden research paper. Jama. 2002. Jun 5;287(21):2775–8. doi: 10.1001/jama.287.21.2775 [DOI] [PubMed] [Google Scholar]
  • 98.Blackmore S, Trościanko T. Belief in the paranormal: Probability judgements, illusory control, and the ‘chance baseline shift’. British journal of Psychology. 1985. Nov;76(4):459–68. [Google Scholar]
  • 99.Watson G. Watson-Glaser critical thinking appraisal. San Antonio, TX: Psychological Corporation; 1980. [Google Scholar]
  • 100.Raven JS. Raven’s Matrices: The Advanced Progressive Matrices Set 1. Oxford, England: Oxford Psychologists Press; 1976. [Google Scholar]
  • 101.Raven J, Raven JC, Court JH. Manual for standard progressive matrices 2000 edition. Oxford, England: Oxford Psychologists Press; 2000. [Google Scholar]
  • 102.Pacini R, Epstein S. The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon. Journal of personality and social psychology. 1999. Jun;76(6):972–87. doi: 10.1037//0022-3514.76.6.972 [DOI] [PubMed] [Google Scholar]
  • 103.Frederick S. Cognitive reflection and decision making. Journal of Economic perspectives. 2005. Dec;19(4):25–42. [Google Scholar]
  • 104.Roediger HL, McDermott KB. Creating false memories: Remembering words not presented in lists. Journal of experimental psychology: Learning, Memory, and Cognition. 1995. Jul;21(4):803–14. [Google Scholar]
  • 105.Berg EA. A simple objective technique for measuring flexibility in thinking. The Journal of general psychology. 1948. Jul 1;39(1):15–22. doi: 10.1080/00221309.1948.9918159 [DOI] [PubMed] [Google Scholar]
  • 106.Grant DA, Berg E. A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. Journal of experimental psychology. 1948. Aug;38(4):404–11. doi: 10.1037/h0059831 [DOI] [PubMed] [Google Scholar]
  • 107.Blackmore S, Moore R. Seeing things: Visual recognition and belief in the paranormal. European Journal of Parapsychology. 1994;10(1994):91–103. [Google Scholar]
  • 108.Riekki T, Lindeman M, Aleneff M, Halme A, Nuortimo A. Paranormal and religious believers are more prone to illusory face perception than skeptics and non‐believers. Applied Cognitive Psychology. 2013. Mar;27(2):150–5. [Google Scholar]
  • 109.Caputo GB. Strange-face illusions during interpersonal-gazing and personality differences of spirituality. Explore. 2017. Nov 1;13(6):379–85. doi: 10.1016/j.explore.2017.04.019 [DOI] [PubMed] [Google Scholar]
  • 110.Caputo GB. Dissociation and hallucinations in dyads engaged through interpersonal gazing. Psychiatry research. 2015. Aug 30;228(3):659–63. doi: 10.1016/j.psychres.2015.04.050 [DOI] [PubMed] [Google Scholar]
  • 111.Van Elk M. Paranormal believers are more prone to illusory agency detection than skeptics. Consciousness and cognition. 2013. Sep 1;22(3):1041–6. doi: 10.1016/j.concog.2013.07.004 [DOI] [PubMed] [Google Scholar]
  • 112.Griffiths O, Shehabi N, Murphy RA, Le Pelley ME. Superstition predicts perception of illusory control. British Journal of Psychology. 2019. Aug;110(3):499–518. doi: 10.1111/bjop.12344 [DOI] [PubMed] [Google Scholar]
  • 113.Blanco F, Barberia I, Matute H. Individuals who believe in the paranormal expose themselves to biased information and develop more causal illusions than nonbelievers in the laboratory. PloS one. 2015. Jul 15;10(7):e0131378. doi: 10.1371/journal.pone.0131378 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Rudski J. The illusion of control, superstitious belief, and optimism. Current Psychology. 2004. Dec 1;22(4):306–15. [Google Scholar]
  • 115.Drinkwater K, Dagnall N, Denovan A, Parker A. The moderating effect of mental toughness: perception of risk and belief in the paranormal. Psychological reports. 2019. Feb;122(1):268–87. doi: 10.1177/0033294118756600 [DOI] [PubMed] [Google Scholar]
  • 116.Gagné H, McKelvie SJ. Effects of paranormal beliefs on response bias and self-assessment of performance in a signal detection task. Australian journal of psychology. 1990. Aug 1;42(2):187–95. [Google Scholar]
  • 117.Dudley RE, John CH, Young AW, Over DE. Normal and abnormal reasoning in people with delusions. British Journal of Clinical Psychology. 1997. May;36(2):243–58. doi: 10.1111/j.2044-8260.1997.tb01410.x [DOI] [PubMed] [Google Scholar]
  • 118.Peters E, Joseph S, Day S, Garety P. Measuring delusional ideation: the 21-item Peters et al. Delusions Inventory (PDI). Schizophrenia bulletin. 2004. Jan 1;30(4):1005–22. doi: 10.1093/oxfordjournals.schbul.a007116 [DOI] [PubMed] [Google Scholar]
  • 119.Lesaffre L, Kuhn G, Jopp DS, Mantzouranis G, Diouf CN, Rochat D, et al. Talking to the Dead in the Classroom: How a Supposedly Psychic Event Impacts Beliefs and Feelings. Psychological Reports. 2020. Sep 5:0033294120961068. [DOI] [PubMed] [Google Scholar]
  • 120.Barberia I, Tubau E, Matute H, Rodríguez-Ferreiro J. A short educational intervention diminishes causal illusions and specific paranormal beliefs in undergraduates. PLoS One. 2018. Jan 31;13(1):e0191907. doi: 10.1371/journal.pone.0191907 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Garety PA, Freeman D, Jolley S, Dunn G, Bebbington PE, Fowler DG, et al. Reasoning, emotions, and delusional conviction in psychosis. Journal of abnormal psychology. 2005. Aug;114(3):373. doi: 10.1037/0021-843X.114.3.373 [DOI] [PubMed] [Google Scholar]
  • 122.Peters E, Moritz S, Wiseman Z, Greenwood K, Kuipers E, Schwannauer M, et al. The cognitive biases questionnaire for psychosis (CBQP). Schizophrenia Research [Internet]. 2010 Apr [cited 2022 Mar];117(2–3):413. Available from: https://www.sciencedirect.com/science/article/pii/S0920996410008443?via%3Dihub doi: 10.1016/j.schres.2010.02.759 [DOI] [Google Scholar]
  • 123.Peters ER, Moritz S, Schwannauer M, Wiseman Z, Greenwood KE, Scott J, et al. Cognitive biases questionnaire for psychosis. Schizophrenia bulletin. 2014. Mar 1;40(2):300–13. doi: 10.1093/schbul/sbs199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Dagnall N, Denovan A, Drinkwater K, Parker A, Clough P. Toward a better understanding of the relationship between belief in the paranormal and statistical bias: the potential role of schizotypy. Frontiers in psychology. 2016. Jul 14;7:1045. doi: 10.3389/fpsyg.2016.01045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 125.Dagnall N, Drinkwater K, Denovan A, Parker A, Rowley K. Misperception of chance, conjunction, framing effects and belief in the paranormal: a further evaluation. Applied Cognitive Psychology. 2016. May;30(3):409–19. [Google Scholar]
  • 126.Dagnall N, Parker A, Munley G. Paranormal belief and reasoning. Personality and Individual Differences. 2007. Oct 1;43(6):1406–15. [Google Scholar]
  • 127.Dagnall N, Drinkwater K, Parker A, Rowley K. Misperception of chance, conjunction, belief in the paranormal and reality testing: a reappraisal. Applied Cognitive Psychology. 2014. Sep;28(5):711–9. [Google Scholar]
  • 128.Rogers P, Davis T, Fisk J. Paranormal belief and susceptibility to the conjunction fallacy. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition. 2009. May;23(4):524–42 [Google Scholar]
  • 129.Rogers P, Fisk JE, Lowrie E. Paranormal believers’ susceptibility to confirmatory versus disconfirmatory conjunctions. Applied Cognitive Psychology. 2016. Jul;30(4):628–34. [Google Scholar]
  • 130.Musch J, Ehrenberg K. Probability misjudgment, cognitive ability, and belief in the paranormal. British Journal of Psychology. 2002. May;93(2):169–77. [DOI] [PubMed] [Google Scholar]
  • 131.Brugger P, Landis T, Regard M. A ‘sheep‐goat effect’ in repetition avoidance: Extra‐sensory perception as an effect of subjective probability?. British Journal of Psychology. 1990. Nov;81(4):455–68. [Google Scholar]
  • 132.Bressan P. The connection between random sequences, everyday coincidences, and belief in the paranormal. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition. 2002. Jan;16(1):17–34. [Google Scholar]
  • 133.Roberts MJ, Seager PB. Predicting belief in paranormal phenomena: A comparison of conditional and probabilistic reasoning. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition. 1999. Oct;13(5):443–50. [Google Scholar]
  • 134.Wierzbicki M. Reasoning errors and belief in the paranormal. The Journal of Social Psychology. 1985. Aug 1;125(4):489–94. [Google Scholar]
  • 135.McLean CP, Miller NA. Changes in critical thinking skills following a course on science and pseudoscience: A quasi-experimental study. Teaching of Psychology. 2010. Apr;37(2):85–90. [Google Scholar]
  • 136.Alcock JE, Otis LP. Critical thinking and belief in the paranormal. Psychological Reports. 1980. Apr;46(2):479–82. [Google Scholar]
  • 137.Watson G, Glaser EM. Critical Thinking Appraisal. New York: Harcourt, Brace & World; 1964. [Google Scholar]
  • 138.Morgan RK, Morgan DL. Critical thinking and belief in the paranormal. College Student Journal. 1998;32(1):135–9. [Google Scholar]
  • 139.Hergovich A, Arendasy M. Critical thinking ability and belief in the paranormal. Personality and Individual Differences. 2005. Jun 1;38(8):1805–12. [Google Scholar]
  • 140.Roe CA. Critical thinking and belief in the paranormal: A re‐evaluation. British Journal of Psychology. 1999. Feb;90(1):85–98. doi: 10.1348/000712699161288 [DOI] [PubMed] [Google Scholar]
  • 141.Royalty J. The generalizability of critical thinking: Paranormal beliefs versus statistical reasoning. The Journal of Genetic Psychology. 1995. Dec 1;156(4):477–88. [Google Scholar]
  • 142.Formann AK, Piswanger K. WMT—Wiener Matrizentest. Weinheim: Beltz Test; 1979. [Google Scholar]
  • 143.Tobacyk J. Paranormal belief and college grade point average. Psychological Reports. 1984. Feb;54(1):217–8. [Google Scholar]
  • 144.Smith MD, Foster CL, Stovin G. Intelligence and paranormal belief: Examining the role of context. The Journal of Parapsychology. 1998. Mar 1;62(1):65–78. [Google Scholar]
  • 145.Wechsler D. Wechsler adult intelligence scale. Archives of Clinical Neuropsychology; 1955. doi: 10.1037/h0044391 [DOI] [PubMed] [Google Scholar]
  • 146.Jackson O. Manual for the Multidimensional Aptitude Battery. Port Huron, MI: Research Psychologists Press; 1985. [Google Scholar]
  • 147.Stuart-Hamilton I, Nayak L, Priest L. Intelligence, belief in the paranormal, knowledge of probability and aging. Educational Gerontology. 2006. Mar 1;32(3):173–84. [Google Scholar]
  • 148.Branković M. Who believes in ESP: Cognitive and motivational determinants of the belief in extra-sensory perception. Europe’s journal of psychology. 2019. Feb;15(1):120–39. doi: 10.5964/ejop.v15i1.1689 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 149.Lasikiewicz N. Perceived stress, thinking style, and paranormal belief. Imagination, cognition and personality. 2016. Mar;35(3):306–20. [Google Scholar]
  • 150.Majima Y. Belief in pseudoscience, cognitive style and science literacy. Applied Cognitive Psychology. 2015. Jul;29(4):552–9. [Google Scholar]
  • 151.Svedholm AM, Lindeman M. The separate roles of the reflective mind and involuntary inhibitory control in gatekeeping paranormal beliefs and the underlying intuitive confusions. British Journal of Psychology. 2013. Aug;104(3):303–19. doi: 10.1111/j.2044-8295.2012.02118.x [DOI] [PubMed] [Google Scholar]
  • 152.Genovese JE. Paranormal beliefs, schizotypy, and thinking styles among teachers and future teachers. Personality and Individual Differences. 2005. Jul 1;39(1):93–102. [Google Scholar]
  • 153.Rogers P, Hattersley M, French CC. Gender role orientation, thinking style preference and facets of adult paranormality: A mediation analysis. Consciousness and cognition. 2019. Nov 1;76:102821. doi: 10.1016/j.concog.2019.102821 [DOI] [PubMed] [Google Scholar]
  • 154.Rizeq J, Flora DB, Toplak ME. An examination of the underlying dimensional structure of three domains of contaminated mindware: paranormal beliefs, conspiracy beliefs, and anti-science attitudes. Thinking & Reasoning. 2021. Apr 3;27(2):187–211. [Google Scholar]
  • 155.Norris P, Epstein S. An experiential thinking style: Its facets and relations with objective and subjective criterion measures. Journal of personality. 2011. Oct;79(5):1043–80. doi: 10.1111/j.1467-6494.2011.00718.x [DOI] [PubMed] [Google Scholar]
  • 156.Stanovich KE, West RF. Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology. 1997. Jun;89(2):342–57. [Google Scholar]
  • 157.Sá WC, West RF, Stanovich KE. The domain specificity and generality of belief bias: Searching for a generalizable critical thinking skill. Journal of educational psychology. 1999. Sep;91(3):497. [Google Scholar]
  • 158.Gianotti LR, Mohr C, Pizzagalli D, Lehmann D, Brugger P. Associative processing and paranormal belief. Psychiatry and clinical neurosciences. 2001. Dec;55(6):595–603. doi: 10.1046/j.1440-1819.2001.00911.x [DOI] [PubMed] [Google Scholar]
  • 159.Hergovich A. Field dependence, suggestibility and belief in paranormal phenomena. Personality and Individual Differences. 2003. Feb 1;34(2):195–209. [Google Scholar]
  • 160.Hergovich A, Hörndler H. Gestaltwahrnehmungstest: e. computerbasiertes Verfahren zur Messung d. Feldartikulation; Manual. Swets Test Services; 1994. [Google Scholar]
  • 161.Wilson K, French CC. The relationship between susceptibility to false memories, dissociativity, and paranormal belief and experience. Personality and Individual Differences. 2006. Dec 1;41(8):1493–502. [Google Scholar]
  • 162.Dudley RT. Effect of restriction of working memory on reported paranormal belief. Psychological reports. 1999. Feb;84(1):313–6. doi: 10.2466/pr0.1999.84.1.313 [DOI] [PubMed] [Google Scholar]
  • 163.Lindeman M, Riekki T, Hood BM. Is weaker inhibition associated with supernatural beliefs?. Journal of Cognition and Culture. 2011. Jan 1;11(1–2):231–9. [Google Scholar]
  • 164.Stroop JR. Studies of interference in serial verbal reactions. Journal of Experimental Psychology. 1935. Dec;18(6):643–62. [Google Scholar]
  • 165.Wain O, Spinella M. Executive functions in morality, religion, and paranormal beliefs. International Journal of Neuroscience. 2007. Jan 1;117(1):135–46. [DOI] [PubMed] [Google Scholar]
  • 166.Pizzagalli D, Lehmann D, Brugger P. Lateralized direct and indirect semantic priming effects in subjects with paranormal experiences and beliefs. Psychopathology. 2001;34(2):75–80. doi: 10.1159/000049284 [DOI] [PubMed] [Google Scholar]
  • 167.Palmer J, Mohr C, Krummenacher P, Brugger P. Implicit learning of sequential bias in a guessing task: Failure to demonstrate effects of dopamine administration and paranormal belief. Consciousness and cognition. 2007. Jun 1;16(2):498–506. doi: 10.1016/j.concog.2006.12.003 [DOI] [PubMed] [Google Scholar]
  • 168.Irwin HJ, Green MJ. Schizotypal processes and belief in the paranormal: A multidimensional study. European Journal of Parapsychology. 1999;14:1–5. [Google Scholar]
  • 169.Hedge C, Powell G, Sumner P. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior research methods. 2018. Jun;50(3):1166–86. doi: 10.3758/s13428-017-0935-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Song F, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, Sutton AJ, et al. Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC medical research methodology. 2009. Dec;9(1):1–4. doi: 10.1186/1471-2288-9-79 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 171.Hergovich A, Schott R, Burger C. Biased evaluation of abstracts depending on topic and conclusion: Further evidence of a confirmation bias within scientific psychology. Current Psychology. 2010. Sep;29(3):188–209. [Google Scholar]
  • 172.Laws KR. Psychology, replication & beyond. BMC psychology. 2016. Dec;4(1):1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 173.Bösch H, Steinkamp F, Boller E. Examining psychokinesis: The interaction of human intention with random number generators—A meta-analysis. Psychological bulletin. 2006. Jul;132(4):497–523. doi: 10.1037/0033-2909.132.4.497 [DOI] [PubMed] [Google Scholar]
  • 174.Radin D, Nelson R, Dobyns Y, Houtkooper J. Reexamining psychokinesis: comment on Bösch, Steinkamp, and Boller (2006). Psychological bulletin. 2006. Aug; 132(4):529–32. doi: 10.1037/0033-2909.132.4.529 [DOI] [PubMed] [Google Scholar]
  • 175.Irwin HJ, Marks AD, Geiser C. Belief in the Paranormal: A State, or a Trait?. Journal of Parapsychology. 2018. Mar 1;82(1):24–40. [Google Scholar]
  • 176.Diamond A. Executive functions. Annual review of psychology. 2013. Jan 3;64:135–68. doi: 10.1146/annurev-psych-113011-143750 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Duncan J. EPS Mid-Career Award 2004: brain mechanisms of attention. The Quarterly Journal of Experimental Psychology. 2006. Jan 1;59(1):2–7. doi: 10.1080/17470210500260674 [DOI] [PubMed] [Google Scholar]
  • 178.Duncan J, Owen AM. Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in neurosciences. 2000. Oct 1;23(10):475–83. doi: 10.1016/s0166-2236(00)01633-7 [DOI] [PubMed] [Google Scholar]
  • 179.Nyberg L, Marklund P, Persson J, Cabeza R, Forkstam C, Petersson KM, et al. Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia. 2003. Jan 1;41(3):371–7. doi: 10.1016/s0028-3932(02)00168-9 [DOI] [PubMed] [Google Scholar]
  • 180.Duncan J. The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends in cognitive sciences. 2010. Apr 1;14(4):172–9. doi: 10.1016/j.tics.2010.01.004 [DOI] [PubMed] [Google Scholar]
  • 181.Greenland S, Robins J. Invited commentary: ecologic studies—biases, misconceptions, and counterexamples. American journal of epidemiology. 1994. Apr 15;139(8):747–60. doi: 10.1093/oxfordjournals.aje.a117069 [DOI] [PubMed] [Google Scholar]
  • 182.Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. Jama. 1999. Sep 15;282(11):1054–60. doi: 10.1001/jama.282.11.1054 [DOI] [PubMed] [Google Scholar]
  • 183.Greenland S, O’Rourke K. On the bias produced by quality scores in meta‐analysis, and a hierarchical view of proposed solutions. Biostatistics. 2001. Dec 1;2(4):463–71. doi: 10.1093/biostatistics/2.4.463 [DOI] [PubMed] [Google Scholar]
  • 184.Saher M, Lindeman M. Alternative medicine: A psychological perspective. Personality and individual differences. 2005. Oct 1;39(6):1169–78. [Google Scholar]
  • 185.Lindeman M, Aarnio K. Paranormal beliefs: Their dimensionality and correlates. European Journal of Personality. 2006. Nov;20(7):585–602. [Google Scholar]
  • 186.Ferguson HJ, Brunsdon VE, Bradford EE. The developmental trajectories of executive function from adolescence to old age. Scientific reports. 2021. Jan 14;11(1):1–7. doi: 10.1038/s41598-020-79139-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 187.Emmons CF, Sobal J. Paranormal beliefs: Testing the marginality hypothesis. Sociological Focus. 1981. Jan 1;14(1):49–56. [Google Scholar]
  • 188.Murman DL. The impact of age on cognition. In Seminars in Hearing 2015. Aug (Vol. 36, No. 03, pp. 111–121). Thieme Medical Publishers. doi: 10.1055/s-0035-1555115 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 189.Salthouse TA, Atkinson TM, Berish DE. Executive functioning as a potential mediator of age-related cognitive decline in normal adults. Journal of experimental psychology: General. 2003. Dec;132(4):566. doi: 10.1037/0096-3445.132.4.566 [DOI] [PubMed] [Google Scholar]
  • 190.Kennedy KM, Raz N. Aging white matter and cognition: differential effects of regional variations in diffusion properties on memory, executive functions, and speed. Neuropsychologia. 2009. Feb 1;47(3):916–27. doi: 10.1016/j.neuropsychologia.2009.01.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 191.Tisserand DJ, Jolles J. On the involvement of prefrontal networks in cognitive ageing. Cortex. 2003. Jan 1;39(4–5):1107–28. doi: 10.1016/s0010-9452(08)70880-3 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

José C Perales

13 Dec 2021

PONE-D-21-32750
Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research
PLOS ONE

Dear Dr. Dean,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands.

The reviewers have done a very careful job and provide quite a large number of recommendations regarding potential ways in which the manuscript could be improved. Most revisions seem doable, although they might require the paper to be substantially rewritten.

In view of the detailed reports (attached below), I will not reiterate all their points. The main suggestions, however, concern two major themes. First, the reviewers ask for a better justification of the categories used to classify studies. The reviewers (and I) understand that there will always be a certain degree of arbitrariness in classifying the studies identified, but the commonality between studies in the same category is not always obvious, and some studies seem classifiable in a different category.

And second, all reviewers have found the results quite difficult to follow, and not always sufficiently informative. Section-wise interim conclusions are probably necessary for the reader to get a clearer picture of the results in each area of research.

On the side of strengths, the reviewers have evaluated some aspects of the methodology very positively, including protocol preregistration, strict adherence to PRISMA guidelines, and the careful assessment of evidence quality in the reviewed studies. Please address, however, the reviewers' concerns regarding the decision to exclude studies with adolescents, and the total-score assessment of study quality.

Please submit your revised manuscript by Jan 27 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

José C. Perales

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf  and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

4. We note that this manuscript is a systematic review or meta-analysis; our author guidelines therefore require that you use PRISMA guidance to help improve reporting quality of this type of study. Please upload copies of the completed PRISMA checklist as Supporting Information with a file name “PRISMA checklist”.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

Reviewer #3: N/A

Reviewer #4: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

Reviewer #4: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors present a review article on the association between paranormal beliefs and divergences in cognitive function. The topic has been gaining relevance in the research community over the last decades and is of interest to a wide audience. Although I agree with the authors that a meta-analysis would have been more valuable, I reckon that the heterogeneity in the studies published so far hinders that approach. In any case, I believe that a systematic review paper like the one presented will be of use to many researchers interested in this field in the following years.

Nevertheless, I have some concerns I believe merit clarification before recommending publication:

Line 87: I think there is a parenthesis missing (or one too many)

Line 95: I guess that this is a matter of personal preference (so you can ignore it), but I thought the sentences on line 95 to 100 (“While factors…”) were not very relevant for the study (which is already quite long and includes many references) so they could be deleted.

Line 101: While making the case for the study of cognitive function and paranormal beliefs, the authors comment on studies of “other” kinds of beliefs, such as religious beliefs and conspiracist beliefs. Regarding the former, as far as I know, religious beliefs are, at least sometimes, considered a type of paranormal belief (see the Traditional Religious Beliefs dimension in the widely used questionnaire by Tobacyk). I think this should be clarified in the text. As for conspiracist beliefs, the authors could make use of the concept of “epistemically unwarranted beliefs” (see Lobato et al., 2014) to establish links between these types of beliefs, and maybe include some references to pseudoscientific beliefs too.

Line 114: Did Irwin’s review only include null findings?

Line 133: The aims of the larger study should be (at least briefly) explained.

Line 143: Were any unpublished works or theses included in the final list of studies? If so, this should be clarified through the description of the process and, at least, when discussing the final selection.

(around) Line 177: The “sought for retrieval” step appears in Figure 1, but I think it is not explained in the text.

Line 207: I do not think illusion of control and causal illusion should be categorized as perceptual biases. As far as I know, they are usually characterized as cognitive biases (see Matute, Yarritu and Vadillo, 2011), and they are very different to the other studies included in this section (visual noise studies).

Line 212: Here, I missed inclusion of Torres et al. (2020) (it appeared in a Scopus search using “paranormal belief” AND cogni*)

Lines 221 and 227: In which units are these quantities expressed, and how do they relate to the percentages in line 217?

Line 205: How were paranormal beliefs measured in each study?

Lines 287 and 295: Aren’t repetition avoidance tasks a measure of probabilistic reasoning? I think these studies would fit more comfortably in the next section.

Line 370: associations between paranormal beliefs and what?

Line 372: Here, I missed Barberia et al. (2018) (also appeared in a Scopus search).

Line 378: reference 125 is about an association between pseudoscientific beliefs and paranormal beliefs. I am not sure this should be included as a (cognitive) measure of critical thinking given the relation between the two types of beliefs. If the authors decide to keep it, then they should also consider other studies relating different types of unwarranted beliefs with each other (e.g., Lobato et al, 2014; Fasce and Pico, 2019…).

Line 439: In this section I missed Meyersburg et al. (2009) study, but then I couldn’t find it when I tried a search in Scopus so I guess it is ok.

Line 507: When discussing sampling representativeness, I think the sex/gender issue deserves some comment. Were samples composed of both men and women? Were they balanced? At least in those studies recruiting Psychology students, my guess is that more women would have been involved…

Line 528: I would appreciate a table or at least more information on the percentages of usage of each different test (not just Tobacyk and ad hoc questionnaires). It could even be worth providing information separated by each topic. Could differences regarding the association between critical thinking and paranormal beliefs be related to the use of different measures of belief endorsement?

Line 683: I think the authors should include in the discussion some comment on the implications of the fact that most of the studies analyzed are correlational.

Line 755: Regarding the proposal of the fluid-executive theory, the authors should describe ways in which this hypothesis could be tested.

Reviewer #2: The authors present a systematic review (without meta-analysis) of the literature relating paranormal beliefs with performance in different cognitive tasks, with a particular focus in the critical assessment of the quality of the studies conducted so far. Overall, I think that this can be a valuable resource for researchers working on this area and, hopefully, it will also contribute to improving the quality of future research. I do have some concerns, though, that the authors might want to address in the final version of the manuscript.

Perhaps my most important concern is that in the present version of the ms it is quite difficult to follow the results section. As the authors themselves acknowledge, their classification of tasks is somewhat arbitrary because “such classifications are necessarily a simplification and are not intended to be a definitive organisation”. This is undeniably true, but even so, some of the headings collate results from radically different tasks, and at some points I couldn’t help thinking that some paragraphs actually belonged in a different section. Or, perhaps alternatively, the authors might want to provide some explanation for the logic behind including several tasks under a common heading. This is particularly problematic in the section now titled “Cognitive and perceptual biases”, which includes a wide range of phenomena from illusion of control (measured in learning tasks comprising hundreds of trials) to perception of faces and other stimuli. What’s the common feature underlying these different tasks? Is it the case that all (or most) of them are somehow related to the (illusory) perception of patterns where there is just noise? If that’s the case, I think the authors need to spell this logic out more clearly, possibly change the name of the section, and perhaps move some of the tasks that do not fit well in this category to other sections. They might even want to consider dividing this section into two or more less heterogeneous sections.

I also found it slightly odd that the jump to conclusions task is included in this section, when it is actually more similar to some of the probabilistic reasoning tasks included in the following section.

Also, in the present version it is easy to get lost in the enumeration of Results, without getting a glimpse at the whole picture until the Discussion section. If the results section were briefer this would be ok, but given the length of the paper I think many readers would find it useful to be reminded occasionally of the interim conclusions that can be reached with the information provided in each subheading. In other words, I miss 1-2 concluding sentences providing an overview of the results found for each category of tasks.

The assessment of the quality of the studies plays a very important role in the manuscript and it is indeed a great contribution. But while reading it I had some concerns about the reduction of this information to a single “quality score” for each study. Although this is certainly common in meta-analytic literature, this approach has also been criticized, and rightly so in my opinion (e.g., Jüni, P., Witschi, A., Bloch, R., & Egger, M. (1999). The hazards of scoring the quality of clinical trials for meta-analysis. JAMA, 282, 1054–1060.) I would not ask the authors to change their approach, but I think it would be worth mentioning, even if briefly, that reducing the responses to these quality scales to a single score can be misleading and should be taken with caution.

In the introduction, the authors mention that the two most common tests to measure paranormal beliefs have good psychometric properties. But this is just one side of the story. These studies are trying to relate paranormal beliefs with performance in cognitive tasks that might not have such good psychometric properties. See, e.g., Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50, 1166–1186. https://doi.org/10.3758/s13428-017-0935-1. This is important, because the observed correlation between any two measures is attenuated downwards if either of them (not just the paranormal belief scale) is unreliable. If the reliability of these tasks does not improve in future research (or sample sizes are not adjusted to take it into account), this will necessarily result in high heterogeneity (i.e., much more variance from one study to another) and small average effect sizes.
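[Editorial note: the attenuation the reviewer describes follows Spearman's classic formula, r_obs = r_true × √(rel_x × rel_y). A minimal sketch of the arithmetic; the reliability values below are hypothetical, chosen only for illustration:]

```python
import math

def attenuated_r(true_r, rel_x, rel_y):
    """Observed correlation under Spearman's attenuation formula:
    r_obs = r_true * sqrt(rel_x * rel_y)."""
    return true_r * math.sqrt(rel_x * rel_y)

# A true correlation of .30 measured with a reliable belief scale
# (reliability .90) but an unreliable cognitive task (reliability .40)
# is observed, on average, as only .18.
print(round(attenuated_r(0.30, 0.90, 0.40), 2))  # 0.18
```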

Regarding sample sizes, I was a bit puzzled to read that “overpower” can be a problem in this area. I am afraid I strongly disagree with this point of view. In my opinion, there cannot be such a thing as an excess of power. The authors justify this by saying that “… large studies might also be over-powered and thus, detecting very small and possibly trivial effects”. This confounds hypothesis testing (whether an effect is significantly different from zero) with parameter estimation (what the exact size of an effect is). The fact that many researchers take a significant result for a “relevant” result is the consequence of using statistical tests mindlessly, and it would be an error to encourage researchers to use smaller-than-possible sample sizes as a solution. Regardless of whether you want to test the null hypothesis or know the exact size of an effect, a large sample size will always be helpful because it will reduce the uncertainty of your inferences. What researchers need to be reminded of is that not everything that is statistically significant is important.
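[Editorial note: the reviewer's point that larger samples reduce estimation uncertainty can be made concrete with the approximate confidence interval for a correlation via the Fisher z-transform; the correlation r = .10 and the sample sizes below are hypothetical, for illustration only:]

```python
import math

def r_ci(r, n, z=1.96):
    """Approximate 95% CI for a Pearson correlation via the Fisher z-transform."""
    fz = math.atanh(r)                 # transform r to the z scale
    se = 1 / math.sqrt(n - 3)          # standard error on the z scale
    return math.tanh(fz - z * se), math.tanh(fz + z * se)

# The interval narrows as n grows: a larger sample never hurts estimation,
# even when the effect itself is small.
for n in (30, 300, 3000):
    lo, hi = r_ci(0.10, n)
    print(f"n={n}: [{lo:.2f}, {hi:.2f}]")
```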

Minor comments

- If the goal is to represent a trend over time, I suspect that most readers would find Figure 5 easier to read if years are plotted on the x-axis.

- In the flow chart, I couldn’t understand how from 475 records screened you take 5 reports for retrieval, but then below you have again 84 reports originating from these 5. Something seems to be wrong or misleading in the flow of records.

Reviewer #3: This paper provides a comprehensive and thorough review of research into the relationship between paranormal beliefs and cognitive functioning. As the authors note, considerable time has passed since the last review of paranormal beliefs and cognition, so this paper makes a strong, important, and timely (if not long overdue) contribution to the literature. I also commend the authors for preregistering their PRISMA guidelines and for the level of detail and clarity they provide about how the systematic review was conducted. Overall, I think this paper will make a great contribution to the literature. However, there are some areas where I believe the paper could be improved which I have highlighted in detail below.

Major points:

1) Regarding the paper structure, I think it might be better to more clearly separate the study quality assessment results and discussion from the paranormal belief and cognition findings and theory results and discussion. The Introduction focuses on the key points, is clear and easy to follow, and provides an appropriate (broad) set up for the focus of the paper. Similarly, the Method section is clear, concise, and enjoyable to read. However, from the Results onwards the paper can be quite difficult to follow in places. For example, it goes from the Method to an outline of findings in the literature, then to sections on study quality assessment, then back to a discussion of cognitive deficits/differences, then onto open science (I think this section should be moved to be alongside the sampling issues and non-respondents), then back onto a summary of research findings and a proposal for a new theory etc.

This review paper is doing two things. Firstly, extracting and assessing the quality of the existing literature on paranormal beliefs and cognitive function. Secondly, it is also outlining and synthesising the findings from that literature (plus proposing a new theory/hypothesis for testing). These are two separate and quite distinct focuses. Therefore, I think it would be better to more clearly delineate them and instead present all of one aspect (quality assessment and relevant discussion) followed by everything relevant to the other (outline and synthesis of findings).

2) Related to the above point, the results generally provide a clear and comprehensive summary of the various findings that have been catalogued within each subsection. However, I think each subsection would benefit from an overall summary or synthesis that brings it all together. If you make the changes to the paper structure that I have recommended above, this may no longer be necessary because these results will be more closely followed by a relevant discussion section (but see how it looks and consider it). In comparison, the results for the quality assessment are accompanied by relevant discussion within the actual results section.

3) I think the proposed fluid-executive theory is underdeveloped and would benefit from some further explanation and elaboration. You explain how it would relate to probabilistic reasoning but don’t outline how it would contribute to or explain the other findings (e.g., cognitive and perceptual biases).

Additionally, I think you need to do more work to justify why this specific aspect should be focused on, rather than alternative explanations. For example, others might argue that analytical thinking (Pennycook et al., 2015), or a “rationality quotient” (Stanovich, 2016; Stanovich et al., 2016; Weller, 2017; although see Ritchie, 2017), could also be proposed as underlying (or overarching) theories that explain the various associations between paranormal belief and cognitive functioning. I am happy to be convinced that the proposed theory is the best/most plausible candidate, I just think it needs some further fleshing out and additional evidence to support it.

Minor points:

1) I understand the desire to not include studies on children, given the potential cognitive differences. However, from examining Table S1, it seems that for all the excluded studies the vast majority of participants included in the studies were adults, and they just happened to also include some teenagers in the study. I think it would be justifiable to exclude studies that had solely focused on children or teenage samples, but given the already wide variability in cognitive function between 18- and 70-year-olds, it seems unnecessary to exclude these studies solely because they also include some participants in their mid-late teens. I know this is a deviation from your preregistration so feel free to push back or disagree, but it wouldn’t change your conclusions and I think it’s okay to make some deviations if they are well justified.

2) Is the discussion of test theories and differential item functioning in the Introduction necessary? It seems like an unnecessary distraction from the main focus of the paper, so I’d just leave it at a sentence or two explaining that there is debate about why these differences are found.

3) I think that Figures 2-4 for the AXIS data would be greatly improved if you also included No and Unsure (you could keep the current format but have the bars contain different colours for each response category). This is particularly important because when initially looking at the figures it is not clear that there was an “unsure” category (e.g., it looks like half the studies didn’t have ethical approval, when it likely just wasn’t explicitly reported).

4) The section on open science focuses solely on pre-registration but there are many other aspects of open science such as publicly sharing data, analysis scripts, materials etc. I think you should either broaden this section to cover those additional open science aspects or, if you want to avoid lengthening the paper, then you could combine the pre-registration section with the sample size justification section, presenting it as a possible solution to address these problems.

5) The data are not currently accessible via the OSF link (https://osf.io/7bthg/). Please update the OSF page to make it public or provide a reviewer only link if you don’t want to make the data fully open to the public yet. Don’t worry, I’ve done the same thing before and this happens with half or more of the OSF links I’ve seen when reviewing papers.

References mentioned in the review that are not already in the paper:

Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). Everyday consequences of analytic thinking. Current Directions in Psychological Science, 24(6), 425–432. https://doi.org/10.1177/0963721415604610

Ritchie, S. (2017). Review of: The rationality quotient: Toward a test of rational thinking (K. E. Stanovich, R. F. West, & M. E. Toplak). Intelligence, 61, 46. https://doi.org/10.1016/j.intell.2017.01.001

Stanovich, K. E. (2016). The comprehensive assessment of rational thinking. Educational Psychologist, 51(1), 23–34. https://doi.org/10.1080/00461520.2015.1125787

Stanovich, K. E., West, R. F., & Toplak, M. E. (2016). The Rationality Quotient: Toward a Test of Rational Thinking. The MIT Press. https://doi.org/10.7551/mitpress/10319.001.0001

Weller, J. (2017). Review of: The rationality quotient toward a test of rational thinking, by Keith E. Stanovich, Richard F. West, and Maggie E. Toplak. Thinking & Reasoning, 23(4), 497–502. https://doi.org/10.1080/13546783.2017.1346521

Reviewer #4: Overall Evaluation:

The paper presents a review and summary of the past 40 years of research on paranormal belief and cognitive functioning. As noted by the authors, there has not been a systematic review of this relationship since Irwin’s (1993) work almost 30 years ago. I wholeheartedly agree with the authors that such a systematic review is needed, and that it would add significantly to our overall understanding of the current state of the field. Unfortunately, though, I think there are some key issues with the current attempt that need to be addressed to turn it into a beneficial contribution to the area.

Comments:

1. Given the range of tasks and variables in this particular area, I have no doubt it was difficult to synthesize the information in a straightforward and simple manner. However, even though I work in this area, I found it hard to track through the main sections. Specifically, each section was a listing of how one study showed X, two studies showed Y, et cetera, and by the end of each section it was not clear what specifically the reader should take away. At a minimum, using something like clear and specific tables to help organize the material would help immensely, especially in terms of trying to track through what the various studies do or do not show. There are the tables in the supplementary material, and admittedly, even though they are broken up by section and it is a bit tricky to see “overall” outcomes, I found them easier to follow in terms of thinking across the studies.

Relatedly, in several spots the writing/presentation was dense, which may have added to the experience of not knowing what the “take home” message was for each section. For example, proper paragraphs should rarely run over a page, but more than one did, and one paragraph actually went for almost a full 2 pages (pgs 11-13). In general, editing for direct language, paragraph length, et cetera, would help improve clarity of the information being presented.

2. Again, I understand it would be difficult to categorize the experiments, but the current way of doing it seems to actually work against providing a systematic review. For example, there were fewer studies than I would have expected in the thinking styles section, just because this has been a particularly popular topic to explore in terms of paranormal beliefs. I could see, though, how some of that work would have ended up in other sections given the classification criteria; however, it then feels like we are not getting the full picture. Again, I think this is where tables may be particularly useful; for example, rather than binning studies under just one section, it would be much more useful to have tables that include all of the categories. Thus, we would be able to see what each study contributes across the categories (when relevant), rather than to just a single category. I recognize the authors may have attempted this approach and for some reason it was not viable, but based on systematic reviews in other areas that have used this set-up it would seem to be the more useful approach. This type of set-up would also help more clearly and succinctly demonstrate what the reader should take away from the area.

3. The limitations discussion seems like a bit of a tacked-on section rather than a real consideration of the current work. For example, as already mentioned, one potential issue is how the studies had to be categorized, which means they can only contribute to one section even though they may potentially also be able to contribute to at least one other section.

Further, and this may be an unfair criticism given the complexity of the area, but I had fully expected at least some sort of meta-analysis of the studies. Again, this may be because the listing out of the studies across the sections did not land well in terms of what to concretely take away. However, given the current techniques available for meta-analyses it feels like that is what we would gain the most from in terms of understanding the current state of the relationships between paranormal belief and cognitive functioning.

4. There seem to be some sweeping generalizations made that are not necessarily an accurate representation of the area. For example, there’s a difference between a cognitive deficits hypothesis suggesting “paranormal believers are illogical, irrational, and uncritical,” and what the researchers of each study argued for as the hypothesis/explanation for their work. Sure, some likely would subscribe to this particular spin on cognitive deficits, but looking for “more or less” of a skill/ability does not necessarily mean researchers in this area would agree that believers are “illogical” or “irrational”.

5. A minor point, but was any effort put into searching for articles using the term “anomalistic”? The term “paranormal” is still the most common terminology, but given Chris French and colleagues’ focus on using the broader term of anomalistic belief (and subsequent work from others that has followed suit), it is not clear whether just using “paranormal” would have picked up all of the relevant studies.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: Yes: Toby Prike

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 4;17(5):e0267360. doi: 10.1371/journal.pone.0267360.r002

Author response to Decision Letter 0


31 Jan 2022

We would like to thank the reviewers for their careful review and the insightful and detailed comments they provided.

We have addressed and responded to the reviewers' comments, details of which can be found in the 12-page Response to Reviewers document.

We have addressed the main points raised, particularly the clarity of the results and the classification of studies, by providing new sections within the manuscript (e.g., summaries following each subsection of the results) as well as new supplementary materials (e.g., S9 Table).

We would like to thank the editor and reviewers again for the valuable comments, which we feel have greatly improved the manuscript, and for the opportunity to revise and resubmit the work.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

José C Perales

8 Mar 2022

PONE-D-21-32750R1Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of researchPLOS ONE

Dear Dr. Dean,

Thank you for submitting your manuscript to PLOS ONE. The revised paper has been assessed by the same four reviewers from the previous round. All of them recommend minor revisions, but their comments do not fully coincide, so some amount of work is still required. Still, all suggested changes are modest and doable, so a further review round with the four reviewers might not be necessary if all concerns are addressed.

Please submit your revised manuscript by Apr 22 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

José C. Perales

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

Reviewer #3: (No Response)

Reviewer #4: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

Reviewer #3: Yes

Reviewer #4: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I am generally satisfied with the responses offered by the authors to my previous comments. Now I have some minor concerns, mostly regarding the PRISMA protocols, which I describe in the following:

PRISMA for abstracts:

- The Methods section should specify the inclusion and exclusion criteria for the review, and the dates when each database was last searched. It should also specify the methods used to assess risk of bias in the included studies.

- The Results section should indicate the number of included studies and participants for each relevant outcome mentioned (e.g., association between paranormal belief and intuitive thinking bias).

- The primary source of funding for the review should be specified.

PRISMA checklist (manuscript)

- Item 5: the Methods section should specify how studies were grouped for the synthesis

- Item 10: the Methods section should specify whether all results that were compatible with each outcome domain in each study were sought (e.g. for all measures, time points, analyses), and if not, the methods used to decide which results to collect.

- Item 11/12/14: If I understand it correctly, description of methods for assessment of risk of bias should be presented in the Methods section. Now they are described in the Results section.

- Item 13/15: I could not identify descriptions corresponding to these items in the Methods section.

- Item 27: Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review.

-----

I like the proposal of the fluid-executive model, but isn’t the general idea that “it is possible to view the association between the many cognitive deficit-paranormal belief associations as the product of a single underlying fluid intelligence-executive component” in conflict with the fact that findings presented in the intelligence section “are highly conflicting, with an almost an equal number of significant versus non-significant findings”?

-----

I think S1 Appendix is not referred to in the manuscript

Line 97: parenthesis missing

Line 420: I think the role of the beads task in Prike et al.’s study in unclear. In addition, this task is not explained until the next paragraph. Maybe changing the order of those two paragraphs (or explaining the task the first time it is mentioned) would help the reader to understand its relevance.

Line 549: revise “statements”

Line 568: revise “be”

Line 692: I think “such” is not appropriate there given we have just started a new section

Reviewer #2: The authors have done an excellent job at addressing my concerns with the previous version of the manuscript. I only have a few minor comments:

- Line 5, there is a parenthesis missing at the end of the line.

- Lines 215-216. Please say explicitly what you mean by large, moderate and small effects.

- Page 12, first paragraph: any study published as a registered report? This would be interesting because RRs do not only limit p-hacking, they also ensure that there is no publication bias (papers are accepted or rejected before results are known).

- Lines 412-414. “Paranormal believers showed a lower perceptual sensitivity compared to sceptics (a bias towards making more ‘yes responses…” Sensitivity and bias are completely different things (e.g., in signal detection theory analysis). Please, clarify whether believers differ in one or the other.

- Line 556 “… conducted by similar research teams”. Similar in what sense?

Reviewer #3: The authors have been very receptive to the comments made in the previous round of reviews and the revised manuscript is much improved. I would like to again highlight that this review of paranormal beliefs and cognition makes a strong and timely contribution to the literature. I have highlighted a few minor points below which I believe can easily be addressed.

Minor points:

1) The sentence on page 13, lines 263-266, discusses differences between studies with student and non-student samples but the analyses reported do not find significant differences. Please make the lack of significant differences clear and adjust/remove the related discussion.

2) When discussing the conjunction fallacy (pages 22 and 25), in addition to Rogers et al., Prike et al. (2017) also found a significant relationship between the conjunction fallacy and paranormal belief. Additionally, the differences between the studies may be due to differences in analysis techniques used. In the Dagnall et al. studies, all the probabilistic reasoning tasks were entered together as predictors, whereas Rogers et al. and Prike et al. looked at the relationship between the conjunction fallacy and paranormal belief without entering/controlling for these other probabilistic reasoning tasks, which may explain the differences in results. This doesn’t need much discussion but may be worth noting or mentioning.

3) The section on page 31, lines 701-705, is unnecessarily repetitive and makes the same point multiple times (generally study quality is good but there are some specific areas for improvement).

4) On page 32, line 731, the text says “Eight” but the parentheses say “(9/71)”.

Reviewer #4: Overall Evaluation:

The authors did a good job addressing the issues raised by all of the reviewers, and overall I do think those revisions make for a much clearer and easier-to-follow narrative, and thus a stronger manuscript. I also appreciated the additional data/info included, such as giving a clear overview of the alternate categories in S9. I do still have a few comments, but given the focus of the paper I do not think any of them are major issues, and I also understand their reasoning for some of the issues they chose not to address with changes in the manuscript (e.g., as much as I would love to see some meta-analyses stats, I do understand the authors’ reasons for choosing not to go that route).

Comments:

1. On pg 13 (lines 263-266) claims are made that aren’t supported by the provided statistics. So either those statistics are incorrect (or I am misunderstanding what is being reported), or the wording needs to be changed. That is, it cannot be said that the undergrad studies tended to have smaller samples and lower quality; however, it could be said that descriptively there looks to be a difference but there is no statistical evidence that there is indeed a difference.

2. I appreciate the expanded limitations section, but still think there are issues with it. First, some of the points are underdeveloped; for example, the discussion about weighting vs. summing on the AXIS needs to be unpacked with even just 1-2 more sentences to be clear what specifically the issue may be (pg 120, lines 863-867). Admittedly I am a bit old-school and still believe in the rule that proper paragraphs are a minimum of 3 sentences, but here I am saying it because the 2 sentences that are there do not follow through on the point, especially for readers who may not be familiar with the specific issue.

Further, I expected the issue of categorization to be included in the limitations, so was quite surprised when I saw it was not there. It is mentioned earlier in the manuscript, which was good, but the alternate categories table (S9) shows that it warrants further discussion in terms of limitations. That is, the table very nicely shows just how many potential categories the majority of studies could fall into. So I want to be clear that I am not trying to argue that the review be redone with the studies being included in all possible categories because I understand the authors’ justification for the way they decided to do the categorization. However, I do think more consideration is needed for how this affects what we can conclude for each section (i.e., given how many “secondary focus” studies had to be excluded from consideration because they were used elsewhere).

3. I think the summary sections are quite useful, and go a long way to helping to track through what each section is trying to convey. However, that (along with other additions) does mean that overall the manuscript is a bit daunting to get through, and I think in general it feels like it has a lot of repetition in some spots. For example, the start of the General Discussion is mainly just repetition (including all the numbers again) of what has already been presented. A lot of that could be deleted or streamlined into main points that move beyond what has already been said in earlier sections. And I would also argue focusing on more direct and active writing would cut out a lot of wordiness and also help the manuscript feel more streamlined and manageable (including cutting down on the large amount of information given in parentheses).

4. There is an issue with the wording of the sentence on pg 88, lines 99-102. The first half is hard to follow due to the two “was” near each other, and I think at least one of those needs to be reworked to help clarify what is being said.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: Yes: Toby Prike

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 4;17(5):e0267360. doi: 10.1371/journal.pone.0267360.r004

Author response to Decision Letter 1


4 Apr 2022

We would like to thank both the editor and reviewers for their detailed comments, and for the opportunity to submit a re-revised version of the manuscript. A point-by-point response to the reviewers' comments can be found in the attached 'Response to Reviewers (2)' file. Following the reviewers' advice, we have made edits throughout the manuscript to improve both its length and clarity, which we feel have greatly benefitted the manuscript. We would like to thank the editor and reviewers again for the time they have committed to reviewing this manuscript.

Attachment

Submitted filename: Response to Reviewers (2).docx

Decision Letter 2

José C Perales

7 Apr 2022

Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research

PONE-D-21-32750R2

Dear Dr. Dean,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

José C. Perales

Academic Editor

PLOS ONE


Acceptance letter

José C Perales

12 Apr 2022

PONE-D-21-32750R2

Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research

Dear Dr. Dean:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. José C. Perales

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. PRISMA abstract checklist.

    (DOCX)

    S2 Appendix. PRISMA checklist.

    (DOCX)

    S1 Table. Papers excluded from the review (participants < 18).

    Note: Ts = Thinking Style, CPb = Cognitive and Perceptual Biases, O = Other Cognitive Functions, REI = Rational and Experiential Inventory (Epstein et al., 1996), SJQ = Scenario Judgements Questionnaire (Rogers et al., 2016; Rogers et al., 2011), IPO-RT = Inventory of Personality Organization (Lenzenweger et al., 2001), RT = reality testing, ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), ESP = extrasensory perception, LAD = life after death, PK = psychokinesis, NAP = new age philosophy, TPB = traditional paranormal beliefs, RPBS = Revised Paranormal Belief Scale (Tobacyk, 2004; Lange et al., 2000), CKCS = Core Knowledge Confusions scale (Lindeman & Aarnio, 2007; Lindeman et al., 2008), CRT = Cognitive Reflection Test (Frederick, 2005), BRC = base-rate conflict, BRN = base-rate neutral, SREIT = Self-Report Emotional Intelligence Test (Schutte et al., 1998), WCQ = Ways of Coping Questionnaire (Folkman & Lazarus, 1988), IBI = Irrational Beliefs Inventory (Koopmans et al., 1994).

    (DOCX)

    S2 Table. Studies included in the systematic review concerning perceptual and cognitive biases.

    Note: / = information not reported, P = perceptual biases, C = cognitive biases, bl = believers, sc = sceptics, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ESP = extrasensory perception, BADE = bias against disconfirmatory evidence, BACE = bias against confirmatory evidence, TRB = traditional religious beliefs, ELF = extraordinary lifeforms, PRI = Personal Risk Inventory (Hockey et al., 2000), SFQ = Strange-Face Questionnaire (Caputo, 2015), IDAQ = Individual Differences in Anthropomorphism Quotient (Waytz et al., 2010), DS = Dualism Scale (Stanovich, 1989), EQ = Empathy Quotient (Baron-Cohen & Wheelwright, 2004).

    (DOCX)

    S3 Table. Studies included in the systematic review concerning reasoning.

    Note: / = information not reported, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ESP = extrasensory perception, PK = psychokinesis, LAD = life after death, NAP = new age philosophy, DR = deductive reasoning, RTQ = Reasoning Task Questionnaire (Blackmore & Troscianko, 1985), ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), RPBS = Revised Paranormal Belief Scale (Tobacyk, 2004), MMU-N = Manchester Metropolitan University New (Dagnall et al., 2010).

    (DOCX)

    S4 Table. Studies included in the systematic review concerning intelligence, critical thinking, and academic performance.

    Note: / = information not reported, C = cognitive ability, I = intelligence, m = males, f = females, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ATS = Assessment of Thinking Skills (Wesp & Montgomery, 1998), WGCTA-S = Watson-Glaser Critical Thinking Appraisal Form S (Watson & Glaser, 1994), WGCTA = Watson-Glaser Critical Thinking Appraisal (Watson & Glaser, 2002; Watson & Glaser, 1980; Watson & Glaser, 1964), RPM = Raven’s Progressive Matrices (Raven et al., 2000), RPM Rasch Model = Raven’s Progressive Matrices Rasch Model (Rasch, 1960), MHVT = Mill Hill Vocabulary Test (Raven et al., 1998), CCTT = Cornell Critical Thinking Test (Ennis & Millman, 1985), WMT = Wiener Matrizen Test (Formann & Piswanger, 1979), APM = Advanced Progressive Matrices (Raven, 1976), WAIS-IS = Wechsler Adult Intelligence Scale Information Subtest (Wechsler, 1955), GPA = Grade Point Average.

    (DOCX)

    S5 Table. Studies included in the systematic review concerning thinking style.

    Note: / = information not reported, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, AOT = Actively Open-Minded Thinking Scale (Stanovich et al., 2016; Stanovich, 1999), CRT = Cognitive Reflection Test (Frederick, 2005), CRT-2 = Cognitive Reflection Test-2 (Thompson & Oppenheimer, 2016), REI = Rational-Experiential Inventory (Pacini & Epstein, 1999), WST = WordSum Test (Huang & Hauser, 1998), RI = Rational/Experiential Inventory (Norris & Epstein, 2011), IPSI-SF = Information-Processing Style Inventory Short Form (Naito et al., 2004), FIS = Faith in Intuition Scale (Pacini & Epstein 1999), NFC = Need for Cognition scale (Cacioppo et al., 1984), AET = Argument Evaluation Test (Stanovich & West, 1997), 10-Item REI = 10-Item Rational-Experiential Inventory (Epstein et al., 1996), GWT = Gestaltwahrnehmungs Test (Hergovich & Hörndler, 1994), EFT = Embedded Figures Test (Witkin et al., 1971).

    (DOCX)

    S6 Table. Studies included in the systematic review concerning executive function and memory.

    Note: / = information not reported, M = memory, EF = executive function, bl = believers, sc = sceptics, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, DRM = Deese-Roediger-McDermott (Roediger & McDermott, 1995), CRT = Criterial Recollection Task (Gallo, 2013), IIT = Imagination Inflation Task (Garry et al., 1996), RSPAN = Reading-Span Task (Daneman & Carpenter, 1980), OSPAN = Operation Span Task (Turner & Engle, 1989), SILS = Shipley Institute of Living Scale (Zachary, 1986), AET = Argument Evaluation Task (Stanovich & West, 1997), RAT = Remote Associations Test (Mednick, 1962), WCST = Wisconsin Card Sorting Test (Berg, 1948; Grant & Berg, 1948), EFI = Executive Function Index (Spinella, 2005), ANP = anomalous natural phenomena, TRB = traditional religious beliefs, NCQ = News Coverage Questionnaire (Wilson & French, 2006), ASGS = Australian Sheep-Goat Scale (Thalbourne 1995; Thalbourne & Delin, 1993), AEI = Anomalous Experiences Inventory (Kumar et al., 1994).

    (DOCX)

    S7 Table. Studies included in the review concerning other cognitive functions.

    Note: / = information not reported, bl = believers, sc = sceptics, f = females, m = males, ISL = implicit sequence learning, ISP = implicit semantic priming, VF = visual field, LVF = left visual field, RVF = right visual field, CME = central monitoring efficiency, RE = reasoning errors, CC = cognitive complexity, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, SPQ-B = Schizotypal Personality Questionnaire Brief (Raine & Benishay, 1995), RCRG = Role Construct Repertory Grid (Kelly, 1955).

    (DOCX)

    S8 Table. Measures of paranormal beliefs used in the 71 studies included in the review.

    Note: † = papers that provided reliability statistics for their novel scales, ‡ = used a translated version of the original scale, * = Musch & Ehrenberg (2002) developed a novel scale that was later named the BPS and was used in two subsequent studies. RPBS = Revised Paranormal Belief Scale (Tobacyk 1988; 2004), ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), PBS = Paranormal Belief Scale (Tobacyk & Milford, 1982), Rasch RPBS = Rasch devised Revised Paranormal Belief Scale (Lange et al., 2000), BPS-O = Belief in the Paranormal Scale (Original; Jones et al., 1977), BPS = Belief in the Paranormal Scale (Musch & Ehrenberg, 2002), MMU-N = Manchester Metropolitan University New (see Dagnall et al., 2010), MMU-PS = Manchester Metropolitan University Paranormal Scale (see Dagnall et al., 2010), SSUB = Survey of Scientifically Unsubstantiated Beliefs (Irwin & Marks, 2013), OS = Occultism Scale (Böttinger, 1976), PS = Paranormal Scale (Orenstein, 2002), AEI = Anomalous Experiences Inventory (Gallagher et al., 1994; includes a ‘belief’ subscale).

    (DOCX)

    S9 Table. Alternate categorisations for studies included in the review.

    Note: = original category, ✓ = alternate category.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers (2).docx

    Data Availability Statement

    All data files relating to the quality assessment are available from the OSF repository (https://osf.io/7bthg/). Data relating to the 71 reviewed studies can be found within the paper's Supporting Information files.

