Highlights
- Many papers cited "Ten simple rules for neuroimaging meta-analysis".
- 7.8% (63/804) of the quotations were erroneous.
- Quotation errors involved 13.3% (51/384) of the citing papers.
- Researchers most frequently quoted the rule about the power of the meta-analysis.
- The second most quoted rule concerned the consistency of the search coverage and reference space.
- The third most quoted rule was about the statistical threshold.
Keywords: Meta-analysis, Guidelines, Neuroimaging, fMRI, Quotation accuracy, Quotation error, Citation analysis
Abstract
The collection of recommendations and guidelines for conducting and reporting neuroimaging meta-analyses, published by Müller et al. as "Ten simple rules for neuroimaging meta-analysis", has been available for a few years. Here, the papers citing this reference were examined to evaluate the rationale of the quotations and what quotation errors existed. In May 2023, an online query via Scopus identified 386 papers citing this reference, 2 of which were inaccessible. The remaining 384 papers were checked to identify the total number of quotations of the reference, the exact quotations, which of the ten recommendations/rules each quotation concerned, and whether any quotation error existed. The reference by Müller et al. was quoted 804 times by the 384 papers, an average of 2.1 quotations per paper. Out of the 804 quotations, the three rules that researchers most frequently referred to were the power of the meta-analysis (Rule #2, 14.1%), the consistency of the search coverage and reference space (Rule #4, 13.8%), and the statistical threshold (Rule #8, 10.2%). Overall, 63 quotations from 51 papers contained errors. In other words, 7.8% (63/804) of the quotations contained errors, and these involved 13.3% (51/384) of the papers. The commonest quotation errors were that the reference failed to substantiate the assertion, that the reference was unrelated to the assertion, and that the assertion oversimplified the original notion. Notable examples included quoting Müller et al. to substantiate the assertion that at least 10 datasets are needed for adequate power in an ES-SDM meta-analysis (no such recommendation exists), and misquoting the primary cluster-forming threshold as p < 0.05 or p < 0.005 (it should be p < 0.001). The neuroscience community should be cautious and double-check the accuracy of assertions, even when a quotation is given.
1. Introduction
The neuroscience research community has been questioning and debating the reproducibility of research findings due to inadequate statistical power and the flexibility of data analytics (Button et al., 2013, Carp, 2012a, Carp, 2012b, Eklund et al., 2016, Nord et al., 2017, Poldrack and Mumford, 2009, Vul et al., 2009). In fMRI studies, for example, voxels or clusters of voxels might reach statistical significance more easily if the search volume was reduced from the whole brain to a pre-defined region-of-interest (ROI) justified by prior studies. Such ROI or small volume correction (SVC) analyses could be potentially controversial, as researchers often adopted a more liberal threshold for them on the assumption that activations or significant results should be more likely in these predefined, justifiable regions. This assumption might not always be correct, and the choice of the "liberalized" statistical threshold might not be optimized with respect to the expected effect size and the sample size. Conducting replication studies could address this issue by testing the replicability of prior studies with a different sample or experimental setting. Unfortunately, not all neuroscience journals welcomed replication studies, which might deter researchers from conducting or publishing them (Yeung, 2017). As a result, some fields of neuroscience had few replication studies (Yeung, 2019). To circumvent this issue, one feasible way to identify robust results across studies is to aggregate results from individual neuroimaging studies into a coordinate-based meta-analysis (CBMA), predominantly using the approaches of activation likelihood estimation (ALE) (Fox et al., 2014, Fox et al., 1998, Laird et al., 2005, Turkeltaub et al., 2002) or effect-size seed-based d mapping (ES-SDM) (Albajes-Eizagirre et al., 2019, Radua and Mataix-Cols, 2009, Radua et al., 2012).
Reporting guidelines have been developed to standardize the design and reporting of neuroimaging articles. A few years ago, the "Ten simple rules for neuroimaging meta-analysis" was published, offering recommendations and guidelines for conducting and reporting neuroimaging meta-analyses (Müller et al., 2018). It was unclear how this guideline has been quoted, e.g. which of the ten simple rules were referred to more frequently, and how it has been misquoted. Intuitively, one would assume that when a reference is cited after a sentence, the reference is the source of the information contained in that sentence, and that both the reference and the sentence are accurate. However, prior studies have shown that this might not always be true. For instance, the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement was designed as a guideline for the reporting of observational studies, but some original papers cited it as a guideline for designing and conducting studies, and some systematic reviews cited it as a tool to assess the methodological quality of original papers (Da Costa et al., 2011). Another example was a critical commentary on the Newcastle-Ottawa scale that criticized the use of the scale, yet most citing papers quoted it as a reference supporting the use of the scale (Stang et al., 2018).
In this work, papers citing (Müller et al., 2018) were examined to evaluate the rationale of the quotations and what quotation errors existed. Readers may think of a citation as the occasion when a paper (the cited reference) is placed into the reference list of a citing paper, but a cited reference can actually be referred to (or "cited") multiple times within a citing paper. Hence, to avoid confusion, this work uses the term quotation. The results should give readers and the neuroscience community an overall impression of which recommendations were deemed more relevant by researchers, and whether the recommendations were understood appropriately, without distorted meaning.
2. Methods
2.1. Literature search strategy
The Scopus online database was queried on 18 May 2023. First, the paper of (Müller et al., 2018) was identified. The database showed that it was cited by 386 documents. Then, the records of all 386 documents were exported. A manual check confirmed that no duplicates existed among these 386 documents. The author did not have full access to 2 of them, and hence a total of 384 documents were evaluated. A data extraction Excel sheet was prepared to record the following parameters for each of the 384 documents: (i) the total number of quotations of Müller's paper; (ii) the exact quotation; (iii) which of the ten recommendations/rules each quotation concerned; and (iv) any quotation error (Yes/No). Data extraction was performed manually. The locations of the quotations were not recorded. The ten simple rules from (Müller et al., 2018) are listed here for readers' reference (Table 1).
Table 1.
The ten simple rules from (Müller et al., 2018).
| Rule | Context |
|---|---|
| #1 | “Be specific about your research question”. It dealt with the inclusion/exclusion criteria, tasks and subject groups to be selected, etc. |
| #2 | “Consider the power of the meta-analysis”. It dealt with the power, e.g. the recommendation of at least 17–20 experiments for ALE meta-analysis. Important explanatory text: “Of course, this can only be seen as a rough recommendation… in cases where a strong effect is expected, smaller sample sizes may be sufficient.” |
| #3 | “Collect and organize your data”. It dealt with the literature search procedures, data selection procedures, whether the PRISMA guideline was followed, etc. |
| #4 | “Ensure that all included experiments use the same search coverage and identify and adjust differences in reference space”. It dealt with the general recommendation that ROI and SVC analyses should be excluded in most cases, and that all extracted coordinates should be converted into a single standard space. Important explanatory text: “SVC analyses may be potentially included if peaks in the regions liberally thresholded are discarded unless they meet the statistical threshold used in the rest of the brain.” |
| #5 | “Adjust for multiple contrasts”. It dealt with pooling coordinates from multiple contrasts from the same group of subjects, or selecting the most representative contrast, etc. |
| #6 | “Double check your data and report how you did it”. It dealt with double checking the data and its reporting, e.g. having 2 investigators to double check the coordinates, or using automated extraction via Neurosynth or BrainMap, etc. |
| #7 | “Plan the analyses beforehand and consider registering your study protocol”. It dealt with protocol pre-registration, such as to the PROSPERO database. |
| #8 | “Find a balance between sensitivity and susceptibility to false positives”. It dealt with the statistical threshold of the meta-analysis, e.g. for ALE, a voxel-level cluster-forming threshold of uncorrected p < 0.001 and a cluster-level threshold of FWE-corrected p < 0.05 are recommended; and for ES-SDM, an uncorrected threshold of p = 0.005 with a cluster extent of 10 voxels and SDM-Z > 1 is recommended. |
| #9 | “Show diagnostics”. It dealt with post-hoc diagnostics of the analyses, e.g. funnel plots, I2 and meta-regressions, % of contributing experiments, jackknife analyses |
| #10 | “Be transparent in reporting”. It dealt with reporting transparency, e.g. using a flow-chart to illustrate the analytic steps, disclosing the number of papers and experiments clearly, listing the extracted contrasts used, and submitting the resultant maps and coordinates to open repositories such as ANIMA, Neurovault, BrainMap, etc. |
Subsequently, the quotation error classification of Mogull (2017) was used to code the errors. In brief, quotation errors were coded into 5 classes: (1) Müller et al.'s paper failed to substantiate the assertion; (2) Müller et al.'s paper was unrelated to the assertion; (3) Müller et al.'s rules contradicted the assertion; (4) the assertion oversimplified or overgeneralized Müller's rules; and (5) the quotation contained trivial inaccuracies, e.g. in patient numbers or percentages. Classes 1–3 were classified as major errors, whereas classes 4–5 were minor errors.
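For illustration, the extraction sheet and coding scheme described above can be expressed as a minimal sketch. This is a hypothetical reconstruction: the actual extraction was performed manually in Excel, and the field names below are illustrative, not the actual column headers.

```python
from dataclasses import dataclass

# Hypothetical record structure mirroring the extraction sheet described
# above; field names are illustrative, not the actual column headers.
@dataclass
class Quotation:
    paper_id: str                   # identifier of the citing paper
    text: str                       # the exact quoted passage
    rules: tuple[int, ...]          # rules 1-10; 99 = "Checklist", 0 = "Others"
    error_class: int | None = None  # classes 1-5 per Mogull (2017); None = no error

MAJOR_CLASSES = {1, 2, 3}  # failed to substantiate / unrelated / contradicted
MINOR_CLASSES = {4, 5}     # oversimplified / trivial inaccuracies

def summarize(quotations: list[Quotation]) -> None:
    """Compute headline rates of the kind reported in Section 3.1."""
    papers = {q.paper_id for q in quotations}
    errors = [q for q in quotations if q.error_class is not None]
    error_papers = {q.paper_id for q in errors}
    print(f"{len(quotations)} quotations from {len(papers)} papers "
          f"({len(quotations) / len(papers):.1f} per paper)")
    print(f"Erroneous quotations: {100 * len(errors) / len(quotations):.1f}%")
    print(f"Papers with >=1 erroneous quotation: "
          f"{100 * len(error_papers) / len(papers):.1f}%")
```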
3. Results
3.1. General information
The 384 papers collectively quoted (Müller et al., 2018) 804 times, meaning an average of 2.1 quotations per paper. Overall, 63 quotations from 51 papers contained some errors. In other words, 7.8% (63/804) of the quotations contained errors and they involved 13.3% (51/384) of the papers. The coded datasheet is provided as Supplementary File 1.
3.2. Data on the quotation level
Fig. 1 shows the frequency counts of quotations pertaining to the rules made by (Müller et al., 2018). Out of the 804 quotations, the three rules that researchers most frequently referred to were the power of the meta-analysis (Rule #2, 14.1%), the consistency of the search coverage and reference space (Rule #4, 13.8%), and the statistical threshold (Rule #8, 10.2%). Some quotations mentioned that a checklist was provided to readers to indicate how the report conformed to the guidelines of (Müller et al., 2018). Since those quotations did not directly address the ten simple rules, they were coded as "Checklist #99". Besides, 36.8% of the quotations neither addressed any of the ten simple rules nor provided a checklist; these were coded as "Others #0". It should be noted that the summation of all frequency counts reached 805 instead of 804, as one quotation referred to two rules simultaneously:
“Even though meta-analyses provide robust evidence and address significant issues as the effect sizes of findings and publication bias, we considered that the reviewed results are no suitable for that kind of approach because the selected neuroimaging data did not meet two important aspects to perform robust analysis (Müller et al., 2018): (i) same original search coverage across studies (e.g. MRI metrics and functional tasks, brain coverage or statistical correction methods); and (ii) sufficient number of studies/experiments.” Excerpt from (Pérez-García et al., 2022), with the quotation dealing with Rules #2 and #4.
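The tallying logic behind this 805-versus-804 discrepancy can be made explicit with a minimal sketch (hypothetical data; the rule labels follow the coding used in this study):

```python
from collections import Counter

# Each quotation is coded with one or more rule labels (0 = "Others",
# 99 = "Checklist"). Hypothetical example: three quotations, the last
# referring to Rules #2 and #4 simultaneously, as in the excerpt above.
quotations = [(8,), (2,), (2, 4)]

rule_counts = Counter(rule for rules in quotations for rule in rules)
print(len(quotations))            # 3 quotations in total...
print(sum(rule_counts.values()))  # ...but 4 rule-level frequency counts
```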
Fig. 1.
Number of quotations pertaining to the rules made by (Müller et al., 2018).
Fig. 2 shows the frequency counts of each error class among the quotations. Among the 63 quotations with errors, the majority involved major errors (i.e., classes 1–3, 69.8%). Table 2 lists some representative quotations for each error class. For class 1 errors, the commonest error was quoting (Müller et al., 2018) to substantiate the assertion that at least 10 datasets are needed for adequate power in an ES-SDM meta-analysis (n = 5; Müller et al. did not specify the number of experiments required for ES-SDM). Other class 1 errors included: (i) misquoting the primary cluster-forming threshold as p < 0.05 or p < 0.005 (n = 2; it should be p < 0.001); (ii) excluding studies with < 7 or < 10 subjects (n = 2; Müller et al. did not mention such an exclusion criterion based on sample size); (iii) conducting an ALE meta-analysis with 10 or 15 experiments (n = 2; Müller et al. explicitly suggested at least 17–20 as a rough recommendation, not 10 or 15); and (iv) deciding whether to conduct an ALE meta-analysis based on the number of foci collected (n = 2; it should be the number of experiments rather than the number of foci). Class 2 errors involved various quotations unrelated to the assertion; readers are referred to Supplementary File 1. For class 3 errors, two quotations claimed that the recommended statistical threshold for ES-SDM meta-analysis included SDM-Z < 1 or ≤ ± 1 (it should be > 1); and one quotation claimed that a statistical threshold of voxel-level uncorrected p < 0.001 was used to balance sensitivity against susceptibility to false positives with a small number of studies included (it should be a cluster-level inference with FWE correction).
Fig. 2.
Number of quotations with errors.
Table 2.
Representative quotations for each error class.
| Error class | Quotation example | Rationale |
|---|---|---|
| 1 | “The dual threshold CBMA ALE was conducted with cluster-level threshold and family-wise error rate of p < 0.05, 1000 thresholding permutations, and intensity threshold of p < 0.05. The intensity threshold was chosen based on the recommendations for conducting neuroimaging meta-analyses [26].” From (Moring et al., 2022). | (Müller et al., 2018) mentioned that voxel-level cluster forming threshold should be p < 0.001, not < 0.05. |
| 2 | “The meta-analysis was carried out using the SDM software package (Seed-based d Mapping software, version 6.21 for 64-bit Windows PC) [13,14,15].” From (Mavroudis et al., 2022). | The SDM software was not introduced by (Müller et al., 2018). |
| 3 | “Third, statistical significance was determined using the recommended an uncorrected threshold of p = 0.005 with a cluster extent of 10 voxels and SDM-Z < 1 in current voxel-wise meta-analysis to control the false positive rate [12].” From (Jiang et al., 2020). | (Müller et al., 2018) recommended SDM-Z > 1 instead of < 1. |
| 4 | “Studies based on ROIs or SVC must be excluded because a prerequisite for fMRI meta-analyses is that convergence across experiments is tested against a null-hypothesis of random spatial associations across the entire brain, under the assumption that each voxel has the same a priori chance of being activated (Eickhoff et al., 2011, Müller et al., 2018).” From (Lo Presti et al., 2023). | (Müller et al., 2018) mentioned that “SVC analyses may be potentially included if peaks in the regions liberally thresholded are discarded unless they meet the statistical threshold used in the rest of the brain”. |
| 5 | No example | |
3.3. Data on the paper level
Fig. 3 shows the frequency counts of papers with quotations pertaining to the rules made by (Müller et al., 2018). Out of the 384 papers, the three rules that papers most frequently referred to were, as on the quotation level, the power of the meta-analysis (Rule #2, 24.5%), the consistency of the search coverage and reference space (Rule #4, 22.7%), and the statistical threshold (Rule #8, 18.8%).
Fig. 3.
Number of papers with quotations pertaining to the rules made by (Müller et al., 2018).
Among the 51 papers with quotation errors, major errors occurred in 70.6% of them and minor errors in 33.3% (the percentages sum to more than 100% because a paper could contain both; see the sketch after Fig. 4). Fig. 4 shows the frequency counts of each error class by the number of papers. Again, class 2 errors were the most frequent (41.2% of 51).
Fig. 4.
Number of papers with errors.
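The following minimal sketch (hypothetical coding, not the study data) shows why the paper-level major and minor percentages need not sum to 100%:

```python
# Hypothetical error classes coded per paper. Paper "A" contains both a
# major (class 1) and a minor (class 4) error, so it counts toward both
# paper-level tallies.
papers = {"A": [1, 4], "B": [2], "C": [5]}

MAJOR, MINOR = {1, 2, 3}, {4, 5}
n_major = sum(1 for classes in papers.values() if MAJOR & set(classes))
n_minor = sum(1 for classes in papers.values() if MINOR & set(classes))
# 2/3 papers have a major error and 2/3 a minor error: 67% + 67% > 100%.
print(f"major: {n_major / len(papers):.0%}, minor: {n_minor / len(papers):.0%}")
```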
4. Discussion
There is an old saying about "standing on the shoulders of giants". By considering all the empirical evidence available, Müller et al. (2018) published a neuroimaging meta-analysis guideline that summarized the recommendations into "ten simple rules". This guideline has now been published for 5 years, and it would benefit the neuroimaging community to answer this research question: did researchers quote this reference correctly, without misinterpretation or distorted meaning? In this study, it was found that 384 papers made a total of 804 quotations of (Müller et al., 2018). Among these, quotation errors were identified in 63 quotations from 51 papers, equivalent to 13.3% of the analyzed papers and 7.8% of the quotations. On the quotation level, the major and minor error rates were 69.8% and 30.2% respectively. These numbers seemed comparable to other biomedical fields such as radiology (77.8% vs 22.2%), head and neck surgery (65.4% vs 34.6%) and ophthalmology (60.0% vs 40.0%) (see the meta-analytic results in (Mogull, 2017)).
Since the ten simple rules may not always be applicable, hedging or toning down is sometimes needed when citing them. The most frequent reason for quoting (Müller et al., 2018) was the consideration of the power of a meta-analysis. Rule #2 recommended including "at least 17–20 experiments in ALE meta-analyses" to achieve sufficient power to detect small effects and to ensure that results were not driven by single experiments. In particular, it was emphasized that this was only a "rough recommendation", as the number of experiments required depends strongly on the expected effect size. It should be noted that (Müller et al., 2018) did not state that an ALE meta-analysis with fewer than 17–20 experiments would be invalid. Therefore, hedging should be considered for statements such as the following, from a recent ALE meta-analysis:
“The final meta-analysis included 20 experiments (201 foci, 399 participants) from 20 published fMRI studies. This surpasses the threshold number of experiments required to carry out a valid ALE meta-analysis (Eickhoff et al., 2016, Müller et al., 2018)”. From (Asano et al., 2022).
A similar issue was observed with Rule #4 of (Müller et al., 2018). As reported in Table 2, although "SVC analyses should not be included in a meta-analysis" in general, the resulting foci could be included provided that they meet/survive the statistical threshold applied to the whole brain. Hence, stating that all SVC analyses must be excluded (or cannot be included) could be an oversimplification of the rule, such as:
“Studies based on ROIs or SVC must be excluded because a prerequisite for fMRI meta-analyses is that convergence across experiments is tested against a null-hypothesis of random spatial associations across the entire brain, under the assumption that each voxel has the same a priori chance of being activated (Eickhoff et al., 2011, Müller et al., 2018)”. From (Lo Presti et al., 2023).
And:
“According to the recent best-practice guidelines for conducting neuroimaging meta-analyses, a study can only be included in the CBMA when it reports its results based on the whole-brain analysis without small volume corrections (SVC) (Müller et al., 2018).” From (Sheng et al., 2021).
By contrast, Rule #8 dealt with the statistical threshold of the meta-analysis, which was relatively straightforward and left little room for deviation. However, as the results indicated, some studies used a more liberal primary cluster-forming threshold while citing (Müller et al., 2018). Regardless of whether a direct citation was made, prior surveys have indicated that cluster-level FWE correction has been the mainstream choice of statistical threshold for ALE meta-analyses since (Müller et al., 2018) was published (Yeung et al., 2023, Yeung et al., 2019).
One interesting and recurring major error was quoting (Müller et al., 2018) to substantiate the assertion that an ES-SDM meta-analysis requires at least 10 datasets. By examining the other references cited alongside it, it was identified that the following two papers from the ES-SDM developers were frequently co-cited with (Müller et al., 2018): (Carlisi et al., 2017, Radua and Mataix-Cols, 2009). While (Carlisi et al., 2017) indeed cited (Radua and Mataix-Cols, 2009) for this requirement of a minimum of 10 datasets, the latter did not seem to explicitly state such a recommendation.
Of course, researchers might simply follow the recommendations of (Müller et al., 2018) during manuscript writing without making any explicit quotation of it. Therefore, this study could not assess whether the neuroscience literature, particularly the CBMA literature, gave enough credit to (Müller et al., 2018) whenever its guidelines were followed.
5. Conclusion
Based on this work, it was found that the three most frequently referred-to rules, out of the ten simple rules, were the power of the meta-analysis (Rule #2), the consistency of the search coverage and reference space (Rule #4), and the statistical threshold (Rule #8). Overall, 7.8% of the quotations were erroneous, and they involved 13.3% (51/384) of the papers that cited (Müller et al., 2018). The commonest quotation errors were that the reference failed to substantiate the assertion, that it was unrelated to the assertion, and that the assertion oversimplified the original notion. Since the rules were accompanied by explanatory texts on the underlying assumptions and considerations, rather than being completely unequivocal, it is recommended that future meta-analysis studies provide more detail on the methodological considerations behind why certain rules are or are not adhered to, especially with regard to statistical concerns. For example, if (Müller et al., 2018) is cited to justify the decision to exclude particular SVC results, it should be confirmed that those SVC results could not meet the more stringent statistical threshold used in the rest of the brain; otherwise, they should actually be included. It is suboptimal to exclude all SVC results indiscriminately by saying "Müller and co-workers told me to do so as a rule of thumb".
Funding
This work was supported by departmental funds only.
Data and code availability statement
Data used in this study are provided in Supplementary File 1.
No code was used in this study.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Footnotes
Supplementary data to this article can be found online at https://doi.org/10.1016/j.nicl.2023.103496.
References
- Albajes-Eizagirre A., Solanes A., Vieta E., Radua J. Voxel-based meta-analysis via permutation of subject images (PSI): Theory and implementation for SDM. Neuroimage. 2019;186:174–184. doi: 10.1016/j.neuroimage.2018.10.077.
- Asano R., Lo V., Brown S. The neural basis of tonal processing in music: An ALE meta-analysis. Music Sci. 2022;5. doi: 10.1177/20592043221109958.
- Button K.S., Ioannidis J.P., Mokrysz C., Nosek B.A., Flint J., Robinson E.S., Munafò M.R. Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 2013;14:365–376. doi: 10.1038/nrn3475.
- Carlisi C.O., Norman L.J., Lukito S.S., Radua J., Mataix-Cols D., Rubia K. Comparative multimodal meta-analysis of structural and functional brain abnormalities in autism spectrum disorder and obsessive-compulsive disorder. Biol. Psychiatry. 2017;82:83–102. doi: 10.1016/j.biopsych.2016.10.006.
- Carp J. On the plurality of (methodological) worlds: estimating the analytic flexibility of fMRI experiments. Front. Neurosci. 2012a;6:149. doi: 10.3389/fnins.2012.00149.
- Carp J. The secret lives of experiments: methods reporting in the fMRI literature. Neuroimage. 2012b;63:289–300. doi: 10.1016/j.neuroimage.2012.07.004.
- Da Costa B.R., Cevallos M., Altman D.G., Rutjes A.W., Egger M. Uses and misuses of the STROBE statement: bibliographic study. BMJ Open. 2011;1:e000048. doi: 10.1136/bmjopen-2010-000048.
- Eickhoff S.B., Bzdok D., Laird A.R., Roski C., Caspers S., Zilles K., Fox P.T. Co-activation patterns distinguish cortical modules, their connectivity and functional differentiation. Neuroimage. 2011;57:938–949. doi: 10.1016/j.neuroimage.2011.05.021.
- Eickhoff S.B., Nichols T.E., Laird A.R., Hoffstaedter F., Amunts K., Fox P.T., Bzdok D., Eickhoff C.R. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation. Neuroimage. 2016;137:70–85. doi: 10.1016/j.neuroimage.2016.04.072.
- Eklund A., Nichols T.E., Knutsson H. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc. Natl. Acad. Sci. 2016;113:7900–7905. doi: 10.1073/pnas.1602413113.
- Fox P.T., Parsons L.M., Lancaster J.L. Beyond the single study: function/location metanalysis in cognitive neuroimaging. Curr. Opin. Neurobiol. 1998;8:178–187. doi: 10.1016/s0959-4388(98)80138-4.
- Fox P.T., Lancaster J.L., Laird A.R., Eickhoff S.B. Meta-analysis in human neuroimaging: computational modeling of large-scale databases. Annu. Rev. Neurosci. 2014;37:409–434. doi: 10.1146/annurev-neuro-062012-170320.
- Jiang B., He D., Guo Z., Gao Z. Effect-size seed-based d mapping of resting-state fMRI for persistent insomnia disorder. Sleep Breath. 2020;24:653–659. doi: 10.1007/s11325-019-02001-3.
- Laird A.R., Lancaster J.L., Fox P.T. BrainMap. Neuroinformatics. 2005;3:65–77. doi: 10.1385/ni:3:1:065.
- Lo Presti S., Origlia S., Gianelli C., Canessa N. Cognition, body, and mind: A three-in-one coordinate-based fMRI meta-analysis on cognitive, physical, and meditative trainings. Hum. Brain Mapp. 2023;44:3795–3814. doi: 10.1002/hbm.26312.
- Mavroudis I., Chatzikonstantinou S., Ciobica A., Balmus I.-M., Iordache A., Kazis D., Chowdhury R., Luca A.-C. A systematic review and meta-analysis of the grey matter volumetric changes in mild traumatic brain injuries. Appl. Sci. 2022;12:9954.
- Mogull S.A. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate. PLoS One. 2017;12:e0184727. doi: 10.1371/journal.pone.0184727.
- Moring J.C., Husain F.T., Gray J., Franklin C., Peterson A.L., Resick P.A., Garrett A., Esquivel C., Fox P.T. Invariant structural and functional brain regions associated with tinnitus: A meta-analysis. PLoS One. 2022;17:e0276140. doi: 10.1371/journal.pone.0276140.
- Müller V.I., Cieslik E.C., Laird A.R., Fox P.T., Radua J., Mataix-Cols D., Tench C.R., Yarkoni T., Nichols T.E., Turkeltaub P.E., Wager T.D., Eickhoff S.B. Ten simple rules for neuroimaging meta-analysis. Neurosci. Biobehav. Rev. 2018;84:151–161. doi: 10.1016/j.neubiorev.2017.11.012.
- Nord C.L., Valton V., Wood J., Roiser J.P. Power-up: a reanalysis of 'power failure' in neuroscience using mixture modeling. J. Neurosci. 2017;37:8051–8061. doi: 10.1523/JNEUROSCI.3592-16.2017.
- Pérez-García J.M., Suárez-Suárez S., Doallo S., Cadaveira F. Effects of binge drinking during adolescence and emerging adulthood on the brain: A systematic review of neuroimaging studies. Neurosci. Biobehav. Rev. 2022;137:104637. doi: 10.1016/j.neubiorev.2022.104637.
- Poldrack R.A., Mumford J.A. Independence in ROI analysis: where is the voodoo? Soc. Cogn. Affect. Neurosci. 2009;4:208–213. doi: 10.1093/scan/nsp011.
- Radua J., Mataix-Cols D. Voxel-wise meta-analysis of grey matter changes in obsessive–compulsive disorder. Br. J. Psychiatry. 2009;195:393–402. doi: 10.1192/bjp.bp.108.055046.
- Radua J., Mataix-Cols D., Phillips M.L., El-Hage W., Kronhaus D., Cardoner N., Surguladze S. A new meta-analytic method for neuroimaging studies that combines reported peak coordinates and statistical parametric maps. Eur. Psychiatry. 2012;27:605–611. doi: 10.1016/j.eurpsy.2011.04.001.
- Sheng L., Ma H., Dai Z., Yao L., Hu J. Is first episode mania associated with grey matter abnormalities? We are not sure! Bipolar Disord. 2021;23:409–410. doi: 10.1111/bdi.13048.
- Stang A., Jonas S., Poole C. Case study in major quotation errors: a critical commentary on the Newcastle-Ottawa scale. Eur. J. Epidemiol. 2018;33:1025–1031. doi: 10.1007/s10654-018-0443-3.
- Turkeltaub P.E., Eden G.F., Jones K.M., Zeffiro T.A. Meta-analysis of the functional neuroanatomy of single-word reading: method and validation. Neuroimage. 2002;16:765–780. doi: 10.1006/nimg.2002.1131.
- Vul E., Harris C., Winkielman P., Pashler H. Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspect. Psychol. Sci. 2009;4:274–290. doi: 10.1111/j.1745-6924.2009.01125.x.
- Yeung A.W.K. Do neuroscience journals accept replications? A survey of literature. Front. Hum. Neurosci. 2017;11:468. doi: 10.3389/fnhum.2017.00468.
- Yeung A.W.K. Limited replication studies in functional magnetic resonance imaging research on taste and food. Curr. Sci. 2019;117:1345–1347.
- Yeung A.W.K., Wong N.S.M., Lau H., Eickhoff S.B. Human brain responses to gustatory and food stimuli: a meta-evaluation of neuroimaging meta-analyses. Neuroimage. 2019;202:116111. doi: 10.1016/j.neuroimage.2019.116111.
- Yeung A.W.K., Robertson M., Uecker A., Fox P.T., Eickhoff S.B. Trends in the sample size, statistics, and contributions to the BrainMap database of activation likelihood estimation meta-analyses: An empirical study of 10-year data. Hum. Brain Mapp. 2023;44:1876–1887. doi: 10.1002/hbm.26177.