IBRO Neuroscience Reports
. 2021 Dec 2;12:20–24. doi: 10.1016/j.ibneur.2021.11.003

The dissemination of brain imaging guidelines and recommendations

Andy Wai Kan Yeung a, Pradeep Singh a, Simon B Eickhoff b,c
PMCID: PMC8666331  PMID: 34918005

Abstract

Many neuroimaging guidelines and recommendations have been published to guide fellow researchers in conducting and reporting research in a standardized manner. It was largely unknown whether they were cited or read by the scientific community. Analyses were conducted to assess their impact in terms of citations, Twitter posts, and Mendeley reads. The Web of Science Core Collection database was queried to identify relevant publications. The numbers of their Twitter posts and Mendeley reads were recorded from the Altmetric and Mendeley databases, respectively. Spearman correlation tests were conducted to evaluate whether citation count was related to these metrics. When all 1786 publications were considered, citation count had a strong positive correlation with Mendeley reads (rho = 0.602, P < 0.001) but a weak negative correlation with Twitter posts (rho = −0.085, P < 0.001). When publications from the 2010s were specifically considered, citation count had an even stronger positive correlation with Mendeley reads (rho = 0.712, P < 0.001), whereas the correlation with Twitter posts became positive but remained weak (rho = 0.072, P = 0.012). Temporal profiles of citation and Mendeley counts showed that these guidelines and recommendations exerted a relatively stable influence in the field for years after publication.

Keywords: Guideline, Citation, Twitter, Mendeley, Ten simple rules, Bibliometric

Highlights

  • 1786 publications providing neuroimaging guidelines and recommendations were analyzed.

  • Citation count had a strong positive correlation with Mendeley reads.

  • Citation count had a weak negative correlation with Twitter posts.

  • Citation count had a weak positive correlation with Twitter posts for publications in the 2010s.

  • Temporal profiles of citation and Mendeley counts showed their stable influence.

1. Introduction

It often takes ample time for PhD students and junior researchers to learn the rules and master the skills needed to conduct and report research adequately. From time to time, experts and key opinion leaders in various research fields publish guidance papers that explain the recommended rules and the proper way to conduct and report research in a standardized manner. Such guidance is of paramount importance for the neuroimaging community, as study design, data processing, and reporting are highly complicated and can be very flexible (Botvinik-Nezer et al., 2020, Carp, 2012, Carp, 2013).

When certain research findings or recommendations become frequently cited, they can become a field standard to which fellow researchers strictly adhere. For instance, it was reported that cerebral processing of taste was affected by handedness, such that the dominant right-handed population would show different activations than the left-handed population (Cerf et al., 1998, Faurion et al., 1999). Since then, nearly 90% of functional magnetic resonance imaging (fMRI) studies on taste processing have recruited right-handed subjects only (Yeung et al., 2020). Adherence to guidelines can indeed improve the quality of reporting. For example, meta-analyses explicitly mentioning the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline reported more complete details on multiple aspects, such as information sources, search strategy, study selection, and risk of bias (Leclercq et al., 2019).

The research question was: did neuroimaging guidelines and recommendations disseminate well in the research community? Since these publications were intended to set standards or guide fellow researchers in conducting research, they should reach as broad an audience as possible. Currently, there are no well-established criteria for determining whether a guideline has been disseminated well. However, it has been suggested that dissemination efforts can be evaluated with quantitative and qualitative indicators (Ross-Hellauer et al., 2020). Examples of the former include formal citations, altmetrics, the number of dissemination events and their attendance, and media coverage. Some of these are not easily collected, such as the number of events held to disseminate the guidelines and recommendations. Meanwhile, brain imaging guidelines are technical, specialist topics and are unlikely to receive broad media coverage. After these considerations, it was decided to examine dissemination in the research community with three metrics: formal citations in publications, mentions on a social media platform (Twitter), and downloads/reads from a reference manager/academic social network (Mendeley). We evaluated these counts separately, and then evaluated whether they were correlated with each other to determine if they were similarly representative.

2. Materials and methods

The conventional way to assess the academic impact of publications is to examine their citation counts. In June 2020, we queried the Web of Science Core Collection database (www.webofknowledge.com) with a series of search strings listed in Table 1. The search eventually yielded 1786 publications. The temporal citation data of the publications were recorded, together with the number of authors and number of pages. To track the number of times they were mentioned in Twitter posts, their DOIs and/or PMIDs were loaded into the Altmetric database (https://www.altmetric.com/explorer/login). In addition, to track how many times they were read by Mendeley users, their titles were entered into Mendeley (https://www.mendeley.com/search/) to manually retrieve their records.

Table 1.

The search strategy used to query Web of Science database to identify relevant publications.

Steps
Search strategy
#1:
Title/abstract/author keywords contain ("good practice*" OR "best practice*" OR guideline* OR recommend* OR "user* guide" OR "hitchhiker* guide" OR "simple rule*" OR "walkthough*" OR "tutorial*" OR "step-by-step")
#2A:
Title contains (neuroimaging OR fMRI OR "functional magnetic*" OR "functional MRI" OR "brain imaging" OR VBM OR "voxel-based morphometr*" OR "statistical parametric mapping" OR connectome* OR "diffusion tensor*" OR ("diffusion weight*" AND brain) OR DTI OR (DWI AND brain) OR ("positron emission*" AND brain) OR magnetoencephalogr* OR MEG OR EEG OR electroencephalogr* OR "event related potential*" OR ERP OR fNIRS OR "functional near-infrared spectroscop*" OR "functional NIRS" OR ("magnetic resonance spectroscopy" AND brain) OR (MRS AND brain))
#2B:
Web of Science category equals to ("Neuroimaging") AND Title/abstract/keywords contain (brain OR neur* OR cereb*)
#3:
(#1 AND #2A) OR (#1 AND #2B)
#4:
Title/abstract/keywords should not contain (PRISMA OR STROBE OR CONSORT OR NICE)
#5:
Document type equals to (Article OR letter OR note OR review OR editorial material OR book chapter) and publication language equals to (English)
#6:
Manual exclusion of irrelevant publications in the remaining literature set. For example, MEG might represent methyl glucoside and ERP might represent enterprise resource planning. Publications deemed irrelevant were hence removed during this step.

Shapiro-Wilk tests were performed to evaluate whether the counts of citations, Twitter posts, and Mendeley reads were normally distributed. None of them was (W = 0.112, 0.267, and 0.294, respectively, all P < 0.001). Spearman correlation tests were therefore performed to evaluate whether citation count was associated with Twitter and Mendeley counts, for all papers and for papers published since 2010, respectively. As an exploratory analysis, a Mann-Whitney U test was performed to evaluate whether publications in core journals were cited more than those in non-core journals. Core journals were defined as the journals that together accounted for one-third of the publications, following Bradford's law (Bradford, 1934). Statistical analyses were conducted with SPSS 26.0 (IBM, New York, USA). Tests with P < 0.05 were considered statistically significant.
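For readers who prefer open tooling over SPSS, the same three tests can be run with SciPy. This is an illustrative sketch only, not the authors' code; the synthetic counts below merely stand in for the real citation, Twitter, and Mendeley data.

```python
# Sketch of the Shapiro-Wilk, Spearman, and Mann-Whitney analyses with SciPy.
# The data are invented, heavily skewed counts mimicking bibliometric data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
citations = rng.negative_binomial(1, 0.05, size=500)          # skewed, like citations
reads = citations * 2 + rng.negative_binomial(1, 0.1, 500)     # correlated "Mendeley reads"

# Shapiro-Wilk: W near 1 suggests normality; skewed counts typically reject it
w, p_norm = stats.shapiro(citations)

# Spearman rank correlation between citation and read counts
rho, p_rho = stats.spearmanr(citations, reads)

# Mann-Whitney U: compare citations between two (here arbitrary) journal groups
u, p_u = stats.mannwhitneyu(citations[:250], citations[250:],
                            alternative="two-sided")

print(f"W={w:.3f} (P={p_norm:.2g}), rho={rho:.3f}, U={u:.0f}")
```

Because the counts are non-normal, the rank-based Spearman and Mann-Whitney tests are the appropriate choices, matching the analysis described above.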

3. Results

The 1786 guidelines and recommendations were mainly published in the 2010s (Fig. 1). Overall, the publications had a mean ± SD of 41.3 ± 220.0 citations, or 4.7 ± 16.1 citations per year. Meanwhile, 10.1% of them remained uncited at the time of writing. Across all decades, the publications had an average of 5.9 ± 5.8 authors and 10.7 ± 9.2 pages. Specifically, the mean number of authors per paper rose gradually across the decades (1980s, 2.3 ± 5.0; 1990s, 4.2 ± 2.9; 2000s, 4.9 ± 3.3; 2010s, 6.4 ± 6.5), whereas the mean number of pages per paper remained stable (1980s, 11.6 ± 14.6; 1990s, 9.8 ± 14.3; 2000s, 10.1 ± 13.5; 2010s, 10.9 ± 6.2).

Fig. 1.

Fig. 1

Number of guidelines and recommendations published in each decade and their ratio of uncitedness.

The attention gained by the publications per decade in terms of citations, Twitter posts, and Mendeley reads is listed in Table 2. Mean Twitter posts per publication were highest for the 2010s, whereas mean citations and Mendeley reads per publication were highest for the 2000s, implying that the impact in terms of social media dissemination occurred more quickly in the Internet age. When all publications were considered, citation count had a strong positive correlation with Mendeley reads (rho = 0.602, P < 0.001) but a weak negative correlation with Twitter posts (rho = −0.085, P < 0.001). Since Twitter and Mendeley are relatively new online platforms that became active in the 2010s (especially Twitter; see Table 2), publications from the 2010s were then specifically considered. Here, citation count had an even stronger positive correlation with Mendeley reads (rho = 0.712, P < 0.001), whereas the correlation with Twitter posts became positive but remained weak (rho = 0.072, P = 0.012) (Fig. 2).

Table 2.

Mean counts of citations, Twitter posts, and Mendeley reads per publication, by decade of publication.

Decade Citation Twitter post Mendeley read
1980s 13.8 ± 27.1 0 16.1 ± 49.7
1990s 47.8 ± 88.3 0.0 ± 0.1 21.8 ± 39.4
2000s 94.3 ± 442.3 0.4 ± 2.2 128.4 ± 403.1
2010s 23.8 ± 66.3 9.3 ± 28.8 76.0 ± 138.5

The figures are mean ± SD.

Fig. 2.

Fig. 2

Scatter plots of citation count against (A) Twitter, and (B) Mendeley counts for publications in the 2010s.

The temporal profiles of citations, Twitter posts, and Mendeley reads are illustrated in Fig. 3. The profiles differed markedly across the three metrics. Mean annual citations rose sharply during the first 3 years, then flattened and plateaued at 6–7 citations per year until year 16. Mean Twitter posts exceeded 8 in the first year, then dropped to fewer than 1 post per year thereafter. Meanwhile, mean Mendeley reads remained stable within the range of 15–20 reads per year for the first 9 years.

Fig. 3.

Fig. 3

Temporal profiles of annual (A) citations, (B) Twitter posts, and (C) Mendeley reads. Please note that all publications were included in computing the citation profile, whereas only publications from 2011 onward were included in the Twitter and Mendeley profiles, as the first Twitter post for the analyzed literature set was made in 2011.
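The age-based averaging behind such temporal profiles can be sketched as follows. This is an illustrative reconstruction on invented records, not the authors' actual pipeline: each paper's per-year citation counts are realigned to "years since publication" and averaged across papers at each age.

```python
# Sketch: mean annual-citation profile by article age (0 = publication year).
# The two records below are hypothetical, for illustration only.
from collections import defaultdict

papers = [
    {"pub_year": 2012, "cites_by_year": {2012: 1, 2013: 4, 2014: 7, 2015: 6}},
    {"pub_year": 2015, "cites_by_year": {2015: 0, 2016: 3, 2017: 5}},
]

totals = defaultdict(int)   # summed citations at each article age
counts = defaultdict(int)   # number of papers contributing at that age

for paper in papers:
    for year, c in paper["cites_by_year"].items():
        age = year - paper["pub_year"]
        totals[age] += c
        counts[age] += 1

# Mean citations per paper at each year after publication
profile = {age: totals[age] / counts[age] for age in sorted(totals)}
print(profile)  # → {0: 0.5, 1: 3.5, 2: 6.0, 3: 6.0}
```

Averaging by age rather than calendar year is what makes papers of different vintages comparable in a single profile.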

Eight core journals accounted for one-third of the publications (N = 606), namely NeuroImage, American Journal of Neuroradiology, Journal of Neurointerventional Surgery, Neuroradiology, Journal of Clinical Neurophysiology, Human Brain Mapping, Journal of Neuroimaging, and Stereotactic and Functional Neurosurgery. Publications from these core journals were cited significantly more often than those from non-core journals (median = 20.0 vs 12.0, P < 0.001).
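The Bradford-style partition used to define these core journals reduces to a simple greedy computation: rank journals by output and take the smallest top-ranked set covering one-third of the publications. A minimal sketch, on an invented journal distribution:

```python
# Sketch of the Bradford "core" partition: the smallest set of most-productive
# journals jointly accounting for one-third of all publications.
# Journal names and counts here are hypothetical.
from collections import Counter

journals = (["A"] * 30 + ["B"] * 20 + ["C"] * 15 + ["D"] * 10 +
            ["E"] * 8 + ["F"] * 7 + ["G"] * 6 + ["H"] * 4)   # 100 papers total

per_journal = Counter(journals)
target = len(journals) / 3          # one-third of the publication set

core, covered = [], 0
for journal, n in per_journal.most_common():   # most productive first
    if covered >= target:
        break
    core.append(journal)
    covered += n

print(core, covered)  # → ['A', 'B'] 50
```

With 100 papers, journals A and B alone pass the one-third threshold, so they form the core zone, analogous to the 8 journals covering 606 of the 1786 publications above.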

For illustrative purposes, Table 3 lists examples of titles with recurring phrases in the analyzed publication set, beyond the explicit terms guideline and recommendation. Papers using identical keywords in the title did not receive the same amount of attention.

Table 3.

Examples of publications sharing recurring phrases.

Title Citations Twitter posts Mendeley reads
Ten simple rules (N = 5)
Ten simple rules for dynamic causal modeling 419 8 1709
Ten simple rules for reporting voxel-based morphometry studies 166 3 730
Ten simple rules for neuroimaging meta-analysis 48 141 201
Understanding DCM: Ten simple rules for the clinician 37 38 478
Ten simple rules for predictive modeling of individual differences in neuroimaging 6 166 159
Step-by-step or N-step (N = 6)
Topographic ERP analyses: A step-by-step tutorial review 514 1 1000
FMRI retinotopic mapping - Step by step 127 0 572
A 12-step user guide for analyzing voxel-wise gray matter asymmetries in statistical parametric mapping (SPM) 47 18 287
Step-by-Step: The Effects of Physical Practice on the Neural Correlates of Locomotion Imagery Revealed by fMRI 21 0 63
Dynamic spatiotemporal brain analyses using high-performance electrical neuroimaging, Part II: A step-by-step tutorial 7 2 76
A step-by-step tutorial on using the cognitive architecture ACT-R in combination with fMRI data 5 2 60
Good / best practice (N = 17)
Good practice for conducting and reporting MEG research 245 13 1066
Best practices in data analysis and sharing in neuroimaging using MRI 97 199 803
The infant EEG mu rhythm: Methodological considerations and best practices 46 4 141
Electroencephalographic neurofeedback: Level of evidence in mental and brain disorders and suggestions for good clinical practice 26 14 279
Best practices for repeated measures ANOVAs of ERP data: Reference, regional channels, and robust ANOVAs 19 2 133
A procedural framework for good imaging practice in pharmacological fMRI studies applied to drug development #2: protocol optimization and best practices 16 0 70
A procedural framework for good imaging practice in pharmacological fMRI studies applied to drug development #1: processes and requirements 15 0 55
Best practice for single-trial detection of event-related potentials: Application to brain-computer interfaces 11 7 98
Best Current Practice for Obtaining High Quality EEG Data During Simultaneous fMRI 11 3 89
Good practices in EEG-MRI: The utility of retrospective synchronization and PCA for the removal of MRI gradient artefacts 11 0 76
Good practice in food-related neuroimaging 5 50 48
Data Citation in Neuroimaging: Proposed Best Practices for Data Identification and Attribution 5 18 47
Neuroimaging Epigenetics: Challenges and Recommendations for Best Practices 4 5 30
Preventing Skin Breakdown in EEG Patients: Best Practice Techniques 4 0 14
A Reproducible MEG/EEG Group Study With the MNE Software: Recommendations, Quality Assessments, and Good Practices 2 10 57
Practical guidelines for clinical magnetoencephalography - Another step towards best practice 1 1 15
Neuroimaging in dementia: how best to use the guidelines? 1 0 5

4. Discussion

Based on our analyzed literature set, citation count was strongly and positively correlated with the number of Mendeley reads, whereas a much weaker correlation existed for the number of Twitter posts, which could be either positive or negative depending on the dataset used. A previous analysis of nearly 400,000 biomedical papers concluded that the number of Mendeley reads had a stronger positive correlation with citations than the number of tweets (Haustein et al., 2014). The correlation between citation count and Mendeley reads was moderate to strong across eight academic fields spanning the arts to the sciences (Thelwall, 2018), and for particular journals such as Nature and Science (Li et al., 2012). The current study showed that these findings are applicable to neuroimaging guidelines and recommendations. In particular, the magnitude of the Spearman correlation reported here, 0.602–0.712, was comparable, if not superior, to that in other medical and healthcare fields such as biological psychiatry, pharmacy, and molecular medicine (Thelwall, 2017). Admittedly, Mendeley reads can only provide an educated, not a precise, estimate of the extent of dissemination of the neuroimaging guidelines and recommendations. Interested neuroimagers may read the articles in soft or hard copy downloaded from various online repositories (e.g., ResearchGate or arXiv) or distributed by colleagues during lab meetings, workshops, and conferences. Although Mendeley readers can register an article, and thus contribute one "read", without really reading it, previous research reported that this was seldom the case (Mohammadi et al., 2016). Instead of passively tracking the number of reads, researchers could maximize the accessibility and visibility of guidelines and recommendations by publishing them open access, so that readers can reach the full text in one click. That might further increase the chance of being cited.

In contrast, social media platforms are usually used to spread novel, ground-breaking findings that are eye-catching. Previous studies consistently reported that guidelines seldom made it into the top 100 publications with the highest Altmetric Attention Scores, as in the cases of neuroimaging (1%) (Kim et al., 2019a), neurointervention (4%) (Kim et al., 2019b), and medical imaging (8%) (Moon et al., 2020). In particular, few Twitter feeds focus on neuroscience communication, and there is ample room for growth (Illes et al., 2010). Moreover, the relationship between the number of Twitter posts and citations was found to be mixed and largely dependent on subject and publication year (Xia et al., 2016). Unlike the biological sciences, the physical and chemical sciences did not always demonstrate a significant positive correlation; for the biological sciences, the magnitude of the positive correlation varied from 0.153 to 0.366 (Spearman's rho) (Xia et al., 2016). We speculate that Twitter has only become a major dissemination tool for some scientists over the last few years, so its effect on citations has yet to be firmly established.

Guidelines and recommendations published in core journals were cited more, implying a "snowball effect" of both publication and citation attention. Therefore, field experts who curate documents targeting a broad spectrum of the neuroimaging field should consider publishing them in core journals to maximize visibility. At the same time, readers should understand that guidelines with low citation counts are not trivial or unimportant; they may simply have a narrower scope. At first it could seem counter-intuitive that papers published in the distant past had a higher ratio of remaining uncited. We speculate that this could be due to a "PubMed effect" (Huh, 2013): researchers since the mid-1990s may have gradually developed the habit of searching PubMed to identify, and thus cite, relevant papers, compared with the distant past when researchers relied on reading the respective journals in print. This might partially explain why so many publications from the 1980s remained uncited. Another potential reason is that Web of Science may have incomplete data on citations made in the distant past. Finally, the growth of the neuroscience literature over recent decades has been enormous in terms of both publication and citation counts (Yeung et al., 2017a, Yeung et al., 2017b), which might also explain the reduced ratio of uncited papers among those published in recent years.

This study had some limitations. First, Altmetric provides data on mentions in multiple channels besides Twitter, such as news outlets and Facebook. Future studies may further explore the associations of these metrics. Second, views and downloads through other sources such as Google Scholar, ResearchGate, and paid reference manager software such as EndNote were not considered. Readers should be aware that Mendeley is only one of the sources researchers use to find relevant papers.

The key message, perhaps, is that guidelines and recommendations in neuroimaging were, in general, fairly well read and cited by fellow researchers, though with huge variation in the counts of citations, Twitter posts, and Mendeley reads. Currently there is no simple way to search the literature and filter for guidelines and recommendations, as literature databases such as Web of Science and Scopus do not offer these as document types. PubMed does tag relevant publications with "guideline" among the MeSH terms; however, such tagging is not always accurate (Yeung, 2019). Meanwhile, the Organization for Human Brain Mapping (OHBM) encourages newly developed best practices to be published in its Aperture journal, but the latter was relatively new at the time of writing and had yet to gain traction. Therefore, it would be a good idea to put more effort into actively disseminating guidelines and recommendations to a broader audience. In terms of disseminating neuroscientific findings, arguments put forward include actively stemming the flow of misconceptions, informing public policy, and gathering public and private support and funding to accelerate the development of the field (Eagleman, 2013, Heagerty, 2015, van Atteveldt et al., 2014).

5. Conclusions

There were many guidelines and recommendations in the neuroimaging literature. They varied widely in citations, and one-tenth of them were uncited. The number of Mendeley reads was positively correlated with citation count, whereas Twitter count had a very weak relationship with citation count. Dissemination through Mendeley could be an effective way to reach a broader audience of fellow neuroimagers and thereby help standardize the field.

Ethics approval

Not applicable.

Funding

This work was supported by departmental funds only.

CRediT authorship contribution statement

AY: Conceptualization, Data curation, Formal analysis, Writing – original draft. PS: Data curation, Formal analysis, Writing – original draft. SB: Conceptualization, Writing – review & editing.

Conflicts of Interest

The authors declared no conflicts of interest.

Code availability

Not applicable.

Data availability

Data are available from Web of Science, Altmetric, and Mendeley databases, respectively.

References

  1. Botvinik-Nezer R., Holzmeister F., Camerer C.F., Dreber A., Huber J., Johannesson M., Kirchler M., Iwanir R., Mumford J.A., Adcock R.A. Variability in the analysis of a single neuroimaging dataset by many teams. Nature. 2020;582:84–88. doi: 10.1038/s41586-020-2314-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bradford S.C. Sources of information on specific subjects. Engineering. 1934;137:85–86. [Google Scholar]
  3. Carp J. The secret lives of experiments: methods reporting in the fMRI literature. Neuroimage. 2012;63:289–300. doi: 10.1016/j.neuroimage.2012.07.004. [DOI] [PubMed] [Google Scholar]
  4. Carp J. Better living through transparency: improving the reproducibility of fMRI results through comprehensive methods reporting. Cogn. Affect. Behav. Neurosci. 2013;13:660–666. doi: 10.3758/s13415-013-0188-0. [DOI] [PubMed] [Google Scholar]
  5. Cerf B., Lebihan D., Van de Moortele P., Mac Leod P., Faurion A. Functional lateralization of human gustatory cortex related to handedness disclosed by fMRI study. Ann. N. Y. Acad. Sci. 1998;855:575–578. doi: 10.1111/j.1749-6632.1998.tb10627.x. [DOI] [PubMed] [Google Scholar]
  6. Eagleman D.M. Why public dissemination of science matters: a manifesto. J. Neurosci. 2013;33:12147–12149. doi: 10.1523/JNEUROSCI.2556-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Faurion A., Cerf B., Van De Moortele P.-F., Lobel E., Mac Leod P., Le Bihan D. Human taste cortical areas studied with functional magnetic resonance imaging: evidence of functional lateralization related to handedness. Neurosci. Lett. 1999;277:189–192. doi: 10.1016/s0304-3940(99)00881-2. [DOI] [PubMed] [Google Scholar]
  8. Haustein S., Larivière V., Thelwall M., Amyot D., Peters I. Tweets vs. Mendeley readers: how do these two social media metrics differ? IT Inf. Technol. 2014;56:207–215. [Google Scholar]
  9. Heagerty B. Dissemination does not equal public engagement. J. Neurosci. 2015;35:4483–4486. doi: 10.1523/JNEUROSCI.4408-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Huh S. Citation analysis of the Korean journal of urology from Web of science, Scopus, Korean medical citation index, KoreaMed synapse, and Google scholar. Korean J. Urol. 2013;54:220–228. doi: 10.4111/kju.2013.54.4.220. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Illes J., Moser M.A., McCormick J.B., Racine E., Blakeslee S., Caplan A., Hayden E.C., Ingram J., Lohwater T., McKnight P. Neurotalk: improving the communication of neuroscience research. Nat. Rev. Neurosci. 2010;11:61–69. doi: 10.1038/nrn2773. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Kim E.S., Yoon D.Y., Kim H.J., Lee K., Kim Y., Bae J.S., Lee J.-H. The most mentioned neuroimaging articles in online media: a bibliometric analysis of the top 100 articles with the highest Altmetric Attention Scores. Acta Radiol. 2019;60:1680–1686. doi: 10.1177/0284185119843226. [DOI] [PubMed] [Google Scholar]
  13. Kim H.J., Yoon D.Y., Kim E.S., Yun E.J., Jeon H.J., Lee J.Y., Cho B.-M. The most mentioned neurointervention articles in online media: a bibliometric analysis of the top 101 articles with the highest altmetric attention scores. J. Neurointerv. Surg. 2019;11:528–532. doi: 10.1136/neurintsurg-2018-014368. [DOI] [PubMed] [Google Scholar]
  14. Leclercq V., Beaudart C., Ajamieh S., Rabenda V., Tirelli E., Bruyère O. Meta-analyses indexed in PsycINFO had a better completeness of reporting when they mention PRISMA. J. Clin. Epidemiol. 2019;115:46–54. doi: 10.1016/j.jclinepi.2019.06.014. [DOI] [PubMed] [Google Scholar]
  15. Li X., Thelwall M., Giustini D. Validating online reference managers for scholarly impact measurement. Scientometrics. 2012;91:461–471. [Google Scholar]
  16. Mohammadi E., Thelwall M., Kousha K. Can Mendeley bookmarks reflect readership? A survey of user motivations. J. Assoc. Inf. Sci. Technol. 2016;67:1198–1209. [Google Scholar]
  17. Moon J.Y., Yun E.J., Yoon D.Y., Seo Y.L., Cho Y.K., Lim K.J., Hong J.H. Analysis of the altmetric top 100 articles with the highest altmetric attention scores in medical imaging journals. Jpn. J. Radiol. 2020;38:630–635. doi: 10.1007/s11604-020-00946-0. [DOI] [PubMed] [Google Scholar]
  18. Ross-Hellauer T., Tennant J.P., Banelytė V., Gorogh E., Luzi D., Kraker P., Pisacane L., Ruggieri R., Sifacaki E., Vignoli M. Ten simple rules for innovative dissemination of research. PLoS Comput. Biol. 2020;16 doi: 10.1371/journal.pcbi.1007704. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Thelwall M. Why do papers have many Mendeley readers but few Scopus-indexed citations and vice versa? J. Librariansh. Inf. Sci. 2017;49:144–151. [Google Scholar]
  20. Thelwall M. Early Mendeley readers correlate with later citation counts. Scientometrics. 2018;115:1231–1240. [Google Scholar]
  21. van Atteveldt N.M., van Aalderen-Smeets S.I., Jacobi C., Ruigrok N. Media reporting of neuroscience depends on timing, topic and newspaper type. PLoS One. 2014;9 doi: 10.1371/journal.pone.0104780. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Xia F., Su X., Wang W., Zhang C., Ning Z., Lee I. Bibliographic analysis of nature based on Twitter and Facebook altmetrics data. PLoS One. 2016;11 doi: 10.1371/journal.pone.0165997. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Yeung A.W.K. Comparison between Scopus, Web of Science, PubMed and publishers for mislabelled review papers. Curr. Sci. 2019;116:1909–1914. [Google Scholar]
  24. Yeung A.W.K., Goto T.K., Leung W.K. A bibliometric review of research trends in neuroimaging. Curr. Sci. 2017;112:725–734. [Google Scholar]
  25. Yeung A.W.K., Goto T.K., Leung W.K. The changing landscape of neuroscience research, 2006–2015: a bibliometric study. Front. Neurosci. 2017;11:120. doi: 10.3389/fnins.2017.00120. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Yeung A.W.K., Wong N.S.M., Eickhoff S.B. Empirical assessment of changing sample‐characteristics in task‐fMRI over two decades: an example from gustatory and food studies. Hum. Brain Mapp. 2020;41:2460–2473. doi: 10.1002/hbm.24957. [DOI] [PMC free article] [PubMed] [Google Scholar]
