Dear Editor,
Genetics in the early 2000s consisted primarily of studies in small samples from individual research centres. Following the successful initial identification of very rare genetic variants which cause large effects, the search continued for individual genes which might explain a substantial proportion of the phenotypic variance in the wider population. However, it soon became clear that such genes simply do not exist, and that nearly all conclusions of the latter studies were incorrect [1]. To rise to the challenge, the field collectively moved towards collaborative research, yielding multi-centre sample sizes of up to tens of thousands, and genetics is now widely considered to produce robust scientific results [2]. During recent years, researchers within several fields of neuroimaging research, particularly MRI and fMRI, have begun to make this transition to data sharing and collaborative research, facilitated by technical developments in data handling and analysis [3–5].
Just like genetics, clinical positron emission tomography (PET) research has provided answers to numerous research questions across several domains. For example, the dopamine transporter shows clear decreases in Parkinson’s disease, dopamine synthesis capacity is increased in schizophrenia, and many brain neurotransmission proteins show distinct decreases across the lifespan. However, for the well-established tracers and targets, it can be argued that we have already picked most of the low-hanging fruit. In the continued quest to break new ground, it is likely that most of the studied effects will be small, which means that large sample sizes will be needed to reach the threshold of statistical significance [6]. If we instead continue to use small sample sizes to search for subtle true effects, we run the risk of fooling ourselves into seeing “patterns” in what is really noise, leading to reporting of spurious effects. Further, with small samples, even when we correctly identify true effects as significant, our effect size estimates will be biased upward [6, 7]. We acknowledge that there will always be an important role for small exploratory studies for generating new hypotheses, but the subsequent confirmation and quantitative description of these hypotheses simply requires higher standards of evidence to move the field forward. Importantly, the problem of unreliable findings is by no means restricted to PET research, as has recently been evidenced by “replication crises” in other fields such as psychology, economics, and preclinical drug discovery research [8–12].
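The relationship between effect size and required sample size can be made concrete with a simple power calculation. The sketch below uses the standard normal-approximation formula for a two-sample comparison; the effect sizes and thresholds are illustrative assumptions, not values drawn from any particular PET study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants per group needed for a two-sample
    comparison to detect standardised effect size d
    (normal approximation to the t-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # power requirement
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A large effect is within reach of a typical single-centre PET sample,
# but a subtle effect requires multi-centre numbers.
for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: ~{n_per_group(d)} participants per group")
```

The approximation slightly understates the exact t-test requirement, but the order of magnitude is what matters: detecting a subtle effect (d = 0.2) takes roughly fifteen times the sample of a large one (d = 0.8).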
Unfortunately, large samples in the field of PET are unattainable for many individual research centres, owing to the high cost and technical difficulty of the method [13]. Traditionally, the proposed remedy to the issue of small sample size studies has been to perform meta-analyses to gain an overall, field-wide estimate of the studied effect. However, traditional meta-analysis has its own set of limitations. If the individual studies entered into a meta-analysis consist of biased effect size estimates, then the overall effect size will also be misleading [14–16]. It is also not possible to control for confounders, or to account for differences in outcome measures between studies [17]. One solution is to instead make use of the original data points collected by individual research centres. In the previous issue of EJNMMI, we report the results of such a multi-centre collaboration, or “mega-analysis” (Tuisku et al. [18]). By synthesising translocator protein (TSPO)–binding data from three different centres, effects were shown for age, BMI, and sex on TSPO, some of which were not evident in previous studies using smaller samples. Apart from informing the design and interpretation of TSPO PET studies, the results may also open up new avenues of research into the biological role of TSPO.
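The core mechanics of a traditional meta-analysis, and its central weakness, can be shown in a few lines. The sketch below implements standard inverse-variance weighted fixed-effect pooling on hypothetical study-level effect sizes; note that if each small input study over-estimates the true effect, the pooled estimate inherits that bias, however precise its standard error looks.

```python
from math import sqrt

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance weighted fixed-effect meta-analysis.
    Returns the pooled estimate and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = 1 / sqrt(sum(weights))
    return pooled, pooled_se

# Hypothetical study-level effect sizes (Cohen's d) and standard errors.
d_values = [0.9, 0.7, 1.1]
se_values = [0.35, 0.30, 0.40]

pooled, se = fixed_effect_meta(d_values, se_values)
print(f"pooled d = {pooled:.2f} (SE {se:.2f})")
```

A mega-analysis sidesteps this by pooling the individual participant data themselves, so that confounders and between-centre differences can be modelled directly rather than averaged over.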
Multi-centre collaboration and data sharing entail certain considerations. With more researchers working on the same problem, the risk for differences in opinion regarding outcome measures, statistical analyses, and even the nature of the hypotheses increases [19]. We have found it useful to formally make these decisions in advance. A Memorandum of Understanding (MoU) may serve as an initial step, containing rules regarding data handling, the general outline of the analysis, as well as the number and order of authors. This document can then be complemented by a specific pre-registration protocol for the analysis, detailing how data will be synthesised, which hypotheses will be tested, which statistical models will be used for inference, etc. [19, 20]. When all authors have come to an agreement on the content, the protocol can be locked and uploaded to a date-stamped public repository. In the ensuing analysis, deviations from the pre-registration are still possible, provided that they are reported in addition to the original protocol.
Importantly, sharing of individual participant outcome measures, such as binding values, is only the first step. By using data in as raw a form as possible, the data processing in the combined analysis can be made more homogeneous. This can be achieved either by using a centralised analysis, or by using reproducible, open-source tools for which all procedures are scripted and can be run in an identical fashion [21, 22]. Hence, in the case of PET studies, the sharing of time-activity curves is better than sharing of binding outcomes, while raw image data is better still, allowing for homogeneous data analysis all the way from image processing to pharmacokinetic modelling [23]. An additional measure to minimise between-centre differences would then be to use harmonised protocols for data collection.
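As an illustration of why raw data enables more homogeneous processing: even a simple outcome such as a target-to-reference area-under-curve ratio depends on choices (integration method, time window) that are easiest to harmonise when the underlying time-activity curves themselves are shared. The sketch below uses made-up curve values and trapezoidal integration; it is a toy outcome measure, not a substitute for full pharmacokinetic modelling.

```python
def trapezoid_auc(times, values):
    """Area under a time-activity curve by the trapezoidal rule."""
    return sum(
        (t1 - t0) * (v0 + v1) / 2
        for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:])
    )

# Hypothetical frame mid-times (minutes) and radioactivity concentrations
# for a target region and a reference region.
times = [1, 5, 10, 20, 40, 60, 90]
target = [12.0, 18.0, 16.0, 12.0, 8.0, 6.0, 4.0]
reference = [10.0, 14.0, 11.0, 8.0, 5.0, 3.5, 2.5]

auc_ratio = trapezoid_auc(times, target) / trapezoid_auc(times, reference)
print(f"AUC ratio (target/reference) = {auc_ratio:.2f}")
```

If each centre instead shares only a final binding value, computed with its own integration scheme and time window, such choices can no longer be harmonised after the fact.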
The sharing of raw image data has historically been challenging, as storage and processing of files can differ between, or even within, research groups. These complications are effectively resolved by the recently developed Brain Imaging Data Structure (BIDS) [3]. BIDS consists of a set of standards for storing brain imaging data, such that preprocessing and analysis can be performed in a standardised fashion, further simplified by BIDS Apps [4]. Further, the OpenNeuro repository allows for open sharing of neuroimaging data according to the BIDS standard, and is already in wide use by the MRI, EEG, and MEG research communities. Today, there are also a limited number of PET measurements available on this platform (e.g. https://openneuro.org/datasets/ds001421/versions/1.0.1). At the NeuroReceptor Mapping conference in London 2018, a proposal to set up an open PET data sharing archive using the BIDS standard received unanimous support.
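For readers unfamiliar with the standard, a BIDS dataset is at heart a fixed directory and file-naming convention plus JSON metadata. The sketch below builds a minimal, hypothetical PET-like layout using only the Python standard library; the entities actually required for PET data are defined by the BIDS specification and its PET extension, so treat this purely as an illustration of the general structure.

```python
import json
from pathlib import Path
from tempfile import mkdtemp

root = Path(mkdtemp())  # stand-in for a real dataset directory

# Dataset-level metadata file required at the top of every BIDS dataset.
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "Example PET dataset", "BIDSVersion": "1.2.0"})
)

# One subject, one PET measurement: image file plus a JSON sidecar
# describing the acquisition (keys here are illustrative).
pet_dir = root / "sub-01" / "pet"
pet_dir.mkdir(parents=True)
(pet_dir / "sub-01_pet.nii.gz").touch()  # image data would go here
(pet_dir / "sub-01_pet.json").write_text(
    json.dumps({"TracerName": "PBR28", "InjectedRadioactivity": 400})
)

for path in sorted(root.rglob("*")):
    print(path.relative_to(root))
```

Because every centre's data then lands in the same predictable locations, scripted pipelines and BIDS Apps can process pooled datasets without per-centre special-casing.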
By sharing individual participant data, regulatory aspects regarding data integrity come into play. The principle for data sharing adopted by the EU commission is that of “as open as possible, as closed as necessary” [24]. Contrary to common belief, the General Data Protection Regulation (GDPR) is designed to facilitate sharing of research data and collaboration, provided that sufficient steps have been taken to perform de-identification. A full interpretation of the implications of this new legislation is currently underway for many research centres and countries, and at present local guidelines may differ. In the meantime, we encourage researchers to begin asking research participants, as early as possible, for permission to openly share research data from ongoing and planned PET studies, in order to minimise future legal obstacles. Efforts are underway to assist researchers in this matter, by creating template forms for informed consent which comply with all regulatory statutes (https://open-brain-consent.readthedocs.io).
Within the PET brain imaging field, we are now at a crossroads. Will we continue to work solely within individual research centres, using small samples to yield incomplete, or even misleading, results from confirmatory studies; or will we make the transition to multi-centre collaboration and data sharing as exemplified by the genetics community? We hope for the latter, sooner rather than later, in order to ensure the continued success of PET research in driving our understanding of the biochemical basis of brain function and dysfunction.
Funding information
SC is supported by the Swedish Research Council (Grant no. 523-2014-3467).
Compliance with ethical standards
This article does not contain any studies with human participants or animals performed by any of the authors.
Conflict of interest
The authors declare they have no conflict of interest.
Footnotes
This article is part of the Topical Collection on Neurology
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Granville James Matheson and Pontus Plavén-Sigray contributed equally to this work.
References
- 1. Duncan LE, Ostacher M, Ballon J. How genome-wide association studies (GWAS) made traditional candidate gene studies obsolete. Neuropsychopharmacology. 2019;44(9):1518–1523. doi: 10.1038/s41386-019-0389-5.
- 2. Sullivan PF, Agrawal A, Bulik CM, Andreassen OA, Børglum AD, Breen G, Cichon S, Edenberg HJ, Faraone SV, Gelernter J, Mathews CA, Nievergelt CM, Smoller JW, O’Donovan MC, for the Psychiatric Genomics Consortium. Psychiatric genomics: an update and an agenda. Am J Psychiatry. 2018;175:15–27. doi: 10.1176/appi.ajp.2017.17030283.
- 3. Gorgolewski KJ, Auer T, Calhoun VD, Craddock RC, Das S, Duff EP, Flandin G, Ghosh SS, Glatard T, Halchenko YO, Handwerker DA, Hanke M, Keator D, Li X, Michael Z, Maumet C, Nichols BN, Nichols TE, Pellman J, Poline J-B, Rokem A, Schaefer G, Sochat V, Triplett W, Turner JA, Varoquaux G, Poldrack RA. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci Data. 2016;3:160044. doi: 10.1038/sdata.2016.44.
- 4. Gorgolewski KJ, Alfaro-Almagro F, Auer T, Bellec P, Capotă M, Chakravarty MM, Churchill NW, Cohen AL, Craddock RC, Devenyi GA, Eklund A, Esteban O, Flandin G, Ghosh SS, Guntupalli JS, Jenkinson M, Keshavan A, Kiar G, Liem F, Raamana PR, Raffelt D, Steele CJ, Quirion P-O, Smith RE, Strother SC, Varoquaux G, Wang Y, Yarkoni T, Poldrack RA. BIDS apps: improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLoS Comput Biol. 2017;13:e1005209. doi: 10.1371/journal.pcbi.1005209.
- 5. Poldrack RA, Barch DM, Mitchell JP, Wager TD, Wagner AD, Devlin JT, Cumba C, Koyejo O, Milham MP. Toward open sharing of task-based fMRI data: the OpenfMRI project. Front Neuroinform. 2013;7:12. doi: 10.3389/fninf.2013.00012.
- 6. Loken E, Gelman A. Measurement error and the replication crisis. Science. 2017;355:584–585. doi: 10.1126/science.aal3618.
- 7. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers E-J, Ware JJ, Ioannidis JPA. A manifesto for reproducible science. Nat Hum Behav. 2017;1:0021. doi: 10.1038/s41562-016-0021.
- 8. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712. doi: 10.1038/nrd3439-c1.
- 9. McNutt M. Reproducibility. Science. 2014;343:229. doi: 10.1126/science.1250475.
- 10. Begley CG, Ioannidis JPA. Reproducibility in science. Circ Res. 2015;116:116–126. doi: 10.1161/CIRCRESAHA.114.303819.
- 11. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349:aac4716. doi: 10.1126/science.aac4716.
- 12. Camerer CF, Dreber A, Forsell E, Ho T-H, Huber J, Johannesson M, Kirchler M, Almenberg J, Altmejd A, Chan T, Heikensten E, Holzmeister F, Imai T, Isaksson S, Nave G, Pfeiffer T, Razen M, Wu H. Evaluating replicability of laboratory experiments in economics. Science. 2016;351:1433–1436. doi: 10.1126/science.aaf0918.
- 13. Cumming P. PET neuroimaging: the white elephant packs his trunk? Neuroimage. 2014;84:1094–1100. doi: 10.1016/j.neuroimage.2013.08.020.
- 14. Lakens D, Hilgard J, Staaks J. On the reproducibility of meta-analyses: six practical recommendations. BMC Psychol. 2016;4:24. doi: 10.1186/s40359-016-0126-3.
- 15. van Elk M, Matzke D, Gronau QF, Guan M, Vandekerckhove J, Wagenmakers E-J. Meta-analyses are no substitute for registered replications: a skeptical perspective on religious priming. Front Psychol. 2015;6:1365. doi: 10.3389/fpsyg.2015.01365.
- 16. Inzlicht M, Gervais W, Berkman E. Bias-correction techniques alone cannot determine whether ego depletion is different from zero: commentary on Carter, Kofler, Forster, & McCullough, 2015. SSRN Electron J. 2015. doi: 10.2139/ssrn.2659409.
- 17. Plavén-Sigray P, Cervenka S. Meta-analytic studies of the glial cell marker TSPO in psychosis – a question of apples and pears? Psychol Med. n.d.
- 18. Tuisku J, Plavén-Sigray P, Gaiser EC, Airas L, Al-Abdulrasul H, Brück A, Carson RE, Chen M-K, Cosgrove KP, Ekblad L, Esterlis I, Farde L, Forsberg A, Halldin C, Helin S, Kosek E, Lekander M, Lindgren N, Marjamäki P, Rissanen E, Sucksdorff M, Varrone A, Collste K, Gallezot J-D, Hillmer A, Huang Y, Höglund CO, Johansson J, Jucaite A, Lampa J, Nabulsi N, Pittman B, Sandiego CM, Stenkrona P, Rinne J, Matuskey D, Cervenka S. Effects of age, BMI and sex on the glial cell marker TSPO — a multicentre [11C]PBR28 HRRT PET study. Eur J Nucl Med Mol Imaging. 2019;46(11):2329–2338. doi: 10.1007/s00259-019-04403-7.
- 19. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2:e124. doi: 10.1371/journal.pmed.0020124.
- 20. Plavén-Sigray P, Matheson GJ, Collste K, Ashok AH, Coughlin JM, Howes OD, Mizrahi R, Pomper MG, Rusjan P, Veronese M, Wang Y, Cervenka S. Positron emission tomography studies of the glial cell marker translocator protein in patients with psychosis: a meta-analysis using individual participant data. Biol Psychiatry. 2018;84:433–442. doi: 10.1016/j.biopsych.2018.02.1171.
- 21. Karjalainen T, Santavirta S, Kantonen T, Tuisku J, Tuominen L, Hirvonen J, Hietala J, Rinne J, Nummenmaa L. MAGIA: robust automated modelling and image processing pipeline for PET neuroinformatics. bioRxiv. 2019;604835.
- 22. Matheson GJ. kinfitr: reproducible PET pharmacokinetic modelling in R. bioRxiv. 2019;755751.
- 23. Nørgaard M, Ganz M, Svarer C, Feng L, Ichise M, Lanzenberger R, Lubberink M, Parsey RV, Politis M, Rabiner EA, Slifstein M, Sossi V, Suhara T, Talbot PS, Turkheimer F, Strother SC, Knudsen GM. Cerebral serotonin transporter measurements with [11C]DASB: a review on acquisition and preprocessing across 21 PET centres. J Cereb Blood Flow Metab. 2019;39:210–222. doi: 10.1177/0271678X18770107.
- 24. H2020 Programme. Guidelines on FAIR Data Management in Horizon 2020. 2016. http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf (accessed July 1, 2019).
