Author manuscript; available in PMC: 2018 Nov 1.
Published in final edited form as: Trends Cogn Sci. 2018 Mar 22;22(5):368–369. doi: 10.1016/j.tics.2018.02.009

Getting serious about variation: Lessons for clinical neuroscience

Alexander J Shackman 1,2,3,*, Andrew S Fox 4,5,*
PMCID: PMC5911206  NIHMSID: NIHMS948477  PMID: 29576465

LETTER

Psychiatric illness is a leading burden on global public health and the economy. Recent decades have witnessed the development of powerful new tools for quantifying variation in the genome and brain, leading to initial optimism that the miseries of psychopathology might soon be defined and diagnosed on the basis of objective biological assays. But as yet, new cures or other major breakthroughs have proven elusive. In psychiatric genetics, the first wave of small-scale studies proved difficult to replicate, leading some to question whether gene hunting should be abandoned altogether [1]. The resulting period of crisis and reflection ultimately motivated the widespread adoption of more rigorous analytic approaches and the development of mega-cohorts and consortia with the tens of thousands of subjects needed to reliably detect subtle gene-disease associations [1]. This second-wave research demonstrates that the amount of disease-relevant information captured by the vast majority of individual genetic variants (‘loci’) is vanishingly small (<1%) [1]. In short, while genomic approaches have proven valuable for discovering new molecular targets and validating existing therapeutics, they are not useful for routine screening or diagnosis. Whither neuroscience? In their thoughtful review, Holmes and Patrick (H&P) make it clear that neuroimaging research faces a similar kind of crisis, and that there is no universal, unconditionally optimal pattern of brain structure or function [2]. These insights have enormous implications for a range of stakeholders, from researchers and clinicians to funding agencies and policy makers. Here, we highlight a few of their most important conclusions and expand briefly on their recommendations for increasing the impact of clinical neuroscience research.

Echoing other recent commentators [3–6], H&P remind us that, as neuroimaging samples have grown larger, providing more stable and precise estimates of brain-disease associations, it has become painfully clear that popular candidate neuromarkers of psychiatric disease and risk—amygdala activity and connectivity, striatal reactivity to reward, hippocampal volume, and so on—explain a statistically significant but small amount of disease-relevant information (e.g., risk, status, treatment response, course). Like common genetic markers, these associations are far too small to be useful in the clinic; they simply lack the degree of sensitivity and specificity required to benefit individual patients [5].

So, where do we go from here? H&P make several useful suggestions, emphasizing that “it is unlikely that we will achieve a breakthrough in our understanding of how the brain’s intricate functions give rise to psychiatric illness by investigating a handful of candidate biomarkers at a time.” They highlight the utility of developing more complex multivariate approaches and the necessity of very large datasets (see Box 1 and the Supplementary References for detailed recommendations).

Box 1. Recommendations and Best Practices for Clinical Neuroscience Research.

  1. Dimensional Phenotypes. DSM-5 diagnoses present barriers to the development of useful neuromarkers (marked heterogeneity and co-morbidity, poor inter-rater reliability) [12]. Dimensional phenotypes (e.g., anxiety) offer substantial increases in reliability and power and opportunities to understand transdiagnostic mechanisms.

  2. Big and Broad Data. Mega-cohorts encompassing diverse measures provide increased power and more precise and realistic estimates of brain-disease associations. They afford opportunities for assessing generalizability, controlling potential confounds (e.g., motion), and identifying relevant environments.

  3. Aggregate. Machine learning approaches, risk calculators, and related techniques that aggregate multiple sources of neural (e.g., activity, connectivity, and anatomy) and non-neural information are more likely to yield clinically useful tools than approaches that focus on isolated ‘hot spots’ of brain function or structure, or on the brain alone.

  4. Cross-Validate. Absent adequate cross-validation procedures, estimates of neuromarker performance are likely to be inflated. Separate cohorts for model training/discovery and testing/replication are the gold standard and serve as an important brake on premature application based on overly rosy preliminary results.

  5. Incremental Validity. Simple paper-and-pencil measures of personality and actuarial approaches that leverage readily available demographic/clinical information (e.g., smoking status) sometimes outperform more sophisticated and expensive neuromarkers. Absent head-to-head tests of ‘incremental validity,’ the clinical value of neuroimaging remains unknown.

  6. Reliability. In contrast to clinical measures, the test-retest reliability of most neuromarkers is unclear. Reliability is typically assessed in small, non-representative samples, precluding definitive conclusions. We urge investigators to assess and report reliability in larger, more diverse samples.
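The danger flagged in recommendation 4—that performance estimates are inflated without adequate cross-validation—can be made concrete with a brief simulation. This sketch is ours, not H&P's: the sample sizes, feature counts, and effect sizes below are invented purely for illustration. A many-feature linear model is fit in a discovery cohort and then evaluated both in-sample and in a held-out replication cohort; the in-sample estimate is typically far rosier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 subjects, 50 noisy "neuromarker" features,
# each carrying only a sliver of signal about a continuous symptom score.
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 0.1                     # a handful of weak true associations
y = X @ beta + rng.normal(size=n)  # mostly noise

# Separate discovery and replication cohorts (recommendation 4).
X_disc, y_disc = X[:100], y[:100]
X_rep,  y_rep  = X[100:], y[100:]

# Fit an ordinary least-squares model in the discovery cohort only.
coef, *_ = np.linalg.lstsq(X_disc, y_disc, rcond=None)

def r2(y_true, y_pred):
    """Proportion of variance explained."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_in  = r2(y_disc, X_disc @ coef)  # in-sample: inflated by overfitting
r2_out = r2(y_rep,  X_rep  @ coef)  # held-out: an honest estimate

print(f"in-sample R^2: {r2_in:.2f}")
print(f"held-out  R^2: {r2_out:.2f}")
```

With 50 predictors and only 100 discovery subjects, the in-sample fit capitalizes heavily on chance, while the held-out estimate collapses toward the modest true signal—exactly the "overly rosy preliminary results" that separate training and testing cohorts guard against.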

H&P also stress the need to go beyond data that is simply Big. Marshalling a range of evidence, they emphasize that particular behavioral, psychological, and neurobiological traits (e.g., amygdala hyper-reactivity) are typically neither good nor bad. Most are associated with a combination of costs and benefits, with the ultimate consequences for health and disease conditional on the larger environment (e.g., presence of danger, exposure to stress or adversity) and neural context (e.g., integrity of countervailing regulatory systems) [see also 7, 8]. Making sense of this complexity, H&P tell us, requires datasets that are Broad, encompassing measures of “brain structure and function as well as diverse clinical, demographic, behavioral, genetic phenotypes.” Doing so promises to accelerate the development of evidence-based risk calculators and stratified treatment strategies that respect the diversity of human neurobiology [9].

One point not addressed in detail by H&P is the relationship between biomarker development and mechanistic research. Consider the case of genomics. Second-wave genetics research demonstrates that the small-but-reliable associations uncovered by large-scale studies can pinpoint biological pathways with substantial ‘phenotypic’ consequences when manipulated with drugs or other direct perturbations. This raises the possibility that while small-but-reliable associations between neuromarkers and disease are not clinically useful, they could still prove helpful for discovering and prioritizing targets for mechanistic work or for assessing the degree to which discoveries made in animal models translate to diverse human phenotypes and contexts [10]. Such work is likely to be especially valuable when it goes beyond ‘the usual suspects’ and identifies previously unknown or underexplored neuromarkers that are amenable to mechanistic exploration.

With new tools for understanding biological variation comes an opportunity and a responsibility to use them to improve the lives of patients. Developing better biomarkers is important for a variety of reasons, from screening and diagnosis to treatment stratification and monitoring [3]. Developing neuromarkers that more seriously reckon with the complex interactions of human brains, contexts, and outcomes would accelerate the development of new therapeutics and the re-purposing of existing ones [11]. H&P remind us that this is a major challenge, one that will require substantial time and resources, new kinds of multi-disciplinary collaborations and training models, and a sober assessment of what particular kinds of neuromarkers really can and cannot do for patients and clinicians.

Supplementary Material

Supplement

Acknowledgments

We acknowledge financial support from the California National Primate Center; National Institutes of Health (DA040717 and MH107444); University of California, Davis; and University of Maryland, College Park. We thank B. Roberts for insightful discussions and L. Friedman for assistance. The authors declare no conflicts of interest.

Footnotes

Supplement: Supplementary Information; 1 pdf file containing an Annotated Bibliography

References

  1. Sullivan PF, et al. Psychiatric genomics: An update and an agenda. Am J Psychiatry. 2018;175:15–27. doi: 10.1176/appi.ajp.2017.17030283.
  2. Holmes AJ, Patrick LM. The myth of optimality in clinical neuroscience. Trends Cogn Sci. in press. doi: 10.1016/j.tics.2017.12.006.
  3. Woo CW, et al. Building better biomarkers: brain models in translational neuroimaging. Nat Neurosci. 2017;20:365–377. doi: 10.1038/nn.4478.
  4. Kapur S, et al. Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it? Mol Psychiatry. 2012;17:1174–1179. doi: 10.1038/mp.2012.105.
  5. Savitz JB, et al. Clinical application of brain imaging for the diagnosis of mood disorders: the current state of play. Mol Psychiatry. 2013;18:528–539. doi: 10.1038/mp.2013.25.
  6. Jollans L, Whelan R. The clinical added value of imaging: A perspective from outcome prediction. Biol Psychiatry Cogn Neurosci Neuroimaging. 2016;1:423–432. doi: 10.1016/j.bpsc.2016.04.005.
  7. Kendler KS, Halberstadt LJ. The road not taken: life experiences in monozygotic twin pairs discordant for major depression. Mol Psychiatry. 2013;18:975–984. doi: 10.1038/mp.2012.55.
  8. Kendler KS. The dappled nature of causes of psychiatric illness: replacing the organic-functional/hardware-software dichotomy with empirically based pluralism. Mol Psychiatry. 2012;17:377–388. doi: 10.1038/mp.2011.182.
  9. Paulus MP. Pragmatism instead of mechanism: A call for impactful biological psychiatry. JAMA Psychiatry. 2015;72:631–632. doi: 10.1001/jamapsychiatry.2015.0497.
  10. Fox AS, Shackman AJ. The central extended amygdala in fear and anxiety: Closing the gap between mechanistic and neuroimaging research. Neurosci Lett. in press. doi: 10.1016/j.neulet.2017.11.056.
  11. Wager TD, Woo CW. fMRI in analgesic drug discovery. Sci Transl Med. 2015;7:274fs6. doi: 10.1126/scitranslmed.3010342.
  12. Kotov R, et al. The hierarchical taxonomy of psychopathology (HiTOP): A dimensional alternative to traditional nosologies. J Abnorm Psychol. 2017;126:454–477. doi: 10.1037/abn0000258.
