Published in final edited form as: Trends Cogn Sci. 2017 Nov 25;22(2):111–123. doi: 10.1016/j.tics.2017.11.003

Predicting violent behavior: What can neuroscience add?

Russell A Poldrack 1, John Monahan 2, Peter B Imrey 3, Valerie Reyna 4, Marcus Raichle 5, David Faigman 6, Joshua W Buckholtz 7
PMCID: PMC5794654  NIHMSID: NIHMS922817  PMID: 29183655

Abstract

The ability to accurately predict violence and other forms of serious antisocial behavior would provide important societal benefits, and there is substantial enthusiasm for the potential predictive accuracy of neuroimaging techniques. We review the current status of violence prediction using actuarial and clinical methods, and assess the current state of neuroprediction. We then outline a number of questions that need to be addressed by future studies of neuroprediction if neuroimaging and other neuroscientific markers are to be successfully translated into public policy.

Keywords: neuroimaging, violence, crime, predictive modeling, machine learning

The utility of violence prediction

Each year, the United States loses nearly 3.2 trillion dollars to crime [1], with violent crime responsible for the majority of these costs [2]. This figure includes victim-specific losses such as opportunity costs and lost productivity, but the costs of treatment and incarceration for offenders, which are borne by all citizens, are no less staggering. Violence (and other forms of serious antisocial behavior) is not a normally distributed "trait"; rather, a relatively small subset of individuals is responsible for the vast majority of violent crime. The ability to prospectively identify those predisposed to commit violent criminal behavior would be of great benefit in guiding decisions regarding bail, sentencing, probation/parole, court-ordered treatment, and civil commitment. At the level of policy, valid measures of individualized risk for future violence would be immensely useful for targeting prevention and treatment-related spending to maximize its benefit [3].

While behavioral prediction has a long, and somewhat fraught, history in the realm of law and policy, recent advances in brain imaging have renewed interest in the potential to accurately predict violent behavior. In particular, new insights into the neurobiological architecture of violence and antisocial behavior [4–6] have generated substantial excitement about the potential utility of neuroscientific methods for predicting future violent behavior (what some have termed "neuroprediction") [7,8]. The purpose of this paper is to critically analyze the current state of neuroprediction in law, in order to guide future research and aid policy-makers as they consider whether and how to apply such research. To that end, we will first briefly summarize the history of behavioral and genetic prediction in law, and then review the current state of neuroprediction efforts. We then review the challenges inherent in any attempt to predict future behavior. We discuss several statistical and methodological hurdles to valid predictive inference, which are general to any domain of prediction, and outline best practices for future studies of neuroprediction.

Note that predictive models for binary outcomes (such as criminal recidivism) assess risks, i.e. probabilities within groups, rather than forecasting individual outcomes. Nonetheless, it is useful to administratively classify persons based on whether their group's risk exceeds a threshold. Policies based on such classification, even using mediocre models, can often virtually guarantee aggregate gains, such that a plurality of those affected will benefit from the policy.
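
To make this concrete, the following simulation (a minimal sketch in Python; the base rate, the risk distribution, and the amount of noise in the score are all assumptions chosen for illustration, not estimates from any cited study) shows how flagging the highest-scoring fraction of a population with even a mediocre risk score captures more subsequent offenses than flagging the same number of people at random:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Assumed population: heterogeneous individual risks (mean ~0.10) and
# realized binary outcomes drawn from those risks.
true_risk = rng.beta(1, 9, size=n)
offends = rng.binomial(1, true_risk)

# A deliberately mediocre risk score: the true risk plus substantial noise.
score = true_risk + rng.normal(scale=0.15, size=n)

# Threshold policy: flag the top 10% by score, versus a random 10%.
k = n // 10
flagged_by_score = np.argsort(-score)[:k]
flagged_at_random = rng.choice(n, size=k, replace=False)

# The score-based policy typically captures noticeably more of the
# subsequent offenses, even though the score is individually unreliable.
print("offenses among those flagged by score :", offends[flagged_by_score].sum())
print("offenses among those flagged at random:", offends[flagged_at_random].sum())
```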

Behavioral prediction of future violent behavior

Courts are tasked with both retrospective and prospective functions. Decisions about bail, probation, and sentence length all reflect the fact that courts have a duty to prevent future criminal acts in addition to determining guilt and assigning punishment for crimes that have already been committed. Using behavioral risk factors to predict who will commit crime has been an important component of American criminal law since the rise of probation and parole in the late 19th century, and a major concern of the mental health system since the shift from "need for treatment" to "dangerousness" as the legal standard for civil commitment in the 1970s [9]. A milestone in the scientific literature was the distinction made by Meehl [10] between "clinical" prediction (in which predictions are based on a clinician's subjective judgments about a particular case) and "actuarial" (or statistical) approaches to behavioral prediction (which rely upon statistical models to predict relevant outcomes; see Box 1 for a detailed discussion of the general concept of "prediction"). Across nearly every domain tested, actuarial approaches have been shown to outperform clinical prediction, to such an extent that Meehl [11] stated that "There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one." In the context of violent behavior, a representative review of behavioral prediction [12] concluded that "one area in which the statistical method is most clearly superior to the clinical approach is the prediction of violence." That is, in head-to-head contests between predictions that reflect the subjective judgments of human experts and predictions based on validated statistical models, the statistical models have nearly always won.

Box 1. What does “prediction” mean?

The concept of “prediction” is fraught with misconception, in part due to the multiple overlapping ways the term is used in scientific discourse. In common language, prediction generally implies the anticipation of a future outcome based on information available in the present. This is our primary meaning of the term in the present context, as it would be in most legal contexts. Such predictions are made without knowledge of the outcome, using information that temporally precedes that outcome. This usage is, however, different from that often used in the context of statistical modeling, where the term “predicted value” may be used to describe the output of a fitted regression model obtained for some specific values of the model’s input variables. This latter usage is indifferent to the relative timing of the input and outcome variables, and also to whether the specific data point being used to generate the “prediction” was also used in fitting the statistical model.

The fields of statistics and machine learning have developed sophisticated frameworks for fitting models in ways that allow quantification and optimization of the ability to accurately predict outcomes for new samples, which we take to be the sine qua non of "prediction" [73–76]. A fundamental motivation for more recent developments is an appreciation of the pervasiveness of "overfitting". Whenever a model is fit to a specific dataset, the fitted parameters reflect both the signal and the noise in that dataset; this occurs even for simple models such as strictly linear regression on a single predictor, but becomes increasingly problematic as the model becomes more complex. An implication of this is that models will nearly always fit the dataset used to develop them better than any other dataset [77], and thus that assessment of model goodness of fit using the same data that were used to estimate the model parameters will almost necessarily understate the model error when applied to new data. This phenomenon is known as "shrinkage" [77]. Although there are methods for estimating and correcting for shrinkage using standard parametric statistics, a more common approach arising from the field of machine learning is to perform "cross-validation", in which the model is iteratively fit using subsets of a dataset (known as "training sets") and then used to predict the values of the held-out observations ("test sets" or "validation sets"). These approaches allow one to estimate the model's performance on data that played no role in the fitting of the model, which reduces the optimistic bias in predictive accuracy that arises from overfitting when the same data are used to train and test the model. Although cross-validation can provide valid estimates of out-of-sample generalization, it is not a panacea, and can easily be misused in ways that still inflate one's estimates of predictive accuracy [78].
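
As a minimal illustration of overfitting and the optimism it produces (a sketch assuming NumPy and scikit-learn; the data are simulated noise with no true signal), compare the in-sample accuracy of a fitted classifier with its cross-validated accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated data: 60 "subjects", 200 noise features, and a binary outcome
# that is unrelated to the features by construction.
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)

model = LogisticRegression(max_iter=1000)

# In-sample fit: with more features than subjects, the model can exploit
# noise, so apparent accuracy is grossly inflated.
in_sample_accuracy = model.fit(X, y).score(X, y)

# 5-fold cross-validation: each fold is scored on held-out subjects,
# yielding a far less optimistic estimate (near chance here).
cv_accuracy = cross_val_score(model, X, y, cv=5).mean()

print(f"in-sample accuracy:       {in_sample_accuracy:.2f}")  # typically ~1.00
print(f"cross-validated accuracy: {cv_accuracy:.2f}")          # typically ~0.50
```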

In recent years, many behavioral instruments aimed at predicting crime have been published that are not adequately characterized by a simple clinical-actuarial dichotomy. Rather, the behavioral risk assessment process now exists on a continuum of rule-based structure, with completely unstructured (“clinical”) assessment occupying one pole of the continuum, completely structured (“actuarial”) assessment occupying the other pole, and several forms of partially-structured assessment lying in between [13]. While there is general agreement that structured behavioral approaches are more predictively valid than unstructured ones, how completely structured behavioral approaches compare with partially-structured behavioral approaches (often termed “structured professional judgment”) is unresolved. One meta-analysis of 28 studies found that the predictive validities of nine completely or partially-structured prediction instruments were essentially “interchangeable” [14]. One explanation for why different violence prediction instruments yield comparable accuracy is that they measure shared risk dimensions. In a memorable demonstration of this point[15], the items from several instruments were printed on paper strips, placed in a coffee can, shaken, and randomly redistributed to create new prediction instruments. The “coffee can instruments” predicted violent offenses as well as the originals from which the items came.

On the whole, this body of work shows that specific behavioral and trait dimensions can be used to predict future violence with some degree of accuracy. However, even the best predictive models are far from perfect [14]. Indeed, more recent meta-analytic work has quantified the positive and negative predictive value of a range of risk assessment devices [16], showing that these measures have a positive likelihood ratio (pLR: the ratio of the likelihood of a positive prediction among those who do subsequently offend to that among those who do not) ranging from about 3.5 to 8. For comparison to medical diagnosis, the pLR for right lower abdominal pain in appendicitis is 8.4 [17]. Moreover, despite its clear superiority to clinical prediction, actuarial prediction has encountered significant pushback, with objections to so-called "moneyball sentencing" based on behavioral risk factors (e.g., employment, education) made on constitutional, ethical, and theoretical grounds [9,18,19]. These challenges, in concert with the explosion of interest in cognitive neuroscience over the last decade, have led some to focus on the use of biologically-based variables as predictive markers. A substantial body of previous work has examined the use of genetic prediction (see Box 2), but here we focus on the use of direct measurements of brain activity through neuroimaging. For a more general discussion of the use of neuroscientific and genetic evidence in the courtroom, see [20].
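
For readers unfamiliar with likelihood ratios, the following worked example (all numbers are hypothetical, chosen only to fall within the ranges discussed above) shows how a pLR of this magnitude combines with a base rate to yield a post-test probability of offending:

```python
# Hypothetical operating characteristics for a risk instrument.
sensitivity = 0.70   # P(positive prediction | later offends)   -- assumed
specificity = 0.86   # P(negative prediction | never offends)   -- assumed
plr = sensitivity / (1 - specificity)   # = 5.0, within the 3.5-8 range cited above

# Bayes' rule in odds form: posterior odds = prior odds * pLR.
base_rate = 0.10                        # assumed prevalence of reoffending
prior_odds = base_rate / (1 - base_rate)
posterior_odds = prior_odds * plr
posterior_probability = posterior_odds / (1 + posterior_odds)

print(f"pLR = {plr:.1f}")
print(f"P(offends | positive prediction) = {posterior_probability:.2f}")  # ~0.36
```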

Box 2. Lessons from the Genetics of Antisocial Behavior and Violence.

Data from twin and family studies show that antisocial behavior generally, and violence specifically, is moderately heritable. Genetic factors appear to account for 40–60% of the population-level variance in broad-band antisocial phenotypes [79], and heritability is considerably higher (>80%) for subtypes encompassing both antisocial behavior and callous-unemotional traits [80,81]. A number of risk-associated genetic variants have been identified [82], but considerable attention has been focused on one specific polymorphism in the MAOA gene (encoding the enzyme monoamine oxidase A). MAOA first came to prominence in a series of family-based studies of severely violent men, who were found to possess a mutation that blocked production of this enzyme [83,84]. Complementary work subsequently demonstrated that genetic deletion or hypoexpression of Maoa markedly increased impulsive and aggressive behavior in mice [85–87].

While mutations with effects as large as those described above are rare, a common functional polymorphism exists in the MAOA gene that modulates expression of the MAOA enzyme in the brain [88]. Significant associations with the low-expressing allele (MAOA-L) of this variant have been reported in adult substance abusers [89–93], adolescents with behavioral problems [83,84,94], and individuals with high levels of antisocial traits [95–97]. These findings are largely consistent with a large body of work across species linking dysregulation in serotonin function during ontogeny to impulsive-aggressive behavior [98].

The research cited above has led to the introduction of genetic testing for MAOA variants in several recent murder cases in the U.S. and Europe, in which putative positive results were introduced as mitigating evidence at sentencing, sometimes successfully. This is concerning because numerous studies have failed to replicate the association between MAOA variants and antisocial/violent behavior [99–105], in line with field-wide concerns about the replicability of candidate gene approaches [106]. Moreover, even in studies reporting significant associations, the effect sizes tend to be small. A recent paper [107] is notable for being an exception to this general rule, concluding that 5–10% of all severe crime in Finland could be attributable to genetic variants in MAOA and another gene (CDH13). However, their use of an extreme groups approach in a very small sample suggests an overestimation of the true amount of variance explained by these two common variants.

Given that effect sizes for individual genetic variants are so low, possessing a "risk" allele provides little predictive information about whether a given individual is more or less likely to commit violence. Roughly 40% of the population carries the low-expressing MAOA allele (in individuals of European descent; allele frequencies vary by ancestry), yet well under 1% of the U.S. population is arrested each year for violent crime. In other words, most people who possess the MAOA risk allele never go on to commit acts of violence, suggesting that predictions of violence based on MAOA variant status would likely lead to a very high rate of false positives. Similar conclusions can be drawn for other previously proposed candidate genetic markers for criminal activity, such as the 5-HTTLPR variant [98].
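
A back-of-envelope calculation illustrates the base-rate problem (the annual offending rate used here is an assumption for the sketch, not a published estimate): even if every violent offender carried MAOA-L, the probability that any given carrier offends in a given year would remain tiny.

```python
# Illustrative bound on the predictive value of carrier status alone.
carrier_rate = 0.40    # approximate MAOA-L frequency (European ancestry)
offense_rate = 0.001   # assumed annual rate of violent offending (hypothetical)

# Upper bound via Bayes' rule: P(offends | carrier) <= P(offends) / P(carrier),
# attained only if every offender were a carrier.
ppv_upper_bound = offense_rate / carrier_rate

print(f"P(offends this year | carrier) <= {ppv_upper_bound:.4f}")        # 0.0025
print(f"share of flagged carriers who never offend >= {1 - ppv_upper_bound:.4f}")
```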

Enter: Neuroprediction

The advent of neuroimaging has marked a paradigm shift in understanding the biological basis of human behavior. In the last quarter century, neuroscientists have witnessed the explosive development of new tools for measuring brain function, structure, chemistry, and connectivity. The promise of opening the black box of the human mind and wresting that which is most personal (beliefs, motivations, intentions, and capacities) from our brains has also generated substantial excitement amongst the general public. Each new neuroscientific advance is greeted with a chorus of speculation about its application to the real world, and nowhere is the level of anticipation higher than in the courts, because law, more than almost any other profession, faces daily the challenge of rendering judgments based on the contents of that black box. Given lingering skepticism about actuarial prediction, the rapid pace of discovery in neuroscience, coupled with the assumption among non-specialists that biological variables are somehow more "real" than demographic or psychological ones, has generated a great deal of enthusiasm for the use of brain imaging measures to predict violence. Enthusiasm for neuroprediction is due, in part, to the presumed closer biological proximity of brain-based measures to the causal processes that produce violent behavior, which makes it reasonable to assume that measures of brain activity might support better predictions. Indeed, neuroimaging techniques have great potential, but there are a number of concerns about their application to complex real-world behaviors such as violence, which we outline in the following sections.

Whether courts will be receptive to such biological risk factors is uncertain [21]; see Box 3 for a further discussion of the use of scientific data in legal decision making about individual cases, known as the G2i (group to individual) problem. The United States Supreme Court recently overturned a death sentence because an expert witness had testified that race was "know[n] to predict future dangerousness." The Court stated that the case "is a disturbing departure from a basic premise of our criminal justice system: Our law punishes people for what they do, not who they are" (Buck v. Davis, 137 S. Ct. 759, 2017). Whether neuroscience markers will be considered attributes of "who a person is" in this sense is unclear.

Box 3. Making individual predictions from group data: The G2I problem.

The application of scientific research to legal settings raises a problem known as the “group to individual” (G2i) problem [108111], which has its roots in a key difference between the goals of science and the legal system. Science is focused on characterizing generalizable phenomena to establish mechanistic explanations that apply within definable population groups and hence are generalizable to other members of those populations (who may not yet have been observed). As a means to this end, most scientific work aims to describe observations about groups of individuals and/or collections of circumstances. By contrast, law is concerned with making concrete and definitive determinations about particular individuals and circumstances [112]. Thus, in science, individuals are generally incidental to the general insights they support, while in law the individual is paramount: group or population-level scientific data are only relevant to the extent that the data bolster or weaken the evidence provided in an individual case. Unfortunately, however, observations about groups only rarely apply universally to their individual members, such that group-level findings may provide only very weak support for individual determinations. The G2i problem is relevant for any scientific domain in which there is variability across individuals, including but not limited to neuroscientific measures.

Consider the following example. A neuroscientist uses fMRI to scan 100 participants who are instructed to either lie or tell the truth about a set of facts. Contrasting brain activity during lying with truth-telling reveals statistically significant activation in dorsolateral prefrontal cortex (DLPFC). This permits the valid group-level inference that lying is associated with DLPFC engagement. However, examination of each individual’s data reveals that while most subjects exhibited higher DLPFC activity during lying, some participants showed no difference and still others demonstrated lower DLPFC activity during lying compared to truth-telling. In other words, “heightened DLPFC activity accompanies lying” may be a valid group-level inference, but the application of this inference to any one individual invites serious and profoundly consequential risk of both false positives and false negatives [113]. Legal testimony in a hearing regarding admissibility of fMRI lie detection in a federal Medicare fraud case provides an example of the type of quandary that can arise. The scientist who was hired to administer fMRI lie detection in support of the defendant’s innocence testified that the test was valid evidence of the defendant’s general veracity, but refused to testify that the test confirmed the truthfulness of his answer to any specific question at issue in the case. For this and other reasons, the testimony was not admitted in evidence (US vs. Semrau, 07-10074 Ml/P). In this case, the “individual” was a specific item rather than an individual person, demonstrating the broad reach of the G2i problem.
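
A small simulation (hypothetical effect size and variability; assumes NumPy and SciPy) makes the G2i gap concrete: a group-level effect can be statistically robust even when a sizable minority of individuals shows the opposite pattern.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated lie-minus-truth DLPFC contrast for 100 participants
# (arbitrary units; a small positive group mean with large
# between-subject variability, as assumed for this illustration).
contrast = rng.normal(loc=0.3, scale=1.0, size=100)

t, p = stats.ttest_1samp(contrast, popmean=0.0)
opposite_share = np.mean(contrast < 0)

print(f"group-level effect: t = {t:.2f}, p = {p:.3g}")       # typically significant
print(f"individuals showing the opposite pattern: {opposite_share:.0%}")  # typically 30-40%
```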

The complex causal structure of real-world behavior

Whereas cognitive neuroscience has taught us a great deal about the neural bases of basic cognitive, emotional, and social functions, we still know almost nothing about how trait-like aspects of brain function, structure, chemistry or connectivity interact with social context and other dynamic environmental factors to determine real-world behaviors such as violence. What we can safely infer from recent work in other domains is that the causal structure of these behaviors will be highly complex and multiply determined. With regard to genetics, the last two decades of genome-wide association studies (GWAS) have shown that any particular common genetic variant (i.e. one appearing in at least 1% of the population) is unlikely to account for large effects on common phenotypes. For example, GWAS have been used to identify replicable associations between common genetic variants and behavioral phenotypes (such as educational attainment [22]) and mental illness (e.g. schizophrenia [23]). However, each of these individual associations accounts for a vanishingly small amount of variance in the phenotype (usually less than 1%); it is only through the aggregation of large numbers of variants that one can start to account for the heritability of these behaviors. The large effects estimated in earlier candidate-gene studies were likely due to the combination of small samples (which are only powered to find large effects) and publication bias [24].

By analogy to genetic data, it seems likely that any one measure of brain function, structure, chemistry, or connectivity will account for only a small amount of variance in violent behavior. Further, while the neurobiological basis of inter-individual variability is coming into increasing focus, we still know relatively little about how individual brains change over time (intra-individual variability) [25]. Likewise, the relevance of lab-based measures of self-control (to take one relevant example) for predicting individual variability in real-world behavior remains largely unclear. These latter two issues are especially germane to the problem of violence prediction because violent behavior typically results from the interaction of trait-like vulnerabilities in capacities related to self-control and emotion regulation [6] with time-varying state factors (e.g. stress level and sleep deprivation) and punctate eliciting events (e.g. provocation). It is difficult to reconcile the static nature of lab-based assessments that might be used for prediction with the dynamic nature of real-world violence. In addition, violence is a multidimensional construct, encompassing both reactive subtypes (reflecting poor inhibitory control and emotion regulation) and goal-directed subtypes (reflecting maladaptive action valuation) [26,27]. As these two facets of violence likely reflect distinct neural mechanisms [27], it would seem unlikely that a single measure (such as the commonly used go/no-go task) could assess risk for both. On balance, it would seem safe to conjecture that accurate neuroprediction, if possible, will require aggregation of neuroscientific data across multiple cognitive tasks and multiple measurement techniques.

Neuroprediction: Where do we stand?

Insights into the neurobiology of violence can be gleaned from an emerging body of brain imaging work in clinical populations characterized by high levels of violent behavior, especially people with psychopathy. Psychopathy is a particularly useful model for understanding the neurobiology of violence because it encompasses both affective-interpersonal symptoms thought to underlie goal-directed violence and impulsive-antisocial symptoms linked to reactive violence. In addition, it appears to be the psychological trait most predictive of violent behavior[28]. It may thus serve as an effective intermediate phenotype (or “endophenotype”[29]) for the study of violent behavior.

Structural and functional MRI results converge to suggest that maladaptive behavior in psychopathy, including violence, may arise from dysfunction within cortico-limbic and cortico-striatal circuitry involved in affective arousal, emotion regulation, and value-based decision-making. For instance, psychopathic individuals exhibit decreased amygdala and vmPFC gray matter volume, as well as lower vmPFC cortical thickness [30,31]. Likewise, psychopaths show reduced recruitment of amygdala and vmPFC during fear conditioning and moral decision making [32–35], blunted amygdala responsiveness during affective perspective-taking [36,37], and weaker vmPFC engagement in response to empathogenic [38] and facial emotion stimuli [39]. Reduced functional and structural connectivity between amygdala and vmPFC has also been reported in psychopathy [36]. There is some evidence that the observed relationships between psychopathy, task-related brain activity and ventromedial prefrontal cortex–amygdala connectivity are driven by the affective-interpersonal features of the disorder [35,39–42]. On the whole, this work is consistent with the notion that the socio-emotional deficits of psychopaths may arise from cortico-limbic circuit dysfunction [43,44].

While cortico-limbic dysfunction appears to play an especially prominent role in the affective-interpersonal dimension of psychopathy, recent work highlights the importance of cortico-striatal dysfunction for impulsive-antisocial symptoms in the disorder. Amphetamine-induced dopamine release and reward anticipation-related activity within the nucleus accumbens (NAcc) have been shown to be elevated in individuals with high levels of impulsive-antisocial traits [41,45–47]. Likewise, several groups have found evidence for increased striatal gray matter volume in psychopathic offenders, particularly those with a history of impulsive violence [48,49]. Notably, impulsive-antisocial behavior has also been linked to prefrontal dysfunction during tasks of inhibitory control [27]. The combination of diminished prefrontal activity and heightened striatal responsiveness has led some to suggest that social norm transgressions, such as violence, could arise from impaired prefrontal modulation of striatal value representations [6]. The studies above comprise a potentially important empirical foundation for considering the neuroprediction of violence (but see [50] for an important discussion of the differences between causation and prediction). It should be noted, however, that most of these studies have not yet been replicated and, due to the difficulties associated with enrolling criminal offenders, involved relatively small sample sizes. Recent work by Kiehl and colleagues using mobile fMRI in comparatively large samples of currently incarcerated offenders [51] represents a notable exception, and offers a useful example of how present limitations in this area might be overcome (see Box 4 for further discussion of this work).

Box 4. Neuroprediction of future criminal behavior.

One study [51] has directly examined the predictive utility of neuroimaging data for future criminal acts. Ninety-six adult offenders (incarcerated for either violent or nonviolent crimes) were tested prior to release on a go/no-go task using fMRI, and the relation between task-related activity in the anterior cingulate cortex (ACC) and felony rearrests over up to four years (median 34.5 months) of follow-up was examined. The estimated probabilities of rearrest for those with ACC responses below and above the sample median were 60% and 46%, respectively. Rearrests for violent crimes were too rare in the sample to support a separate analysis, so the study’s results have only limited direct bearing on violence prediction, but the report raises questions that will similarly apply to future studies of neuroprediction.

The study is an impressive logistical accomplishment. A mobile MRI scanning facility was used on site at multiple correctional facilities, with well-established psychological testing and fMRI image-acquisition protocols. Target and control ROIs were prespecified, with seed coordinates chosen on the basis of peak BOLD activity in a comparably-sized non-offender sample. While rearrest outcomes did not evidently influence data extraction or statistical modeling choices, this paper, and all others of a similar nature, would benefit from explicit statements that the analyses were blinded to the outcomes of interest, and from formal pre-registration of the analysis plans.

A further analysis reported a survival model, but did not directly quantify prediction accuracy. This was addressed in a follow-up paper [114] using a receiver operating characteristic (ROC) analysis [115]. This paper also used bootstrap resampling to estimate and correct for shrinkage (overoptimism bias), which reduced estimated ROC areas from 68% and 76% (for all arrests and nonviolent arrests, respectively) to the more conservative values of 63% and 69%, values described as "modest." There are, however, two limitations. The bootstrap correction of overoptimism bias relies primarily on in-sample prediction, and thus on average tends to somewhat undercorrect for bias (compared to cross-validation methods). More importantly, the approach was applied only to the full model, and thus does not provide overoptimism-corrected estimates relevant to the primary question of interest, which is the incremental contribution of ACC to predictive accuracy. Repeated cross-validation of the incremental ACC contributions to predictive accuracy would thus have increased the utility of this report. More generally, inclusion of overoptimism bias-corrected estimates of the incremental contributions of neuroimaging or other neuroscience-based markers would be similarly important in future reports of this nature.
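
To illustrate the kind of analysis called for here (a sketch with simulated data standing in for the real covariates, ACC measure, and rearrest outcomes; it does not reproduce the published models), repeated cross-validation can be used to estimate the incremental AUC contributed by a neural measure over a baseline model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
n = 96  # same order of sample size as the study; all values are simulated

# Hypothetical predictors: baseline covariates (standing in for age, prior
# record, etc.) and one neural measure (standing in for the ACC response),
# plus a simulated rearrest outcome.
baseline = rng.normal(size=(n, 3))
neural = rng.normal(size=(n, 1))
logit = 0.8 * baseline[:, 0] + 0.3 * neural[:, 0]
rearrest = rng.binomial(1, 1 / (1 + np.exp(-logit)))

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
model = LogisticRegression(max_iter=1000)

def cv_auc(X):
    """Cross-validated AUC, averaged over repeated stratified folds."""
    return cross_val_score(model, X, rearrest, cv=cv, scoring="roc_auc").mean()

auc_baseline = cv_auc(baseline)
auc_combined = cv_auc(np.hstack([baseline, neural]))

# The reportable quantity is the cross-validated increment, not the
# apparent fit of the combined model on the data used to build it.
print(f"baseline-only AUC:     {auc_baseline:.3f}")
print(f"baseline + neural AUC: {auc_combined:.3f}")
print(f"incremental AUC:       {auc_combined - auc_baseline:+.3f}")
```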

Finally, wide scientific acceptance of any neuroimaging predictor of violence will require true validation by a combination of replications across a set of separate cohorts varying widely in sociodemographic, psychopathological, and criminological characteristics.

Finally, the body of work highlighted above does not speak to the brain circuit dysfunction underlying violence per se. Rather, these studies focus largely on psychopathy, which is certainly associated with violence but not to the degree that it could be considered a proxy. As one example, while amygdala dysfunction in psychopathy has been replicated by multiple groups, we know of no work to date that examines the specificity of amygdala dysfunction for violent behavior. Without such work, it is impossible to know whether amygdala dysfunction in psychopathy is responsible for the higher rates of violence in psychopathic individuals, versus, for instance, their need for stimulation or boredom proneness. Research in this area would be considerably advanced by a stronger focus on symptom specificity, with the goal of mapping brain circuit dysfunction to specific sets of behaviors (e.g. violence and aggression) rather than categorical disorders [52].

The well-established association between psychopathy and violent behavior, in concert with our advancing understanding of the neurobiology of psychopathy, hints that it might be possible to predict future criminal behavior from neuroimaging data. However well motivated such an endeavor may be, we note that any particular neuroimaging signature of psychopathy need not, in principle, predict violence any better – or even as well as – related clinical or behavioral measures. Thus, studies must address several open questions before such candidate neuroimaging signatures for violent behavior can be seriously entertained (see Outstanding Questions). There have been remarkably few studies that have directly examined the relationship between neuroscientific variables and violence. One notable example is discussed in detail in Box 4, where we examine the application of the foregoing principles to the landmark study by Aharoni and colleagues [51].

Outstanding Questions Box.

  • Is the protocol for eliciting the signature from a future individual explicit enough and robust enough for effective replication in a range of other settings?

  • Were the components of the protocol for each subject (regions of interest, masks, normalizations, measures of hemodynamic response and BOLD contrast) completely uninfluenced by, and preferably blinded to, that subject’s criminal behavior outcomes?

  • Are the statistical measures used to report prediction benefit supported by methodological studies in the prediction analysis literature, and are these measures reported with error ranges accounting for measurement and sampling variability?

  • Do reported measures of predictive benefit include measures of incremental predictive value, relative to prediction from other non-neuroimaging measures (which are likely to be less onerous and/or expensive to ascertain)?

  • Were these measures and their error ranges obtained from a) direct estimation on the entire derivation sample, b) bootstrap validation or c) cross-validation within the derivation sample, d) a test sample withheld from the model-fitting process, and/or e) one or more entirely external validation samples?

  • Does the sample have appropriate demographic representation and does it access the full distribution of relevant behaviors? For example, if an incarcerated population is used, does it sample the full distribution of criminal phenotypes (e.g. never violent, occasionally violent, frequently violent)? If stratified sampling is used, is the variable employed for stratifying psychometrically valid and amenable to independent corroboration (e.g. by department of correction records, charging documents, etc.)? If a non-incarcerated comparison group is used for inference, is this group matched along relevant demographics (e.g. substance use, SES, race, education, reading ability)?

Neuroprediction in other domains

Beyond the realm of violence, there are several examples of studies that have effectively used neuroimaging for prediction of behavioral outcomes, an area reviewed recently [53]. One particular domain that has seen recent success is the prediction of treatment outcomes in psychiatric disorders. While many studies have examined associations between neuroimaging signals and treatment outcomes, only recently have appropriate predictive modeling tools been applied to properly assess predictive validity. One study [54] examined the relation between brain responses to angry versus neutral faces and response to cognitive-behavioral therapy in individuals with social anxiety disorder. A model including both the brain responses and variables reflecting drug treatment group and disease severity was able to account for 41% of the variance in symptom change, versus 12% for a model that only included the severity and drug variables. In another example [55], investigators examined whether the interaction between early life stress and amygdala response to fearful or happy faces could be used to predict the response to antidepressant medications in depressed individuals. Using cross-validation, they showed that the prediction model using these variables was substantially better than one that did not include fMRI variables; the cross-validated sensitivity of the best model was 0.84, and the specificity was 0.69, suggesting that the technique could potentially provide useful information to physicians. Each of these studies was relatively small and will thus require further replication and validation, but they suggest that there is potential for neuroimaging in the prediction of behavioral outcomes and treatment response. We would also propose that the principles and best practices raised in the present paper can serve as guideposts for neuroprediction regardless of the specific domain, and thus could also aid in assessing these studies of psychiatric neuroprediction.

Challenges for neuroprediction

There are a number of potential challenges that will make the practical application of neuroprediction difficult, particularly with regard to the prediction of future violent acts.

Foremost are the selection effects that are likely to occur due to the specific requirements of the imaging process. Participation in an fMRI study requires a relatively compliant individual who is willing to enter the scanner and behave as instructed. Thus, oppositional or defiant individuals are unlikely to be successfully imaged, and individuals with a chaotic lifestyle are likely to have trouble keeping their appointment for imaging. Even for compliant individuals, there are selection effects that may occur in relation to impulsivity. Highly impulsive individuals are less able to remain still in an MRI scanner for long periods of time [56], such that even if they are successfully imaged, their data may be corrupted or have lower signal-to-noise in comparison to non-impulsive individuals. In practice, it may be very difficult to disentangle reduced task activation from increased noise due to head motion.

A second challenge comes from the potential for intentional countermeasures. Once predictive models are developed, it is likely that strategies could be developed to help individuals appear less dangerous. Some obvious strategies include intentional head movement or breath holding, both of which would induce signal changes substantially larger than the small (1–5%) changes induced by task activation. More subtle evasive strategies could rely upon cognitive subversion. For example, a subject in the go/no-go task might be coached to try to withhold his or her responses on go trials; such subterfuge could potentially be detected (e.g. through analysis of the behavioral data) but nonetheless would leave neuroprediction methods open to questions of reliability and sensitivity, and detecting these countermeasures is likely to be more difficult than detecting countermeasures on psychological tests. Recent work in lie detection [57] and memory detection [58] has shown that such cognitive countermeasures can substantially reduce the accuracy of fMRI classification of mental states.

A third challenge relates to the degree to which observed neuroscientific predictors may be confounded with other variables that are actually supporting predictive validity. An outstanding example of this was seen in the ADHD-200 competition to generate diagnostic classifiers for attention deficit/hyperactivity disorder based on neuroimaging data [59]. An initial neuroimaging dataset with 776 individuals (491 healthy controls and 285 individuals diagnosed with ADHD) was released for use in model development; later, an unlabeled test dataset comprising 197 individuals was released, and competitors were asked to submit their predictions for each individual (ADHD vs. control, and further diagnosis of ADHD subtypes). Many of the competitors were able to generate predictive models with above-chance accuracy on the basis of the neuroimaging data, but the best-performing model did not actually use the imaging data at all: it simply used the demographic data (age, sex, handedness, and IQ), which allowed accurate prediction of ADHD status because of sex and IQ differences between the subject groups [60]. Without samples large enough to disentangle such potential confounds, the interpretation of neuroimaging results in terms of mechanism will be very challenging.
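
One practical safeguard suggested by this episode is to evaluate a demographics-only baseline on the same held-out data before attributing predictive accuracy to imaging features. The sketch below (entirely synthetic data, loosely inspired by the ADHD-200 lesson; no real subject data are used) constructs a case in which imaging features predict diagnosis above chance only because they are correlated with sex and IQ, so the demographics-only model does at least as well:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 400

# Synthetic cohort: diagnosis depends only on sex and IQ.
sex = rng.integers(0, 2, size=n)
iq = rng.normal(100, 15, size=n)
dx = rng.binomial(1, 1 / (1 + np.exp(-(1.0 * sex - 0.04 * (iq - 100)))))

demographics = np.column_stack([sex, iq])

# "Imaging" features: two noisy reflections of the demographics plus
# twenty pure-noise features; they carry no signal beyond the confound.
imaging = np.hstack([
    (sex + rng.normal(scale=1.0, size=n)).reshape(-1, 1),
    (iq + rng.normal(scale=15.0, size=n)).reshape(-1, 1),
    rng.normal(size=(n, 20)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

auc_imaging = cross_val_score(model, imaging, dx, cv=cv, scoring="roc_auc").mean()
auc_demographics = cross_val_score(model, demographics, dx, cv=cv, scoring="roc_auc").mean()

# The imaging model is above chance, but the cheap demographics-only
# baseline typically matches or beats it.
print(f"imaging-based AUC:     {auc_imaging:.2f}")
print(f"demographics-only AUC: {auc_demographics:.2f}")
```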

A final challenge comes in assessing the generalizability of reported predictive accuracy to new samples. There is increasing concern about the degree to which results reported in the literature may overestimate the true sizes of reported effects [24]. In comparison to behavioral models, neuroimaging data have much higher dimensionality and much greater analytic flexibility [cf. 61], and it is known that this flexibility can grossly inflate Type I error rates [62,63]. There is also growing concern that the use of machine learning methods with small samples can result in highly inflated predictive accuracies [64]. Without an explicit a priori analysis plan, it is not possible to assess the degree to which any particular result may reflect data-driven analysis choices (either intentional or unintentional). One solution to this problem is to encourage replication of any particular result, as is common in genetic association studies [65] and is an emerging practice in psychology. However, for rare and hard-to-acquire samples such as the one collected by Aharoni et al. [51], this may not be practical. A useful alternative in such a situation would be pre-study registration for any research study that is meant to influence public policy, similar to the approach currently used for clinical trials. While not a cure-all, a greater emphasis on external validity and reporting of out-of-sample prediction measures would help to improve the robustness of published study results.

Best practices for neuroprediction

It is certain that future research will continue to assess the ability of neuroscience methods to predict violent behavior, and we hope that such research will ultimately prove successful, given the pressing need for more effective prediction of future violence. Here we outline a set of principles that will help maximize the effectiveness and robustness of future studies. We would also note that most of these principles are not unique to neuroprediction, but should apply to any new predictive method.

Pre-registration

The credibility of clinical trials in medicine has been greatly enhanced by the requirement for registration of study designs, hypotheses, and outcome measures prior to undertaking a study. In particular, the natural experiment occasioned by increased requirements for clinical trial registration in 2000 has shown that positive outcomes are more likely and estimated treatment effects are significantly larger for unregistered than for registered studies [66,67]. This is particularly important for fMRI given the immense degree of analytic flexibility in fMRI analysis [68]. There is a growing consensus (shared by most of the present authors) that pre-study registration of designs, hypotheses, analysis plans, and outcome measures could greatly increase both the reliability and acceptance of results from neuroimaging studies.

Validation

It is essential that all predictive outcomes be validated using an independent sample. The incorporation of repeated cross-validation and/or bootstrap validation in the process of model selection, and in reporting of model performance using a discovery cohort, can certainly reduce overoptimism bias and naïve applications based on early results (see Box 1). But while these methods can be useful for assessing the best model for a particular dataset, they also can be biased by preprocessing and multiple, iterative model reassessments. The gold standard should be a completely separate validation dataset that is kept aside until final testing of the hypotheses, using the models developed from the training dataset. Another alternative is for different research groups to separately test a specific hypothesis in independent studies, which are then reported in a single manuscript. This proposal is inspired by the now-standard requirement for replication in genetic association studies[69], which has motivated consortia to collaborate on papers that include replication across multiple samples. Yet another provocative alternative is the use of “blind analysis”[70], in which the researcher analyzing the data is blinded to some aspect of the data (e.g. through shuffling of variable labels).

It is also essential that neuropredictive methods be compared against the state of the art in behavioral prediction methods. It is common in neuroimaging studies to find that the predictive accuracy of imaging data is well above chance, but that the marginal improvement of prediction for neuroimaging compared to behavioral prediction is minuscule. Given the substantial added expense of neuroimaging compared to actuarial prediction, it is important to establish that neuropredictive methods improve prediction sufficiently to overcome the relative cost. This is, of course, the same issue as commonly occurs in medical science, where disease biomarkers considered promising when studied in isolation are then found to be redundant with less expensive predictors in routine workups. One counterpoint to this maxim is when neuroimaging provides greater mechanistic insights into the nature of prediction; for example, it could be the case that neuroimaging and behavioral measures are equally predictive of behavior, but the neuroimaging data provide additional guidance regarding the most effective therapeutic means for preventing further violence in each individual.

Appropriate norms

The development of norms for actuarial methods of violence prediction has required very large sample sizes in order to ensure that the predictions are accurate across a wide range of demographics. For example, recent studies that have validated the Oxford Risk of Recidivism tool (OxRec) [71] and the Oxford Mental Illness and Violence tool (OxMIV) [72] had sample sizes of 47,326 and 75,158, respectively. For neuroimaging prediction to be equally reliable, we will need norming datasets large enough to provide accurate predictions for individuals who vary in many different ways. Without such norms, the criteria for prediction will vary due to differences in the sociodemographic and clinical compositions of the samples from which they are derived, potentially increasing controversy in the interpretation and acceptance of individual risk assessments.

Concluding Remarks

The development of accurate methods to predict future violent behavior using behavioral, genetic, and/or neuroscientific data could have a significant impact on the legal system, especially on sentencing as well as prevention and treatment. Deeper and more mechanistic understanding of violent behavior—with objective techniques—has the potential to reduce suffering of victims, decrease the enormous economic burdens of crime, and minimize foregone futures of young people whose life trajectories could have been altered but for their involvement in impulsive crimes. Neuroprediction offers the potential to identify causal mechanisms that distinguish the callous psychopath from the neurologically immature or dysfunctional individual who might benefit from treatment or preventive measures. Despite the potential, however, current techniques fall far short of this ideal of objective mechanistic prediction. As we have discussed, the limited studies that have been published do not seem to approach what would be required to make definitive judgments in a legal context, or even to meet the legal standards for admissibility of expert testimony (see Outstanding Questions). As research accumulates, the uncertainty regarding accuracy of individual predictions may diminish, but the societal impact of research on neuroprediction will depend on future commitments to the application of rigorous methodology. Whether neuroprediction will ever reach its hypothetical potential, transcending the role of circumstances in human behavior to warrant serious impact in legal settings, remains an open question.

Trends Box.

  • Violent behavior is a costly large-scale societal problem

  • There is growing interest in using neuroscience data to assess risk for future violent behavior, but the utility of neuroscience for violence risk assessment remains to be established

  • We review what is currently known about the underlying neurobiological mechanisms of violence, and evaluate recent neuroprediction efforts.

  • Finally, we outline a set of practices for enhancing the validity and reliability of future risk assessment based on neuroscientific measures.

Acknowledgments

Preparation of this article was supported in part by the National Institutes of Health (National Institute of Nursing Research) under award number R01NR014368-01 and in part by a grant from the John D. and Catherine T. MacArthur Foundation to Vanderbilt University. Its contents reflect the views of the authors, and do not necessarily represent the official views of either the John D. and Catherine T. MacArthur Foundation or the MacArthur Foundation Research Network on Law and Neuroscience (www.lawneuro.org). Thanks to Sadev Parikh for helpful comments on an earlier draft.


References

  • 1.Anderson DA. The cost of crime. Foundations and Trends® in Microeconomics. 2012;7:209–265. [Google Scholar]
  • 2.McCollister KE, et al. The cost of crime to society: new crime-specific estimates for policy and program evaluation. Drug Alcohol Depend. 2010;108:98–109. doi: 10.1016/j.drugalcdep.2009.12.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Brooks-Crozier J. The Nature and Nurture of Violence: Early Intervention Services for the Families of MAOA-Low Children as a Means to Reduce Violent Crime and the Costs of Violent Crime. Conn Law Rev. 2011;44:531. [Google Scholar]
  • 4.Blair RJR. The neurobiology of psychopathic traits in youths. Nat Rev Neurosci. 2013;14:786–799. doi: 10.1038/nrn3577. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Viding E, et al. Psychopathy. Curr Biol. 2014;24:R871–4. doi: 10.1016/j.cub.2014.06.055. [DOI] [PubMed] [Google Scholar]
  • 6.Buckholtz JW. Social norms, self-control, and the value of antisocial behavior. Current Opinion in Behavioral Sciences. 2015;3:122–129. [Google Scholar]
  • 7.Nadelhoffer T, et al. Neuroprediction, Violence, and the Law: Setting the Stage. Neuroethics. 2012;5:67–99. doi: 10.1007/s12152-010-9095-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Nadelhoffer T, Sinnott-Armstrong W. Neurolaw and neuroprediction: Potential promises and perils. Philosophy Compass. 2012;7:631–642. [Google Scholar]
  • 9.Monahan J, Skeem JL. Risk Assessment in Criminal Sentencing. Annu Rev Clin Psychol. 2016;12:489–513. doi: 10.1146/annurev-clinpsy-021815-092945. [DOI] [PubMed] [Google Scholar]
  • 10.Meehl PE. Clinical vs. statistical prediction. Minneapolis: University of Minnesota Press; 1954. [Google Scholar]
  • 11.Meehl PE. Causes and effects of my disturbing little book. J Pers Assess. 1986;50:370–375. doi: 10.1207/s15327752jpa5003_6. [DOI] [PubMed] [Google Scholar]
  • 12.Aegisdottir S. Should I Pack My Umbrella? Clinical Versus Statistical Prediction of Mental Health Decisions. Couns Psychol. 2006;34:410–419. [Google Scholar]
  • 13.Skeem JL, Monahan J. Current directions in violence risk assessment. Curr Dir Psychol Sci. 2011;20:38–42. [Google Scholar]
  • 14.Yang M, et al. The efficacy of violence prediction: a meta-analytic comparison of nine risk assessment tools. Psychol Bull. 2010;136:740–767. doi: 10.1037/a0020473. [DOI] [PubMed] [Google Scholar]
  • 15.Kroner DG, et al. A coffee can, factor analysis, and prediction of antisocial behavior: the structure of criminal risk. Int J Law Psychiatry. 2005;28:360–374. doi: 10.1016/j.ijlp.2004.01.011. [DOI] [PubMed] [Google Scholar]
  • 16.Singh JP, et al. A comparative study of violence risk assessment tools: a systematic review and metaregression analysis of 68 studies involving 25,980 participants. Clin Psychol Rev. 2011;31:499–513. doi: 10.1016/j.cpr.2010.11.009. [DOI] [PubMed] [Google Scholar]
  • 17.Petroianu A. Diagnosis of acute appendicitis. Int J Surg. 2012;10:115–119. doi: 10.1016/j.ijsu.2012.02.006. [DOI] [PubMed] [Google Scholar]
  • 18.Hamilton M. Risk-Needs Assessment: Constitutional and Ethical Challenges. 2015. [DOI] [Google Scholar]
  • 19.Starr SB. Evidence-based sentencing and the scientific rationalization of discrimination. Stanford Law Rev. 2014;66:803. [Google Scholar]
  • 20.Farahany NA. Neuroscience and behavioral genetics in US criminal law: an empirical analysis. J Law Biosci. 2015;2:485–509. doi: 10.1093/jlb/lsv059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Monahan J. Bioprediction, Biomarkers, and Bad Behavior. Oxford University Press; 2013. The Inclusion of Biological Risk Factors in Violence Risk Assessments. [Google Scholar]
  • 22.Rietveld CA, et al. GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science. 2013;340:1467–1471. doi: 10.1126/science.1235488. [DOI] [PMC free article] [PubMed] [Google Scholar]
