G3: Genes | Genomes | Genetics. 2019 Sep 11;9(11):3727–3741. doi: 10.1534/g3.119.400598

Pitfalls and Remedies for Cross Validation with Multi-trait Genomic Prediction Methods

Daniel Runcie, Hao Cheng
PMCID: PMC6829121  PMID: 31511297

Abstract

Incorporating measurements on correlated traits into genomic prediction models can increase prediction accuracy and selection gain. However, multi-trait genomic prediction models are complex and prone to overfitting which may result in a loss of prediction accuracy relative to single-trait genomic prediction. Cross-validation is considered the gold standard method for selecting and tuning models for genomic prediction in both plant and animal breeding. When used appropriately, cross-validation gives an accurate estimate of the prediction accuracy of a genomic prediction model, and can effectively choose among disparate models based on their expected performance in real data. However, we show that a naive cross-validation strategy applied to the multi-trait prediction problem can be severely biased and lead to sub-optimal choices between single and multi-trait models when secondary traits are used to aid in the prediction of focal traits and these secondary traits are measured on the individuals to be tested. We use simulations to demonstrate the extent of the problem and propose three partial solutions: 1) a parametric solution from selection index theory, 2) a semi-parametric method for correcting the cross-validation estimates of prediction accuracy, and 3) a fully non-parametric method which we call CV2*: validating model predictions against focal trait measurements from genetically related individuals. The current excitement over high-throughput phenotyping suggests that more comprehensive phenotype measurements will be useful for accelerating breeding programs. Using an appropriate cross-validation strategy should more reliably determine if and when combining information across multiple traits is useful.

Keywords: cross validation, Genomic Prediction, linear mixed model, multi-trait, GenPred, Shared Data Resources


Genomic Selection (GS) aims to increase the speed and accuracy of selection in breeding programs by predicting the genetic worth of candidate individuals or lines earlier in the selection process, or for individuals that cannot be directly phenotyped (Meuwissen et al. 2001; Hayes et al. 2009; Crossa et al. 2017). Genomic selection works by training statistical or Machine Learning models on a set of completely phenotyped and genotyped individuals, and then using the trained model to predict the genetic worth of unmeasured individuals. If the predictions are reasonably accurate, selection intensity can be increased either because the population size of candidate individuals is larger or their true genetic worth is estimated more accurately.

Predictions of genetic values are usually based only on the genotypes or pedigrees of the new individuals. However predictions can in some cases be improved by including measurements of “secondary” traits that may not be of direct interest but are easier or faster to measure (Thompson and Meyer 1986; Pszczola et al. 2013; Lado et al. 2018). This is one goal of multi-trait genomic prediction. Multi-trait prediction is most useful for increasing the accuracy of selection on a single focal trait when that trait has low heritability, the “secondary” traits have high heritability, and the genetic and non-genetic correlations between the traits are large and opposing (Thompson and Meyer 1986; Jia and Jannink 2012; Cheng et al. 2018). With the advent of cheap high-throughput phenotyping, there is great interest in using measurements of early-life or easily accessible traits to improve prediction of later-life or more expensive traits, and multi-trait prediction models are attractive methods for leveraging this information (Pszczola et al. 2013; Rutkoski et al. 2016; Fernandes et al. 2017; Lado et al. 2018).

A large number of genomic prediction methods are available, and the best model varies across systems and traits (Heslot et al. 2012; de Los Campos et al. 2013). Due to their complexity and often high-dimensional nature, genomic prediction methods are prone to overfitting and require regularization to perform well on new data. Therefore, comparing models based on their ability to fit existing data (e.g., with $R^2$) is unreliable; every candidate model could explain 100% of the variation in a typical-size dataset.

Instead, prediction models are generally compared by cross-validation (Meuwissen et al. 2001; Utz et al. 2000; Gianola and Schon 2016). The basic idea of cross-validation is to separate the model fitting and tuning process from the model evaluation process by using separate datasets for each (Hastie et al. 2009). This penalizes models that fit too closely to one data set at the expense of generalization. In this way, cross-validation is meant to accurately simulate the real-world usage of the model: predicting the genetic values of un-phenotyped individuals; i.e., those not available during the model fitting process itself. Rather than requiring new data per se, cross-validation works by splitting an existing dataset into non-overlapping “training” and “testing” partitions, fitting the candidate model to the former, and then evaluating it on its accuracy at predicting the latter. Common measures of accuracy include Pearson’s ρ or the square root of the average squared error (RMSE) (Daetwyler et al. 2013). This process of splitting, training, and predicting is typically repeated several times on the same dataset to get a combined or averaged measure of accuracy across different random partitions of the data.
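The logic of one cross-validation replicate is simple to express in code. The following is a minimal sketch in R, where `fit_fun` and `pred_fun` are hypothetical placeholders for any candidate model's training and prediction routines; in practice this function would be called repeatedly over random partitions and the scores averaged.

```r
# One cross-validation replicate: split, train, predict, score.
# fit_fun and pred_fun are hypothetical placeholders for a candidate model.
cv_accuracy <- function(y, fit_fun, pred_fun, test_frac = 0.1) {
  test  <- sample(length(y), round(test_frac * length(y)))  # random split
  model <- fit_fun(y[-test])                 # train on the training partition
  y_hat <- pred_fun(model, test)             # predict the held-out records
  c(pearson = cor(y_hat, y[test]),           # two common accuracy measures
    rmse    = sqrt(mean((y_hat - y[test])^2)))
}
```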

Estimates of model accuracy by cross-validation are not perfect (Hastie et al. 2009). They are subject to sampling error, as is any other statistic. They are also typically downwardly biased because smaller training datasets are used during cross-validation than in the actual application of a model. However, in typical cases this downward bias is the same for competing models and thus does not impact model choice (Hothorn et al. 2005).

However, cross-validation can give upwardly biased estimates of model accuracy when misused due to various forms of “data-leakage” between the training and testing datasets, leading to overly optimistic estimates of model performance (Kaufman et al. 2012). Several potential mistakes in cross-validation experiments are well known:

  • Biased testing data selection. The individuals in the model testing partitions should have the same distribution of genetic (and environmental) relatedness to the training population as individuals in the remaining target population (Amer and Banos 2010; Daetwyler et al. 2013). For example, if siblings or clones are present in the data, they should not be split between testing and training partitions unless siblings or clones of individuals in the training partition are also at the same frequency in the target population. Similarly, if the goal is to predict into a diverse breeding population, the cross-validation should not be performed only within one F2 mapping population.

  • Overlap between the testing and training datasets. The observations used as testing data should be kept separate from the training data at all stages of the cross-validation procedure. For example, if data from individuals in the testing dataset are used to calculate estimated breeding values (EBVs) for model training, then the testing and training datasets are overlapping, even if the testing individuals themselves are excluded from model training (Amer and Banos 2010).

  • Pre-selection of features (e.g., markers) based on the full dataset before cross-validation. All aspects of model specification and training that rely on the observed phenotypes should be performed only on the training partitions, without respect to the testing partition. For example, if a large number of candidate markers are available but only a portion will be included in the final model, the selection of markers (i.e., features) should be done using only the training partition of phenotypes, and the selection itself should be repeated in each replicate of the cross-validation on each new training dataset. If the feature selection is done only once on the whole dataset before cross-validation begins, this can lead to biased estimates of model accuracy (Hastie et al. 2009).

If these mistakes are avoided, cross-validation generally works well for comparing among single-trait methods, and in some cases for multi-trait methods. However, our goal in this paper is to highlight a challenge with using cross-validation to choose between single-trait methods and multi-trait methods; specifically, multi-trait methods that use information from “secondary” traits measured on the target individuals to inform the prediction of their focal trait(s). In this case, standard cross-validation approaches lead to biased results. As we discuss below, the source of bias is not data leakage between the training and testing data per se, but correlated errors with respect to the true genetic merit between the secondary traits in the training data and the focal trait in the testing data. Note that this issue only occurs when the multiple traits are measured on the same individuals and the traits share non-genetic covariance. When traits are measured on different individuals, the standard cross-validation approach is appropriate.

In the following sections, we first describe the opportunity offered by multi-trait genomic prediction models in this setting, and the challenge in evaluating them. We then develop a simulation study that highlights the extent of the problem. Next, we propose three partial solutions that lead to fairly consistent model selections between single and multi-trait models under certain situations. Finally, we draw conclusions on when this issue is likely to arise and when it can be safely ignored.

General Setting

Multi-trait genomic prediction is useful in two general settings: 1) When the overall value of an individual depends on each trait simultaneously (e.g., fruit number and fruit size) and these traits are correlated, and 2) When a focal trait is difficult or expensive to measure on every individual, but other correlated traits are more readily available (Thompson and Meyer 1986; Pszczola et al. 2013; Lado et al. 2018). While multi-trait models are clearly necessary in the first setting, in the second, the value of the secondary traits depends on several factors, including i) the repeatability of the focal and secondary traits, ii) the correlations among the traits and the cause of the correlations (i.e., genetic vs. non-genetic), and iii) the relative expenses of collecting data on each trait.

Here we focus on the goal of predicting a single focal trait using information from both genetic markers (or pedigrees) and phenotypic information on other traits. Even within this context, there are two distinct prediction settings: 1) Predicting the focal trait value for new individuals that are yet to be phenotyped for any of the traits, and 2) Predicting the focal trait value for individuals that have been partially phenotyped; phenotypic values for the secondary traits are known and we wish to predict the individual’s genetic value for the focal trait. These settings were described by Burgueño et al. (2012) as CV1 and CV2, respectively, although those authors focused on multi-environment trials rather than single experiments with multiple traits per individual. The same naming scheme has since been extended to the more general multiple-trait prediction scenarios (Lado et al. 2018).

The key difference between CV1 and CV2-style multi-trait prediction is that in the former, the secondary traits help refine estimates of the genetic values of relatives of the individuals we wish to predict, while in the latter, the secondary traits provide information directly about the genetics of the target individuals themselves. This direct information on the target individuals is generally useful (as we demonstrate below). However, it comes with a cost for the evaluation of prediction accuracy by cross-validation. Since we do not know the true genetic values for the testing individuals, we must either use a model to estimate the genetic values or simply use their phenotypic value as a proxy. Unfortunately, if we use our genetic model to estimate these values, we are breaking the independence between the testing and training data, and therefore have biased estimates of cross-validation accuracy. On the other hand, if we simply use the phenotypic values of the focal trait as our predictand, these may be biased toward or away from the true genetic values depending on the non-genetic correlation between the focal and secondary traits. This leads to either over- or under-estimation of the prediction accuracy of our multi-trait models. In realistic scenarios, this can lead users to select worse models.

Materials and Methods

We used a simulation study to explore conditions in which naive cross-validation experiments as described above lead to sub-optimal choices between single and multi-trait genomic prediction methods. Our simulations were designed to mimic the process of using cross-validation to compare single and multi-trait models based on their prediction accuracies. We repeated this simulation across scenarios with different genetic architectures for two traits: a single “focal” trait and a single “secondary” trait. Specifically, we modified the heritability and correlation structure of the two traits. These are the most important parameters for determining the relative efficiencies of single- and multi-trait prediction models (Thompson and Meyer 1986). Sample size and level of genomic relatedness will also affect the comparisons, but are likely to change the relative performances of the models and the accuracy of cross-validation only quantitatively (not qualitatively).

To make our simulations realistic, we based them on genomic marker data from 803 lines from a real wheat breeding program (Lopez-Cruz et al. 2015). We downloaded the genomic relationship matrix $\mathbf{K}$ based on 14,217 GBS markers from this population. We used this relationship matrix to generate a set of simulated datasets covering all combinations of the following parameters: the relative proportions of genetic and non-genetic variation for each trait ($h^2 \in \{0.2, 0.6\}$), and the genetic and non-genetic correlations between the traits ($\rho_g \in \{0, 0.3, 0.6\}$, $\rho_R \in \{-0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6\}$), drawing trait values for each simulation from multivariate normal distributions. In particular, we set:

$$\mathbf{Y} = \mathbf{U} + \mathbf{E}, \quad \mathbf{U} \sim \mathrm{MN}(\mathbf{0}, \mathbf{K}, \mathbf{G}), \quad \mathbf{E} \sim \mathrm{MN}(\mathbf{0}, \mathbf{I}_n, \mathbf{R})$$
$$\mathbf{G} = \begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix} = \begin{bmatrix} h_1^2 & \rho_g h_1 h_2 \\ g_{12} & h_2^2 \end{bmatrix}, \qquad \mathbf{R} = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} = \begin{bmatrix} 1 - h_1^2 & \rho_R\sqrt{(1 - h_1^2)(1 - h_2^2)} \\ r_{12} & 1 - h_2^2 \end{bmatrix} \qquad (1)$$

where $\mathrm{MN}(\cdot)$ is the matrix normal distribution, $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2]$ are the phenotypic values for the two traits in the $n$ individuals, $\mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2]$ are the true genetic values for the two traits, and $\mathbf{E} = [\mathbf{e}_1, \mathbf{e}_2]$ are the true non-genetic deviations for the two traits. We repeated this process 500 times for each of the 42 combinations of the genetic architecture parameters and for each of the simulation settings we describe below. To improve the consistency of the simulations, we used the same draws from a standard-normal distribution for all 42 parameter combinations, but new draws for each of the 500 simulations.
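A minimal sketch of this simulation step in R is shown below. It substitutes a stand-in relationship matrix built from random markers for the real wheat $\mathbf{K}$ (which we cannot reproduce here), and picks one of the 42 parameter combinations; all object names are illustrative.

```r
# Sketch of the simulation in Equation 1, with a stand-in K.
set.seed(1)
n <- 200
M <- scale(matrix(rbinom(n * 1000, 2, 0.3), n))   # hypothetical marker matrix
K <- tcrossprod(M) / ncol(M) + diag(1e-6, n)      # VanRaden-style K, jittered PD
rownames(K) <- colnames(K) <- paste0("line", 1:n)

h2    <- c(0.2, 0.6)              # heritabilities h1^2 and h2^2
rho_g <- 0.6; rho_R <- -0.4       # genetic and non-genetic correlations
g12   <- rho_g * sqrt(h2[1] * h2[2])
r12   <- rho_R * sqrt((1 - h2[1]) * (1 - h2[2]))
G <- matrix(c(h2[1], g12, g12, h2[2]), 2)
R <- matrix(c(1 - h2[1], r12, r12, 1 - h2[2]), 2)

# Matrix-normal draws: rows have covariance K, columns G (or R)
U <- t(chol(K)) %*% matrix(rnorm(n * 2), n) %*% chol(G)
E <- matrix(rnorm(n * 2), n) %*% chol(R)
Y <- U + E
```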

After creating the 803 simulated individuals, we randomly divided them into a training partition and a testing partition. We arranged the rows of Y so that the testing individuals were first, and correspondingly partitioned K into:

$$\mathbf{K} = \begin{bmatrix} \mathbf{K}_{nn} & \mathbf{K}_{no} \\ \mathbf{K}_{on} & \mathbf{K}_{oo} \end{bmatrix}. \qquad (2)$$

Here and below, the subscript n refers to the testing partition (i.e., “new” individuals) and the subscript o refers to the training partition (i.e., “old” individuals). We use the hat symbol (^) to denote parameter estimates or predictions.

We then fit single- and multi-trait linear mixed models to the training data and used these model fits to predict the genetic values for the focal trait (trait 1) in the testing partition.

Specifically, for the single-trait method we fit a univariate linear mixed model to the training data yo1:

$$\mathbf{y}_{o1} = \mu_1 + \mathbf{u}_{o1} + \mathbf{e}_{o1}, \quad \mathbf{u}_{o1} \sim N(\mathbf{0}, g_{11}\mathbf{K}_{oo}), \quad \mathbf{e}_{o1} \sim N(\mathbf{0}, r_{11}\mathbf{I}_{n_o}) \qquad (3)$$

by Restricted Maximum Likelihood (REML) using the relmatLmer function of the R package lme4qtl (Ziyatdinov et al. 2018) and extracted the BLUPs $\hat{\mathbf{u}}_{o1}$. Note: an expanded version of these derivations is provided in the Appendix. We then calculated predicted genetic values for the testing partition, $\mathbf{u}_{n1}$, as:

$$\hat{\mathbf{u}}_{n1}^{(1)}|\hat{\mathbf{u}}_{o1} = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1}. \qquad (4)$$
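Continuing the simulation sketch above, this fit and prediction might look as follows in R, using lme4qtl::relmatLmer as in the paper; the partition indices are illustrative.

```r
# Sketch of the single-trait fit (Eq. 3) and prediction (Eq. 4).
library(lme4qtl)

test  <- 1:20                    # "new" individuals (subscript n)
train <- 21:200                  # "old" individuals (subscript o)
dat_o <- data.frame(id = rownames(K)[train], y1 = Y[train, 1])
m1 <- relmatLmer(y1 ~ (1 | id), data = dat_o,
                 relmat = list(id = K[train, train]))
u_hat_o1 <- ranef(m1)$id[dat_o$id, 1]    # BLUPs for the training lines

# Equation 4: project the training BLUPs onto the testing partition
u_hat_n1 <- K[test, train] %*% solve(K[train, train], u_hat_o1)
```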

For the multi-trait model, we stacked the vectors of the two traits in the training dataset into the vector $\mathbf{y}_o = [\mathbf{y}_{o1}', \mathbf{y}_{o2}']'$ and fit:

$$\mathbf{y}_o = \boldsymbol{\mu} + \mathbf{u}_o + \mathbf{e}_o, \quad \mathbf{u}_o \sim N(\mathbf{0}, \mathbf{G}\otimes\mathbf{K}_{oo}), \quad \mathbf{e}_o \sim N(\mathbf{0}, \mathbf{R}\otimes\mathbf{I}_{n_o}) \qquad (5)$$

using the relmatLmer function, and extracted the estimates $\hat{\boldsymbol{\mu}} = [\hat{\mu}_1, \hat{\mu}_2]$, $\hat{\mathbf{G}}$, $\hat{\mathbf{R}}$, and the BLUPs $\hat{\mathbf{u}}_o$.

To make predictions of the genetic values for the focal trait in the testing partition in the CV1 case, without use of $\mathbf{y}_{n2}$, we calculated:

$$\hat{\mathbf{u}}_{n1}^{(2)}|\hat{\mathbf{u}}_{o1} = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1}, \qquad (6)$$

which has the same form as for the single-trait model, but the input BLUPs $\hat{\mathbf{u}}_{o1}$ are different.

To make predictions of the genetic values for the focal trait in the testing partition in the CV2 case, using the phenotypic observations of the secondary trait $\mathbf{y}_{n2}$, we used a two-step method. First, we estimated $\hat{\mathbf{u}}_o$ as above, based on both traits in the training data. Then we combined these estimates with the observed phenotypes of the testing data to calculate genetic predictions for the testing individuals:

$$\hat{\mathbf{u}}_{n1}^{(3)}|\mathbf{y}_{n2},\hat{\mathbf{u}}_o = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1} + \hat{g}_{12}(\mathbf{K}^{-1})_{nn}^{-1}\hat{\mathbf{V}}_c^{-1}(\mathbf{y}_{n2} - \hat{\mu}_2 - \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o2}), \qquad (7)$$

where $\hat{\mathbf{V}}_c = \hat{g}_{22}(\mathbf{K}^{-1})_{nn}^{-1} + \hat{r}_{22}\mathbf{I}_n$. This two-step method will be slightly less accurate than a one-step method that used $\mathbf{y}_{n2}$ during the estimation of $\hat{\mathbf{u}}_o$, but it is much easier to implement in breeding programs because no genotype or phenotype data on the evaluation individuals are needed during the model training stage.
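A sketch of this two-step CV2 prediction, continuing the sketches above, is given below. For simplicity the true G and R stand in for the REML estimates from Equation 5 (as in Figure S1); in practice they would come from the multi-trait fit.

```r
# Sketch of the two-step CV2 prediction (Eq. 7); true G, R stand in for
# the REML estimates, and training means stand in for the fixed effects.
Ghat <- G; Rhat <- R
Koo <- K[train, train]; Kno <- K[test, train]
no  <- length(train);   nn  <- length(test)

# Step 1: multi-trait BLUPs for the training lines, E[u_o | y_o]
Vo  <- Ghat %x% Koo + Rhat %x% diag(no)        # cov(y_o), traits stacked
yo  <- c(Y[train, 1] - mean(Y[train, 1]),
         Y[train, 2] - mean(Y[train, 2]))
u_o <- (Ghat %x% Koo) %*% solve(Vo, yo)
u_o1 <- u_o[1:no]; u_o2 <- u_o[no + 1:no]

# Step 2: add the secondary-trait phenotypes of the testing lines
Kinv_nn <- solve(solve(K)[test, test])         # (K^-1)_nn^-1
Vc   <- Ghat[2, 2] * Kinv_nn + Rhat[2, 2] * diag(nn)
res2 <- Y[test, 2] - mean(Y[train, 2]) - Kno %*% solve(Koo, u_o2)
u_hat_n1_cv2 <- drop(Kno %*% solve(Koo, u_o1) +
                     Ghat[1, 2] * Kinv_nn %*% solve(Vc, res2))
```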

We measured the accuracy of these three predictions by calculating the correlation between the prediction $\hat{u}_{n1}^{(i)}$ and three predictands over the 500 simulations:

  • $u_{n1}$: The true genetic values.

  • $y_{n1}$: The phenotypic values of the testing individuals.

  • $\tilde{u}_{n1}$: The estimated genetic values of the validation individuals using the full dataset (including $y_{n1}$).

For the second accuracy measure, which uses phenotypic values as predictands, we “corrected” the correlations by dividing by the true value of $\sqrt{h_1^2}$ to account for the larger variance of $y_{n1}$ relative to $u_{n1}$. This affects the denominator of the correlation (Daetwyler et al. 2013), but since it is the same across methods, it does not affect their comparison.

As described below, we also simulated phenotypes for an additional set of individuals, $\mathbf{y}_x$, not included in either the training or testing partitions. These individuals were selected to be close relatives of the testing-partition individuals but experienced different micro-environments.

For each combination of genetic parameters, we declared the “best” prediction method to be the one with the highest average correlation with the true genetic values across the 500 simulations. Then we counted the proportion of the simulations in which this “best” method actually had the highest estimated accuracy when scored against yn1.
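Continuing the sketches above, the scoring for one replicate against the first two predictands might look like this (the third predictand, BLUPs from the complete dataset, is computed analogously but, as the Results show, is invalid):

```r
# Accuracy scores for one replicate; true h1^2 is known by construction here
acc_true  <- cor(u_hat_n1_cv2, U[test, 1])                 # vs. true u_n1
acc_naive <- cor(u_hat_n1_cv2, Y[test, 1]) / sqrt(h2[1])   # vs. y_n1, corrected
```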

Data Availability

Scripts for running all simulations and analyses described here are available at https://github.com/deruncie/multiTrait_crossValidation_scripts. Supplemental material available at FigShare: https://doi.org/10.25387/g3.9762899.

Results

Although we ran simulations for two levels of heritability of the focal trait ($h_1^2 \in \{0.2, 0.6\}$), we present results only for $h_1^2 = 0.2$. This is the most difficult setting for prediction, since the heritability of the trait is low, but also the setting in which we would expect the greatest benefit from multi-trait models. Results for $h_1^2 = 0.6$ were qualitatively similar, but with higher overall prediction accuracies for all methods.

Accuracy of single and multi-trait methods in simulated data

With $h_1^2 = 0.2$, the true accuracy of prediction was moderate for all methods ($\mathrm{cor}(\hat{u}_{n1}, u_{n1})$ between 0.4 and 0.6; Figure 1). Prediction accuracies for the single-trait method were constant across settings with different correlation structures because information from the secondary trait was not used.

Figure 1.


True prediction accuracy of single-trait and multi-trait prediction methods in simulated data. 500 simulations were run for each heritability of the secondary trait ($h_2^2 \in \{0.2, 0.6\}$) and each combination of genetic and non-genetic correlation between the two traits ($\rho_g \in \{0, 0.3, 0.6\}$, $\rho_R \in \{-0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6\}$), all with $h_1^2 = 0.2$. For each simulation, we used 90% of the individuals as training data to fit linear mixed models (either single- or multi-trait), predicted the genetic values of the remaining validation individuals, and then measured the Pearson’s correlation between the predicted ($\hat{u}_{n1}$) and true ($u_{n1}$) genetic values. In the CV1 method, we used only information on the training individuals to calculate $\hat{u}_{n1}$. In the CV2 method, we used the training individuals to calculate $\hat{u}_o$ and combined this with the observed phenotypes for the secondary trait on the validation individuals ($y_{n2}$). Curves show the average correlation for each method across the 500 simulations. Ribbons show $\pm 1.96 \times \mathrm{SE}$ over the 500 simulations.

The “standard” multi-trait model (i.e., CV1-style), which used phenotypic information only from the training partition, slightly out-performed the single-trait model in some settings, more so when the genetic and non-genetic correlations between traits were large and opposing and when the heritability of the secondary trait was high (Thompson and Meyer 1986). However, it performed slightly worse whenever the genetic and residual correlations between traits were low. This was caused by inaccuracy in the estimation of the two covariance parameters ($\hat{g}_{12}$, $\hat{r}_{12}$). Neither multi-trait model performed worse than the single-trait model when the true G and R matrices were used (Figure S1), which we also verified by calculating the expected prediction accuracies analytically (see Appendix). In real data, multi-trait models require estimating more (co)variance parameters and therefore can show reduced performance when data are limited.

The CV2-style multi-trait method, which leverages additional phenotypic information on the secondary trait from the testing partition itself, showed dramatic improvements in prediction accuracy whenever genetic correlations among traits were large, regardless of the non-genetic correlation between the traits. This is similar to the benefits seen by Rutkoski et al. (2016) and Lado et al. (2018). When the heritability of the secondary trait was high, the improvement in prediction accuracy was particularly dramatic (accuracies increasing to approximately 0.6). This is the potential advantage of incorporating secondary traits into prediction methods. However, the CV2 method also requires estimating G and R, and its performance was lower than the single-trait method whenever both genetic and residual correlations were low.

Therefore, multi-trait methods will not always be useful, and it is important to test the relative performance of the different methods in real breeding scenarios. Unfortunately, we never know the true genetic values ($u_{n1}$), and so must use proxy predictands to evaluate our methods in real data (Daetwyler et al. 2013; Legarra and Reverter 2018). In Figures 2A-B, we compare the prediction accuracies of the three methods using two candidate predictands: the observed phenotypic values ($y_{n1}$) and estimated genetic values from a joint model fit to the complete dataset ($\tilde{u}_{n1}$).

Figure 2.


Estimated prediction accuracies based on candidate predictands. For the same set of simulations described in Figure 1, we estimated the prediction accuracies of the three methods using two different candidate predictands: (A) the observed phenotypic value $y_{n1}$ for each validation individual (with the correlation corrected by $1/\sqrt{h_1^2}$), or (B) an estimate of the genetic value of each validation individual based on BLUPs calculated using the complete phenotype data ($\tilde{u}_{n1}$). Solid lines in each panel show the average estimated accuracy for each method across the 500 simulations. Ribbons show $\pm 1.96 \times \mathrm{SE}$ over the 500 simulations. Dotted lines show the average true accuracy from Figure 1.

Using the observed phenotypic values ($y_{n1}$) as the predictand, the estimated accuracies of both the single-trait and CV1-style multi-trait prediction methods consistently under-estimated their true prediction accuracies. This is expected because, in this setting, 80% of the phenotypic variation is non-genetic and cannot be predicted based on relatives alone. We therefore follow common practice and report a “corrected” estimate of the prediction accuracy, dividing by $\sqrt{h_1^2}$, in Figure 2A. This correction factor itself must be estimated in real data, but when comparing models the same value of $\hat{h}_1^2$ should be used for each model so that differences in these estimates do not bias model selection.

In contrast, the estimated accuracy of the CV2-style multi-trait method varied dramatically across simulated datasets. We tended to overestimate the true accuracy when both genetic and non-genetic correlations were large and in the same direction, and dramatically underestimate the true accuracy when the two correlations were opposing. Importantly, there are situations where the CV2-style method appears to perform worse than the single-trait method based on $y_{n1}$ but actually performs better. Therefore, the observed phenotypic values are not reliable predictands for evaluating CV2-style methods when the intent is to estimate true genetic values and $\rho_R \neq 0$.

On the other hand, using estimated genetic values from a joint model fit to the complete dataset ($\tilde{u}_{n1}$) as the predictand led to dramatic over-estimation of the true prediction accuracy for all methods. This is also expected because the training data are used both to train the prediction model and to create the testing dataset, a clear violation of the cross-validation rule that these datasets be kept separate at all stages of the analysis. Again, the bias was most severe for the CV2-style method. Since this predictand is clearly invalid, we do not consider it further.

Effects of predictand on model selection

To demonstrate the impact of biased estimates of model accuracy using yn1 on the effectiveness of model selection, we assessed in each simulation whether the single-trait or multi-trait methods had a higher estimated accuracy, and compared this result to the true difference in prediction accuracies in that simulation setting.

Figure 3 shows that selecting between the single-trait and CV1-style multi-trait models based on estimated accuracy using $y_{n1}$ generally works well. Whenever one method is clearly better, we choose that method more than 50% of the time, and we never choose the better method less than 50% of the time, even when the two methods are approximately equivalent.

Figure 3.


Impact of using phenotypic data to select between single-trait and multi-trait prediction methods. For each of the 500 simulations per genetic architecture described in Figure 1, we compared the estimated accuracy of a multi-trait prediction to the single-trait prediction. We then calculated the fraction of times that the selected model had higher average true accuracy in that setting (as shown in Figure 1).

In contrast, when selecting between the single-trait and CV2-style multi-trait methods based on estimated accuracy using $y_{n1}$, the differential bias in estimated accuracy between the two methods frequently led to sub-optimal model selection (Figure 3B). With opposing genetic and non-genetic covariances between the two traits, the better model was chosen less than 10% of the time. In these situations, using $y_{n1}$ to select a prediction method will obscure real opportunities to enhance prediction accuracy using multi-trait prediction models.

Alternative estimates of multi-trait prediction accuracy

The CV2-style prediction method can be powerful because $y_{n2}$ provides information on the genetic values of the testing individuals themselves (through $u_{n2}$), while $y_{o1}$ only provides indirect information on the genetic values of the testing individuals through their relatives. However, estimating prediction accuracy using $y_{n1}$ fails for the CV2-style prediction method because both the focal and secondary traits are observed on the same individuals and therefore share the same non-genetic sources of variation. Since the CV2 method uses $y_{n2}$, non-genetic deviations for the secondary trait ($e_{n2}$) push $\hat{u}_{n1}$ either toward or away from $y_{n1}$, depending on the estimated correlation $\hat{r}_{12}$. This either inflates or deflates the estimated accuracy, leading to incorrect model choices.

We now compare the effectiveness of three strategies for estimating cross-validation accuracy of CV2-style methods. To our knowledge, the second and third strategies are novel. Because the three methods have different data requirements, we implemented different experimental designs for each evaluation strategy.

Parametric estimate of accuracy:

Our prediction $\hat{u}_{n1}$ is similar to a selection index because it combines multiple pieces of information into a linear prediction. The accuracy of an index $I$ is $\mathrm{cor}_g(I, y)\sqrt{h_I^2}$: the genetic correlation between the index and the phenotype multiplied by the square root of the heritability of the index (Falconer and Mackay 1996; Lopez-Cruz et al. 2019). Neither the genetic correlation nor the heritability can be directly observed, but we can estimate both as parameters of a multi-trait linear mixed model with the same form as (5). To be a valid cross-validation score, these parameters must be estimated with data only from the validation partition, rather than by reusing estimates from model training. Since model training and model evaluation both require estimates of G and R, we divided the data 50:50 into training and validation partitions in each simulation, thus using 404 lines to train the prediction models and 403 lines to evaluate the prediction accuracy.
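Given fitted (co)variance parameters from such a bivariate model of the index $I = \hat{u}_{n1}$ and the focal phenotype, the parametric accuracy can be assembled as below; the argument names are illustrative stand-ins for the outputs of that validation-partition fit.

```r
# Parametric accuracy from selection-index theory: cor_g(I, y) * sqrt(h_I^2).
# gII, gI1, g11, rII are hypothetical (co)variance estimates from a
# bivariate mixed model fit to the validation partition only.
acc_parametric <- function(gII, gI1, g11, rII) {
  cor_g <- gI1 / sqrt(gII * g11)    # genetic correlation between I and y
  h_I   <- sqrt(gII / (gII + rII))  # square root of the index heritability
  cor_g * h_I
}
```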

The parametric estimates of prediction accuracy for the CV2 method were less biased than $\mathrm{cor}(\hat{u}_{n1}^{(3)}, y_{n1})$, the non-parametric estimate using $y_{n1}$ as a predictand (Figure 4A; compare to Figure 2). This led to more consistent model selections between the CV2 and single-trait methods (Figure 4B). However, the parametric approach still underestimated the accuracy of the CV2 method when the genetic and residual correlations were in opposite directions, leading to model selection accuracies below 50%. This negative bias was due to poor estimation of G and R for the selection indices, given the limited sample sizes remaining after the data were partitioned.

Figure 4.


Parametric accuracy estimates. Estimated prediction accuracies and model selection accuracies for CV2-style methods using the parametric method. (A) Solid curves: estimates of prediction accuracy. Dashed curves: true prediction accuracy based on $u_{n1}$. Dotted curves: estimated prediction accuracy using $y_{n1}$ from Figure 2A. Ribbons show $\pm 1.96 \times \mathrm{SE}$ over the 500 simulations. (B) Solid curves: fraction of the 500 simulations in which the better method (between CV2 and single-trait) for predicting the true genetic values was correctly selected. Dotted curve: model selection based on the naive prediction accuracy.

Semi-parametric estimate of accuracy:

In principle, we can correct the bias in the non-parametric accuracy estimate ($\mathrm{cor}(\hat{u}_{n1}^{(3)}, y_{n1})$) for the CV2-style method by calculating an adjustment factor based on its theoretical bias relative to the true accuracy ($\mathrm{cor}(\hat{u}_{n1}^{(3)}, u_{n1})$). This is similar to the semi-parametric accuracy estimates presented by Legarra and Reverter (2018), and to the “correction” of accuracy estimates by $1/\sqrt{h_1^2}$ used above to account for the difference in variance between $y_{n1}$ and $u_{n1}$. As we derive in the Appendix, the difference between the true correlation of a CV2-style method and its CV2 cross-validation estimate, when a single secondary trait is used, is:

$$\frac{\hat{g}_{12}\, r_{21}}{\sqrt{\mathrm{var}(\hat{u}_{n1}^{(3)})\,\mathrm{var}(y_{n1})}} \cdot \frac{\mathrm{tr}\!\left(\mathbf{S}\,(\mathbf{K}^{-1})_{nn}^{-1}\,\hat{\mathbf{V}}_c^{-1}\right)}{n-1}, \qquad (8)$$

with $\mathbf{V}_c$ defined above and $\mathbf{S} = \mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}'$. The bias is a function of the correlation among traits through the product $\hat{g}_{12}r_{21}$ (the second term does not involve these parameters and in most cases is approximately 1). It is large and positive (i.e., accuracy is overestimated) when $\hat{g}_{12}$ and $r_{12}$ are large and in the same direction, and large and negative (i.e., accuracy is underestimated) when these covariances are in opposite directions. Given this result, we can correct $\mathrm{cor}(\hat{u}_{n1}^{(3)}, y_{n1})$ by subtracting (8) from the estimated correlation, again correcting by $1/\sqrt{h_1^2}$ (Figure 5).
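Continuing the sketches above, the correction can be computed directly from the fitted quantities (here, again, the true G and R stand in for their estimates):

```r
# Semi-parametric correction: subtract the Eq. 8 bias, rescale by 1/sqrt(h1^2)
S <- diag(nn) - matrix(1 / nn, nn, nn)      # centering matrix S
bias <- Ghat[1, 2] * Rhat[1, 2] *
  sum(diag(S %*% Kinv_nn %*% solve(Vc))) /
  ((nn - 1) * sqrt(var(u_hat_n1_cv2) * var(Y[test, 1])))
acc_semi <- (cor(u_hat_n1_cv2, Y[test, 1]) - bias) / sqrt(h2[1])
```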

Figure 5.


Semi-parametric accuracy estimates. Estimated prediction accuracies and model selection accuracies for CV2-style methods after semi-parametric correction. (A) Solid curves: corrected estimates of prediction accuracy. Dashed curves: uncorrected estimates of prediction accuracy based on $y_{n1}$ (as in Figure 2A). Dotted curves: true prediction accuracy. Ribbons show $\pm 1.96 \times \mathrm{SE}$ over the 500 simulations. (B) Solid curves: fraction of the 500 simulations in which the better method (between CV2 and single-trait) for predicting the true genetic values was correctly selected. Dotted curve: model selection based on the naive, uncorrected prediction accuracy.

Clearly, the quality of this correction depends on the accuracy of $\hat{g}_{12}$ and $\hat{r}_{12}$ as estimates of $g_{12}$ and $r_{12}$. In Figure 5A, we show that the corrected correlation estimate has greatly reduced bias, particularly the dependence of the bias on the non-genetic covariance between the traits ($r_{12}$). However, the correction is not perfect: corrected accuracy estimates tend to overestimate the true accuracy. This over-estimation is caused by error in $\hat{\mathbf{G}}$ and $\hat{\mathbf{R}}$ as estimates of the true covariance matrices; the correction factor is nearly perfect when the true covariance matrices are used in place of their estimates (Figure S2).

Using the semi-parametric accuracy estimates, we are more successful at selecting the best model over the range of genetic architectures (Figure 5B). The frequency of selecting the correct model rarely drops below 50% and is relatively constant with respect to the residual correlation between traits.

CV2* cross-validation strategy:

Since the biased estimate of prediction accuracy for CV2-style methods is due to non-genetic correlations between $y_{n2}$, used for prediction, and the predictand $y_{n1}$, an alternative strategy, which we call CV2*, is to use phenotypic information on close relatives of the testing individuals ($y_{x1}$) to validate the model predictions in place of the testing individuals’ own focal trait phenotypes ($y_{n1}$). These “surrogate” validation individuals must also be excluded from the model training and raised so that they do not share the same non-genetic deviations as the testing individuals: $\mathrm{cor}(e_{x1}, e_{n1}) = 0$. Therefore, the predictions $\hat{u}_{n1}$ will not be artificially pushed toward or away from $y_{x1}$ (measured on the relatives) by $y_{n2}$ (measured on the testing individuals), preventing this source of bias in the estimated accuracy.

We implemented the CV2* cross-validation strategy in two ways, simulating two different breeding schemes.

First, we considered the situation, common in plant breeding, where inbred lines (i.e., clones) are tested and each line is grown in several plots in a field (Bernardo 2002). Here, we can use one set of clones for prediction ($y_{n2}$) and the other set of clones as trait-1 surrogates ($y_{x1}$). Since they are clones, $u_{x1} = u_{n1}$, and $y_{n2}$ is just as good for predicting $u_{x1}$ as $y_{x2}$. Generally in this type of experiment, replicate plots of each line would be combined prior to analysis into a single line mean (or BLUP). But since we require $y_{n2}$ and $y_{x1}$ to be recorded from separate individuals, each value will have twice the residual variance because it is based on half as much data as the line means used for model training. Therefore, in our simulations we drew two independent residual values for each line in the validation partition, each with a residual covariance of $2\mathbf{R}$. For these simulations, we used a 90:10 training:validation split.
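Continuing the sketches above, the split-clone design can be simulated as follows; the validation indices are illustrative.

```r
# Split-clone CV2*: two independent records per validation line, each
# with residual covariance 2R, so the two records share no residuals.
val <- sample(n, round(0.1 * n))          # 10% validation lines
E1 <- matrix(rnorm(length(val) * 2), length(val)) %*% chol(2 * R)
E2 <- matrix(rnorm(length(val) * 2), length(val)) %*% chol(2 * R)
y_n2 <- U[val, 2] + E1[, 2]   # secondary trait: used in the CV2 prediction
y_x1 <- U[val, 1] + E2[, 1]   # focal trait on the clone: the predictand
```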

Second, we considered the situation, more common in animal breeding, where clones are not available. In this case, the best option for CV2* is to select pairs of closely related individuals and hold both members of each pair out of the training set; we use the first individual of the pair to provide $y_{n2}$ and the second to provide $y_{x1}$. To implement this strategy, we again started with a validation partition of 10% of the lines. Then, for each line, we selected the most closely related remaining line ($\arg\max_j K_{ij}$ for validation line $i$) and held this additional set of 10% of the lines out as $y_{x1}$. This left a training partition with only 80% of the lines. The average genetic relatedness of validation partition pairs in these simulations was 0.38.
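The nearest-relative pairing can be sketched as a greedy search over the relationship matrix, continuing the simulation sketch above:

```r
# Greedy nearest-relative pairing for CV2* without clones (no surrogate reuse)
val   <- sample(n, round(0.1 * n))          # 10% validation lines
avail <- setdiff(seq_len(n), val)
surrogate <- integer(length(val))
for (i in seq_along(val)) {
  j <- avail[which.max(K[val[i], avail])]   # arg max_j K_ij for line i
  surrogate[i] <- j
  avail <- setdiff(avail, j)
}
train80 <- avail                            # ~80% of lines left for training
mean(K[cbind(val, surrogate)])              # average relatedness of the pairs
```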

Figure 6A shows that, for the first setting with split clones, estimates of prediction accuracy for CV2-style predictions by CV2* are vastly more accurate than the naive estimates based on $y_{n1}$, though they are slightly downwardly biased because of the increased residual variance of $y_{n2}$ and $y_{x1}$. Model selection works fairly well across all settings when clones are used (Figure 6B, blue lines), although with slightly lower success rates than for the semi-parametric method. However, when we implemented the second approach with nearest relatives (not clones), model selection was rarely successful: we consistently chose the wrong model across most simulation settings unless the genetic and residual correlations were opposing. This is because the validation pairs were too distantly related to provide any additional information on genetic merit relative to individuals in the training partition. Interestingly, this method is relatively successful in the situations where the parametric method fails (see Figure 4B), so the two may be complementary.

Figure 6.


Non-parametric CV2* accuracy estimates. Estimated prediction accuracies and model selection accuracies based on the phenotypic values of close relatives. (A) Solid curves: estimated prediction accuracies of the CV2-style and single-trait methods evaluated against $y_{x1}$ using clones. Dashed curves: true prediction accuracies of each method. Ribbons show $\pm 1.96 \times \mathrm{SE}$ over the 500 simulations. (B) Solid curves: fraction of the 500 simulations in which the better method (between CV2 and single-trait) for predicting the true genetic values was correctly selected based on the phenotypes of relatives of the testing individuals. Dotted curve: fraction of correct models selected based on the naive estimator.

Discussion

Our study highlights a potential pitfall in using cross-validation to estimate the accuracy of multi-trait genomic prediction methods. When secondary traits are used to aid in the prediction of focal traits and these secondary traits are measured on the individuals to be tested, cross-validation evaluated against phenotypic observations can be severely biased and result in poor model choices. Unfortunately, we rarely know the true genetic value of any individual and therefore can only evaluate our models with phenotypic data (multi-trait-derived estimates of genetic values are even more severely biased, as we demonstrated above (Figure 2B)). We could not find earlier discussions of this problem in the literature. However, a growing number of studies aim to use cheap or early-life traits to improve predictions of genetic worth for later-life traits (e.g., Pszczola et al. 2013; Rutkoski et al. 2016; Fernandes et al. 2017; Lado et al. 2018). Therefore, the issue is becoming more important.

The problematic bias in the cross-validation-based accuracy estimates is caused by non-genetic correlations between the predictors that we want to use (i.e., the secondary traits) and our best predictand (the phenotypic value of the focal trait in the testing individuals); non-genetic correlations between two traits measured on the same individual are expected. However, in some cases this correlation is zero by construction, and standard cross-validation approaches are then valid. For example, in the original description of the CV2 cross-validation method by Burgueño et al. (2012), each trait was measured in a different environment. In this case, the traits were measured on different individuals and therefore did not share any non-genetic correlation. Also, CV1-style methods do not suffer from this problem because phenotypic information on the secondary traits of the testing individuals is not used for prediction. Similarly, this bias does not occur when the target of prediction is the phenotypic value itself (rather than the individual’s genetic value). For example, in medical genetics the aim is to predict whether or not a person will get a disease, not her genetic propensity to get the disease had she been raised in a different environment (e.g., Spiliopoulou et al. 2015; Dahl et al. 2016).

We note that the common two-step genomic selection strategy, in which single-trait methods are first used to calculate estimated genetic values for each line and trait and these estimated genetic values are then used as training (and validation) data, does not avoid the problem identified here. Using estimated genetic values instead of phenotypic values will tend to increase the genetic repeatability of the training and validation values, and therefore increase the overall prediction accuracy of all methods. But these estimated genetic values will still be biased by non-genetic variation, and the biases across traits will still be correlated through the non-genetic correlations. Therefore, the same issue will arise.

Also, while we have used a GBLUP-like genomic prediction method for the analyses presented here, the same result will hold for any multi-trait prediction method that aims to use information from $y_{n2}$ when there are non-genetic correlations with $y_{n1}$, i.e., any method that is evaluated with the CV2 cross-validation scheme on multiple traits measured on the same individuals (Calus and Veerkamp 2011; Jia and Jannink 2012; Fernandes et al. 2017). This includes multi-trait versions of the Bayesian alphabet methods (Calus and Veerkamp 2011; Cheng et al. 2018), and neural-network or deep-learning methods (Montesinos-López et al. 2018).

We presented three partial solutions to this problem, spanning from fully parametric to fully non-parametric.

The parametric solution relies on fitting a new multi-trait mixed model to the predicted values and the predictand, with the accuracy estimated as the genetic correlation scaled by the square root of the heritability of the prediction. This solution is always available as long as the individuals in the validation partition have non-zero genomic relatedness and the full dataset is large enough to estimate genetic correlations in both the training and validation partitions. However, it generally worked poorly in our simulations because G and R were not estimated accurately. It may work better with very large datasets. Also, because this parametric approach relies on the same assumptions about the data (i.e., multivariate normality) as the prediction model, it loses some of the guarantees of reliability that completely non-parametric cross-validation methods can claim.

The semi-parametric solution aims to correct the non-parametric correlation estimate for the bias caused by a non-null residual correlation among traits. This correction factor is only needed for CV2-style multi-trait prediction approaches, and is similar to the approach of Legarra and Reverter (2018) for single-trait models. We show that this correction factor can work well, particularly if the covariances among traits are well estimated. We only derived this correction for prediction methods based on linear mixed effect models with a single known genetic covariance structure (i.e., GBLUP- and RKHS-style methods with fixed kernels), although an analogous correction proportional to $\hat{g}_{12}\hat{r}_{12}/\sqrt{\mathrm{var}(\hat{u}_{n1})\,\mathrm{var}(y_{n1})}$ will probably be approximately correct for other methods. However, when covariances are poorly estimated, the correction factor can still lead to biased estimates of model accuracy. We are currently investigating whether Bayesian methods that sample over this uncertainty can be useful, and will implement this method in JWAS (Cheng et al. 2018). This method is semi-parametric, so it also relies on distributional assumptions about the data and may fail when these assumptions are not met.

As a third alternative, we proposed the CV2* cross-validation method, a fully non-parametric approach for assessing CV2-style multi-trait prediction accuracy. CV2* uses phenotypic values of the focal trait from relatives of the testing individuals in place of the phenotypic values of that trait from the testing individuals themselves. If the close relatives are raised independently, they will not share non-genetic variation, removing the source of bias in the cross-validation estimate (Figure 6A). The CV2* method works best when clones of the testing individuals are available. With clones, secondary trait phenotypes of the testing individuals can be used directly to predict focal trait genetic values of their clones because the genetic values are identical. Replicates of inbred lines are frequently used in plant breeding trials (Bernardo 2002). In this case, all replicates should be held out as a group from the training data. The replicates can then be partitioned again into two sets; secondary trait phenotypes from one set can be incorporated into the genetic value predictions for the lines, and these predictions evaluated against the phenotypic values of the other set. To compare this estimate of CV2-style prediction accuracy to the prediction accuracy of a single-trait method, the single-trait method’s predictions should be compared against the same set of replicates of each line (i.e., not a joint average over all replicates of the line, as would be typical for single-trait cross-validation). However, because of the separation of the replicates, each replicate will have higher residual variance, which reduces the accuracy of this method. Clones are less common outside of plant breeding, so more distant relatives must be used instead. In this case, the estimated prediction accuracies of CV2-style methods will be downwardly biased. In our simulations, this approach was not successful despite relatively close relatives being available for each validation line.

In our simulations, the semi-parametric approach was the most reliable and the fully parametric approach the least reliable. However, the fully parametric approach is always possible to implement, while our semi-parametric and non-parametric approaches may not be, depending on the prediction model used and the structure of the experimental design.

Conclusions

We expect that multi-trait methods for genomic prediction carry great promise for accelerating both plant and animal breeding. However, there is a need for better methods to train and evaluate these prediction methods so that competing models can be accurately compared. We have presented and compared three contrasting approaches to evaluating multi-trait methods. Each is preferable to naive cross-validation when secondary traits of the target individuals are used to predict their focal traits. However, the approaches can give contrasting answers for different datasets, so careful consideration of which evaluation method to use is critical when choosing among prediction methods.

Acknowledgments

We would like to thank Erin Calfee and Graham Coop for suggesting the CV2* method, Gustavo de los Campos for pointing us toward the parametric approach, and two anonymous reviewers for helpful comments. HC’s work is supported by the US Department of Agriculture, Agriculture and Food Research Initiative National Institute of Food and Agriculture Competitive Grant No. 2018-67015-27957. DER was supported by the United States Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA), Hatch project 1010469.

Appendix

Here, we derive the genomic predictions $\hat{\mathbf{u}}_{n1}$ given $\mathbf{y}$ for the three prediction models that we use in the main text, and then evaluate the expected covariances between these predictions and the predictands $\mathbf{u}_{n1}$ and $\mathbf{y}_{n1}$. We derive these relations for the more general situation with $p \geq 1$ “secondary” traits and a single “focal” trait.

We start with a phenotypic data matrix $\mathbf{Y}$ with $n$ individuals and $p+1$ traits, where the first trait (first column of $\mathbf{Y}$) is the “focal” trait and the other $p$ traits are “secondary” traits. We first divide $\mathbf{Y}$ into a training partition (“old” individuals) and a testing partition (“new” individuals), arranged with the testing partition first, so that we can partition
$$\mathbf{Y} = \begin{bmatrix} \mathbf{Y}_n \\ \mathbf{Y}_o \end{bmatrix} = \begin{bmatrix} [\mathbf{y}_{n1}\ \mathbf{Y}_{n2}] \\ [\mathbf{y}_{o1}\ \mathbf{Y}_{o2}] \end{bmatrix}.$$
We then work with stacked versions of these phenotype matrices: $\mathbf{y} = \mathrm{vec}(\mathbf{Y})$, $\mathbf{y}_n = \mathrm{vec}(\mathbf{Y}_n)$, $\mathbf{y}_o = \mathrm{vec}(\mathbf{Y}_o)$. Our genetic model for $\mathbf{y}$ is:

$$\begin{aligned}
\mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{u} + \mathbf{e}\\
\boldsymbol{\beta} &= [\boldsymbol{\beta}_1', \boldsymbol{\beta}_2']'\\
\mathbf{u} &\sim N(\mathbf{0}, \mathbf{G}\otimes\mathbf{K})\\
\mathbf{e} &\sim N(\mathbf{0}, \mathbf{R}\otimes\mathbf{I}_n)
\end{aligned}$$

where $\mathbf{G}$ and $\mathbf{R}$ are the genetic and non-genetic (residual) covariance matrices for the $p+1$ traits, and $\mathbf{K}$ is the $n \times n$ genomic relationship matrix among the lines. For convenience below, we partition the trait vectors for the training individuals and the covariance matrices between the “focal” (index 1) and “secondary” (index 2) traits:

$$\mathbf{y}_o = \begin{bmatrix} \mathbf{y}_{o1} \\ \mathbf{y}_{o2} \end{bmatrix}, \quad \mathbf{u}_o = \begin{bmatrix} \mathbf{u}_{o1} \\ \mathbf{u}_{o2} \end{bmatrix}, \quad \mathbf{e}_o = \begin{bmatrix} \mathbf{e}_{o1} \\ \mathbf{e}_{o2} \end{bmatrix}, \quad \mathbf{X}_o\boldsymbol{\beta} = \begin{bmatrix} \mathbf{X}_{o1}\boldsymbol{\beta}_1 \\ \mathbf{X}_{o2}\boldsymbol{\beta}_2 \end{bmatrix}$$
$$\mathbf{G} = \begin{bmatrix} g_{11} & \mathbf{g}_{12} \\ \mathbf{g}_{21} & \mathbf{G}_{22} \end{bmatrix} = \begin{bmatrix} \mathbf{g}_1 \\ \mathbf{G}_2 \end{bmatrix}, \qquad \mathbf{R} = \begin{bmatrix} r_{11} & \mathbf{r}_{12} \\ \mathbf{r}_{21} & \mathbf{R}_{22} \end{bmatrix} = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{R}_2 \end{bmatrix},$$
with $\mathbf{g}_1 = [g_{11}\ \mathbf{g}_{12}]$ the first row of $\mathbf{G}$ and $\mathbf{G}_2 = [\mathbf{g}_{21}\ \mathbf{G}_{22}]$ its last $p$ rows (and similarly for $\mathbf{r}_1$ and $\mathbf{R}_2$),

where scalars are set in normal text, vectors are bold-face lower-case letters, and matrices are bold-face capital letters. Partitions for the testing individuals are analogous. We also partition the genomic relationship matrix and its inverse between the training and testing individuals:

$$\mathbf{K} = \begin{bmatrix} \mathbf{K}_{nn} & \mathbf{K}_{no} \\ \mathbf{K}_{on} & \mathbf{K}_{oo} \end{bmatrix}, \qquad \mathbf{K}^{-1} = \begin{bmatrix} (\mathbf{K}^{-1})_{nn} & (\mathbf{K}^{-1})_{no} \\ (\mathbf{K}^{-1})_{on} & (\mathbf{K}^{-1})_{oo} \end{bmatrix}.$$

Derivation of Genomic Predictions

Single trait predictions

For the single-trait prediction, we begin by estimating $\hat{g}_{11}$, $\hat{r}_{11}$, and $\hat{\boldsymbol{\beta}}_1$ by REML using only $\mathbf{y}_{o1}$. The joint distribution of $\mathbf{u}_{n1}$ and $\mathbf{y}_{o1}$ is:

$$\begin{bmatrix} \mathbf{u}_{n1} \\ \mathbf{y}_{o1} \end{bmatrix} \sim N\left( \begin{bmatrix} \mathbf{0} \\ \mathbf{X}_{o1}\boldsymbol{\beta}_1 \end{bmatrix},\; \begin{bmatrix} g_{11}\mathbf{K}_{nn} & g_{11}\mathbf{K}_{no} \\ g_{11}\mathbf{K}_{on} & g_{11}\mathbf{K}_{oo} + r_{11}\mathbf{I} \end{bmatrix} \right).$$

Let $\mathbf{V}_{o1} = g_{11}\mathbf{K}_{oo} + r_{11}\mathbf{I}$. Then $E[\mathbf{u}_{n1}|\mathbf{y}_{o1}] = g_{11}\mathbf{K}_{no}\mathbf{V}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\boldsymbol{\beta}_1)$, so our prediction is:

$$\hat{\mathbf{u}}_{n1}^{(1)} = \hat{g}_{11}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\hat{\boldsymbol{\beta}}_1). \qquad (9)$$

To simplify, note that the joint distribution of $\mathbf{u}_{o1}$ and $\mathbf{y}_{o1}$ in the training data is:

$$\begin{bmatrix} \mathbf{u}_{o1} \\ \mathbf{y}_{o1} \end{bmatrix} \sim N\left( \begin{bmatrix} \mathbf{0} \\ \mathbf{X}_{o1}\boldsymbol{\beta}_1 \end{bmatrix},\; \begin{bmatrix} g_{11}\mathbf{K}_{oo} & g_{11}\mathbf{K}_{oo} \\ g_{11}\mathbf{K}_{oo} & g_{11}\mathbf{K}_{oo} + r_{11}\mathbf{I} \end{bmatrix} \right).$$

Therefore, $\hat{\mathbf{u}}_{o1}|\mathbf{y}_{o1} = \hat{g}_{11}\mathbf{K}_{oo}\hat{\mathbf{V}}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\hat{\boldsymbol{\beta}}_1)$. Rearranging and plugging this into (9) simplifies to $\hat{\mathbf{u}}_{n1}^{(1)} = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1}$.
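Written out, the rearrangement is:
$$\hat{g}_{11}\hat{\mathbf{V}}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\hat{\boldsymbol{\beta}}_1) = \mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1} \;\;\Rightarrow\;\; \hat{\mathbf{u}}_{n1}^{(1)} = \hat{g}_{11}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\hat{\boldsymbol{\beta}}_1) = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1}.$$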

CV1-style multi-trait predictions

For CV1-style multi-trait prediction, we begin by estimating $\hat{\mathbf{G}}$, $\hat{\mathbf{R}}$, and $\hat{\boldsymbol{\beta}}$ by REML using $\mathbf{y}_o$. The joint distribution of $\mathbf{u}_{n1}$ and $\mathbf{y}_o$ is:

$$\begin{bmatrix} \mathbf{u}_{n1} \\ \mathbf{y}_o \end{bmatrix} \sim N\left( \begin{bmatrix} \mathbf{0} \\ \mathbf{X}_o\boldsymbol{\beta} \end{bmatrix},\; \begin{bmatrix} g_{11}\mathbf{K}_{nn} & \mathbf{g}_1\otimes\mathbf{K}_{no} \\ \mathbf{g}_1'\otimes\mathbf{K}_{on} & \mathbf{G}\otimes\mathbf{K}_{oo} + \mathbf{R}\otimes\mathbf{I} \end{bmatrix} \right).$$

Let $\mathbf{V}_o = \mathbf{G}\otimes\mathbf{K}_{oo} + \mathbf{R}\otimes\mathbf{I}$. Then $E[\mathbf{u}_{n1}|\mathbf{y}_o] = (\mathbf{g}_1\otimes\mathbf{K}_{no})\mathbf{V}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\boldsymbol{\beta})$, so our prediction is:

$$\hat{\mathbf{u}}_{n1}^{(2)} = (\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\hat{\boldsymbol{\beta}}). \qquad (10)$$

As above, to simplify this expression, we form the joint distribution of uo and yo in the training data as:

$$\begin{bmatrix} \mathbf{u}_o \\ \mathbf{y}_o \end{bmatrix} \sim N\left( \begin{bmatrix} \mathbf{0} \\ \mathbf{X}_o\boldsymbol{\beta} \end{bmatrix},\; \begin{bmatrix} \mathbf{G}\otimes\mathbf{K}_{oo} & \mathbf{G}\otimes\mathbf{K}_{oo} \\ \mathbf{G}\otimes\mathbf{K}_{oo} & \mathbf{G}\otimes\mathbf{K}_{oo} + \mathbf{R}\otimes\mathbf{I} \end{bmatrix} \right).$$

Therefore, $\hat{\mathbf{u}}_o|\mathbf{y}_o = (\hat{\mathbf{G}}\otimes\mathbf{K}_{oo})\hat{\mathbf{V}}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\hat{\boldsymbol{\beta}})$. Rearranging and plugging this into (10) simplifies to $\hat{\mathbf{u}}_{n1}^{(2)} = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1}$.

CV2-style multi-trait predictions

For our CV2-style multi-trait prediction, we take a two-step approach. We first estimate $\hat{\mathbf{u}}_o$ from the training individuals and then supplement this with $\mathbf{y}_{n2}$ from the testing individuals. The joint distribution of $\mathbf{u}_{n1}$, $\mathbf{y}_{n2}$, and $\mathbf{u}_o$ is:

$$\begin{bmatrix} \begin{bmatrix} \mathbf{u}_{n1} \\ \mathbf{y}_{n2} \end{bmatrix} \\ \mathbf{u}_o \end{bmatrix} \sim N\left( \begin{bmatrix} \begin{bmatrix} \mathbf{0} \\ \mathbf{X}_2\boldsymbol{\beta}_2 \end{bmatrix} \\ \mathbf{0} \end{bmatrix},\; \begin{bmatrix} \mathbf{G}\otimes\mathbf{K}_{nn} + \begin{bmatrix} 0 & \mathbf{0} \\ \mathbf{0} & \mathbf{R}_{22} \end{bmatrix}\otimes\mathbf{I}_{nn} & \mathbf{G}\otimes\mathbf{K}_{no} \\ \mathbf{G}\otimes\mathbf{K}_{on} & \mathbf{G}\otimes\mathbf{K}_{oo} \end{bmatrix} \right).$$

Conditional on a known value of $\mathbf{u}_o$ from the training individuals, the distribution of $[\mathbf{u}_{n1}'\ \mathbf{y}_{n2}']'$ would be:

$$\begin{bmatrix} \mathbf{u}_{n1} \\ \mathbf{y}_{n2} \end{bmatrix} \Big|\, \mathbf{u}_o \sim N\left( \begin{bmatrix} \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\mathbf{u}_{o1} \\ \mathbf{X}_2\boldsymbol{\beta}_2 + \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\mathbf{u}_{o2} \end{bmatrix},\; \mathbf{G}\otimes\mathbf{K}_{nn} + \begin{bmatrix} 0 & \mathbf{0} \\ \mathbf{0} & \mathbf{R}_{22} \end{bmatrix}\otimes\mathbf{I}_{nn} - (\mathbf{G}\otimes\mathbf{K}_{no})(\mathbf{G}^{-1}\otimes\mathbf{K}_{oo}^{-1})(\mathbf{G}\otimes\mathbf{K}_{on}) \right),$$

which simplifies to:

$$\begin{bmatrix} \mathbf{u}_{n1} \\ \mathbf{y}_{n2} \end{bmatrix} \Big|\, \mathbf{u}_o \sim N\left( \begin{bmatrix} \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\mathbf{u}_{o1} \\ \mathbf{X}_2\boldsymbol{\beta}_2 + \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\mathbf{u}_{o2} \end{bmatrix},\; \begin{bmatrix} g_{11}(\mathbf{K}^{-1})_{nn}^{-1} & \mathbf{g}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1} \\ \mathbf{g}_{21}\otimes(\mathbf{K}^{-1})_{nn}^{-1} & \mathbf{G}_{22}\otimes(\mathbf{K}^{-1})_{nn}^{-1} + \mathbf{R}_{22}\otimes\mathbf{I}_{nn} \end{bmatrix} \right).$$

Let $\mathbf{V}_c = \mathbf{G}_{22}\otimes(\mathbf{K}^{-1})_{nn}^{-1} + \mathbf{R}_{22}\otimes\mathbf{I}_{nn}$. Now, conditioning on observed values of both $\mathbf{u}_o$ from the training data and $\mathbf{y}_{n2}$ from the testing data, the expectation of $\mathbf{u}_{n1}$ would be:

$$E[\mathbf{u}_{n1}|\mathbf{y}_{n2},\mathbf{u}_o] = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\mathbf{u}_{o1} + (\mathbf{g}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\mathbf{V}_c^{-1}(\mathbf{y}_{n2} - \mathbf{X}_2\boldsymbol{\beta}_2 - \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\mathbf{u}_{o2}).$$

Using this, we form our prediction as:

$$\hat{\mathbf{u}}_{n1}^{(3)} = \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o1} + (\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\mathbf{y}_{n2} - \mathbf{X}_2\hat{\boldsymbol{\beta}}_2 - \mathbf{K}_{no}\mathbf{K}_{oo}^{-1}\hat{\mathbf{u}}_{o2}), \qquad (11)$$

where $\hat{\mathbf{u}}_{o1}$ and $\hat{\mathbf{u}}_{o2}$ are extracted from the calculation of $\hat{\mathbf{u}}_o$ for the CV1-style prediction. Plugging in the solutions for these values expands to:

$$\hat{\mathbf{u}}_{n1}^{(3)} = (\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\hat{\boldsymbol{\beta}}) + (\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}\left(\mathbf{y}_{n2} - \mathbf{X}_2\hat{\boldsymbol{\beta}}_2 - (\hat{\mathbf{G}}_2\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\hat{\boldsymbol{\beta}})\right).$$

Expectations of prediction accuracy

Now, we evaluate the expected correlation between a random sample of pairs of elements from our three candidate predictions and the predictand $\mathbf{y}_{n1}$. We compare these expected correlations with the expected “true” correlations with $\mathbf{u}_{n1}$. Below, let $\mathrm{var}(\mathbf{x})$ denote the variance of a random sample from a random vector $\mathbf{x}$; $\mathrm{cov}(\mathbf{x},\mathbf{y})$ and $\mathrm{cor}(\mathbf{x},\mathbf{y})$ denote the covariance and correlation between a random sample of pairs of elements from $\mathbf{x}$ and $\mathbf{y}$; and $\mathrm{Cov}(\mathbf{x},\mathbf{y})$ denote the covariance matrix between the vectors $\mathbf{x}$ and $\mathbf{y}$. We use the following results:

$$\mathrm{cor}(\mathbf{x},\mathbf{y}) = \frac{\mathrm{cov}(\mathbf{x},\mathbf{y})}{\sqrt{\mathrm{var}(\mathbf{x})\,\mathrm{var}(\mathbf{y})}} = \frac{\frac{1}{n-1}(\mathbf{x}-\boldsymbol{\mu}_x)'(\mathbf{y}-\boldsymbol{\mu}_y)}{\sqrt{\mathrm{var}(\mathbf{x})\,\mathrm{var}(\mathbf{y})}} = \frac{\frac{1}{n-1}\mathbf{x}'\mathbf{S}\mathbf{y}}{\sqrt{\mathrm{var}(\mathbf{x})\,\mathrm{var}(\mathbf{y})}}$$

where $\mathbf{S} = \mathbf{I} - \frac{1}{n}\mathbf{1}\mathbf{1}'$, and

$$E[\mathbf{x}'\mathbf{S}\mathbf{y}] = \mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\mathbf{x},\mathbf{y})) + \boldsymbol{\mu}_x'\mathbf{S}\boldsymbol{\mu}_y = \mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\mathbf{x},\mathbf{y}))$$

where $\mathrm{tr}(\cdot)$ is the matrix trace, and the second equality holds when $\boldsymbol{\mu}_x = \mathbf{0}$ and/or $\boldsymbol{\mu}_y = \mathbf{0}$. Therefore, the expected correlation between $\mathbf{x}$ and $\mathbf{y}$ is approximately:

$$E[\mathrm{cor}(\mathbf{x},\mathbf{y})] \approx \frac{\frac{1}{n-1}\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\mathbf{x},\mathbf{y}))}{\sqrt{E[\mathrm{var}(\mathbf{x})]\,E[\mathrm{var}(\mathbf{y})]}}.$$

Our goal with cross-validation is to estimate $\mathrm{cor}(\hat{u}_{n1}, u_{n1})$. Since we do not know $u_{n1}$, we approximate this correlation with $\mathrm{cor}(\hat{u}_{n1}, y_{n1})/\sqrt{h_1^2}$. The factor of $\sqrt{h_1^2}$ corrects the correlation for the larger variance of $y_{n1}$ relative to $u_{n1}$. Otherwise, any difference between these two correlations must be due to their numerators: $\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}, \mathbf{u}_{n1}))$ and $\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}, \mathbf{y}_{n1}))$. Thus, for each of the three prediction methods, we compare these two numerators to evaluate the accuracy and bias of the approximation.

Single trait predictions

The numerator of the expected correlation between $\hat{\mathbf{u}}_{n1}^{(1)}$ and the true genetic values $\mathbf{u}_{n1}$ is:

$$\begin{aligned}
\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(1)}, \mathbf{u}_{n1})) &= \mathrm{tr}\!\left(\mathbf{S}\,\mathrm{Cov}\!\left(\hat{g}_{11}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\hat{\boldsymbol{\beta}}_1),\, \mathbf{u}_{n1}\right)\right)\\
&= \mathrm{tr}\!\left(\hat{g}_{11}\mathbf{S}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}\,\mathrm{Cov}(\mathbf{u}_{o1} + \mathbf{e}_{o1},\, \mathbf{u}_{n1})\right)\\
&= \mathrm{tr}\!\left(\hat{g}_{11}\mathbf{S}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}(g_{11}\mathbf{K}_{on})\right)\\
&= \hat{g}_{11}g_{11}\,\mathrm{tr}\!\left(\mathbf{S}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}\mathbf{K}_{on}\right),
\end{aligned}$$

where we assume that $\hat{\boldsymbol{\beta}}_1 = \boldsymbol{\beta}_1$ and $\mathrm{Cov}(\mathbf{e}_{o1}, \mathbf{u}_{n1}) = \mathbf{0}$. The corresponding numerator of the expected correlation between $\hat{\mathbf{u}}_{n1}^{(1)}$ and the observed phenotypic values $\mathbf{y}_{n1}$ is:

$$\begin{aligned}
\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(1)}, \mathbf{y}_{n1})) &= \mathrm{tr}\!\left(\mathbf{S}\,\mathrm{Cov}\!\left(\hat{g}_{11}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}(\mathbf{y}_{o1} - \mathbf{X}_{o1}\hat{\boldsymbol{\beta}}_1),\, \mathbf{y}_{n1}\right)\right)\\
&= \mathrm{tr}\!\left(\hat{g}_{11}\mathbf{S}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}\,\mathrm{Cov}(\mathbf{u}_{o1} + \mathbf{e}_{o1},\, \mathbf{u}_{n1} + \mathbf{e}_{n1})\right)\\
&= \hat{g}_{11}g_{11}\,\mathrm{tr}\!\left(\mathbf{S}\mathbf{K}_{no}\hat{\mathbf{V}}_{o1}^{-1}\mathbf{K}_{on}\right),
\end{aligned}$$

where we additionally assume $\mathrm{Cov}(\mathbf{u}_{o1}, \mathbf{e}_{n1}) = \mathbf{0}$ and $\mathrm{Cov}(\mathbf{e}_{o1}, \mathbf{e}_{n1}) = \mathbf{0}$. Therefore, the numerators are the same, and $\mathrm{cor}(\hat{u}_{n1}^{(1)}, y_{n1})/\sqrt{\hat{h}_1^2}$ is a consistent estimator of $\mathrm{cor}(\hat{u}_{n1}^{(1)}, u_{n1})$.

CV1-style multi-trait predictions

The numerator of the expected correlation between $\hat{\mathbf{u}}_{n1}^{(2)}$ and the true genetic values $\mathbf{u}_{n1}$ is:

$$\begin{aligned}
\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(2)}, \mathbf{u}_{n1})) &= \mathrm{tr}\!\left(\mathbf{S}\,\mathrm{Cov}\!\left((\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\hat{\boldsymbol{\beta}}),\, \mathbf{u}_{n1}\right)\right)\\
&= \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}\,\mathrm{Cov}(\mathbf{u}_o + \mathbf{e}_o,\, \mathbf{u}_{n1})\right)\\
&= \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right),
\end{aligned}$$

again assuming $\hat{\boldsymbol{\beta}} = \boldsymbol{\beta}$ and now also $\mathrm{Cov}(\mathbf{e}_o, \mathbf{u}_{n1}) = \mathbf{0}$. The corresponding numerator of the expected correlation between $\hat{\mathbf{u}}_{n1}^{(2)}$ and the observed phenotypic values $\mathbf{y}_{n1}$ is:

$$\begin{aligned}
\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(2)}, \mathbf{y}_{n1})) &= \mathrm{tr}\!\left(\mathbf{S}\,\mathrm{Cov}\!\left((\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{y}_o - \mathbf{X}_o\hat{\boldsymbol{\beta}}),\, \mathbf{y}_{n1}\right)\right)\\
&= \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}\,\mathrm{Cov}(\mathbf{u}_o + \mathbf{e}_o,\, \mathbf{u}_{n1} + \mathbf{e}_{n1})\right)\\
&= \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right),
\end{aligned}$$

where we additionally assume $\mathrm{Cov}(\mathbf{u}_o, \mathbf{e}_{n1}) = \mathbf{0}$ and $\mathrm{Cov}(\mathbf{e}_o, \mathbf{e}_{n1}) = \mathbf{0}$. Therefore, the numerators are the same, and $\mathrm{cor}(\hat{u}_{n1}^{(2)}, y_{n1})/\sqrt{\hat{h}_1^2}$ is a consistent estimator of $\mathrm{cor}(\hat{u}_{n1}^{(2)}, u_{n1})$.

CV2-style multi-trait predictions

The numerator of the expected correlation between $\hat{\mathbf{u}}_{n1}^{(3)}$ and the true genetic values $\mathbf{u}_{n1}$ is:

$$\begin{aligned}
\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(3)}, \mathbf{u}_{n1})) ={}& \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}\,\mathrm{Cov}(\mathbf{u}_o + \mathbf{e}_o,\, \mathbf{u}_{n1})\right)\\
&- \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\hat{\mathbf{G}}_2\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}\,\mathrm{Cov}(\mathbf{u}_o + \mathbf{e}_o,\, \mathbf{u}_{n1})\right)\\
&+ \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}\,\mathrm{Cov}(\mathbf{u}_{n2} + \mathbf{e}_{n2},\, \mathbf{u}_{n1})\right)\\
={}& \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right)\\
&- \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\hat{\mathbf{G}}_2\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right)\\
&+ \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\mathbf{g}_{21}\otimes\mathbf{K}_{nn})\right),
\end{aligned}$$

again assuming $\hat{\boldsymbol{\beta}} = \boldsymbol{\beta}$, $\mathrm{Cov}(\mathbf{e}_o, \mathbf{u}_{n1}) = \mathbf{0}$, and $\mathrm{Cov}(\mathbf{e}_{n2}, \mathbf{u}_{n1}) = \mathbf{0}$. From this, we can see the potential benefit of the CV2-style method:

$$\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(3)}, \mathbf{u}_{n1})) - \mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(2)}, \mathbf{u}_{n1})) = \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}\left[(\mathbf{g}_{21}\otimes\mathbf{K}_{nn}) - (\hat{\mathbf{G}}_2\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right]\right),$$

which is generally (though not necessarily) positive. This means that $\mathrm{cor}(\hat{u}_{n1}^{(3)}, u_{n1})$ is generally greater than $\mathrm{cor}(\hat{u}_{n1}^{(2)}, u_{n1})$.

The corresponding numerator of the expected correlation between $\hat{\mathbf{u}}_{n1}^{(3)}$ and the observed phenotypic values $\mathbf{y}_{n1}$ is:

$$\begin{aligned}
\mathrm{tr}(\mathbf{S}\,\mathrm{Cov}(\hat{\mathbf{u}}_{n1}^{(3)}, \mathbf{y}_{n1})) ={}& \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}\,\mathrm{Cov}(\mathbf{u}_o + \mathbf{e}_o,\, \mathbf{u}_{n1} + \mathbf{e}_{n1})\right)\\
&- \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\hat{\mathbf{G}}_2\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}\,\mathrm{Cov}(\mathbf{u}_o + \mathbf{e}_o,\, \mathbf{u}_{n1} + \mathbf{e}_{n1})\right)\\
&+ \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}\,\mathrm{Cov}(\mathbf{u}_{n2} + \mathbf{e}_{n2},\, \mathbf{u}_{n1} + \mathbf{e}_{n1})\right)\\
={}& \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_1\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right)\\
&- \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\hat{\mathbf{G}}_2\otimes\mathbf{K}_{no})\hat{\mathbf{V}}_o^{-1}(\mathbf{g}_1'\otimes\mathbf{K}_{on})\right)\\
&+ \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\mathbf{g}_{21}\otimes\mathbf{K}_{nn})\right) + \mathrm{tr}\!\left(\mathbf{S}(\hat{\mathbf{g}}_{12}\otimes(\mathbf{K}^{-1})_{nn}^{-1})\hat{\mathbf{V}}_c^{-1}(\mathbf{r}_{21}\otimes\mathbf{I}_{nn})\right).
\end{aligned}$$

From this, we see that the numerator of the correlation $\mathrm{cor}(\hat{u}_{n1}^{(3)}, y_{n1})$ is not equal to that of $\mathrm{cor}(\hat{u}_{n1}^{(3)}, u_{n1})$:

$$
\operatorname{tr}\!\big(S\operatorname{Cov}(\hat{u}^{(3)}_{n1}, y_{n1})\big) - \operatorname{tr}\!\big(S\operatorname{Cov}(\hat{u}^{(3)}_{n1}, u_{n1})\big) = \operatorname{tr}\!\big(S(\hat{g}_{12}\otimes[(K^{-1})_{nn}]^{-1})\hat{V}_c^{-1}(r_{21}\otimes I_{nn})\big).
$$

If $p = 1$, then $\hat{g}_{12}$ and $r_{12}$ are scalars and this excess covariance is approximately $n\,\hat{g}_{12}\, r_{12}$.
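The following sketch demonstrates this inflation numerically. It is our illustration with assumed-known variance components, and it computes $\hat{u}^{(3)}_{n1}$ as the equivalent conditional expectation $E[u_{n1} \mid y_o, y_{n2}]$ via one joint linear solve, rather than through the three-term formula above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_o, n_n = 300, 100
n = n_o + n_n
M = rng.binomial(2, 0.5, (n, 2000)).astype(float)
M = (M - M.mean(0)) / M.std(0)
K = M @ M.T / M.shape[1]

G = np.array([[0.3, 0.2], [0.2, 0.5]])    # genetic (co)variances
R = np.array([[0.7, 0.35], [0.35, 0.5]])  # residual (co)variances, r12 > 0
h1 = np.sqrt(G[0, 0] / (G[0, 0] + R[0, 0]))

o, t = slice(0, n_o), slice(n_o, n)
Lg = np.linalg.cholesky(np.kron(G, K) + 1e-8 * np.eye(2 * n))
Lr = np.linalg.cholesky(np.kron(R, np.eye(n)))

acc_u, acc_y = [], []
for _ in range(100):
    u = Lg @ rng.standard_normal(2 * n)    # trait-major stacking: [u_1; u_2]
    y = u + Lr @ rng.standard_normal(2 * n)
    u1, u2 = u[:n], u[n:]
    y1, y2 = y[:n], y[n:]

    # Observed vector for CV2: training phenotypes for both traits,
    # plus the testing individuals' secondary-trait phenotypes.
    obs = np.concatenate([y1[o], y2[o], y2[t]])

    # Cov(u_n1, obs) and Var(obs), assembled block by block.
    C = np.hstack([G[0, 0] * K[t, o], G[0, 1] * K[t, o], G[0, 1] * K[t, t]])
    Voo = np.kron(G, K[o, o]) + np.kron(R, np.eye(n_o))
    V12 = np.vstack([G[0, 1] * K[o, t], G[1, 1] * K[o, t]])
    V22 = G[1, 1] * K[t, t] + R[1, 1] * np.eye(n_n)
    V = np.block([[Voo, V12], [V12.T, V22]])

    u_hat3 = C @ np.linalg.solve(V, obs)   # E[u_n1 | y_o, y_n2]
    acc_u.append(np.corrcoef(u_hat3, u1[t])[0, 1])
    acc_y.append(np.corrcoef(u_hat3, y1[t])[0, 1] / h1)

# The phenotype-based estimate is inflated relative to the true accuracy:
print(f"mean cor(u_hat, u_n1):    {np.mean(acc_u):.3f}")
print(f"mean cor(u_hat, y_n1)/h1: {np.mean(acc_y):.3f}")
```

Setting the residual covariance `R[0, 1]` to zero should remove the gap between the two printed values, matching the $r_{21}$ term derived above.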

CV2* approach

In our new CV2* cross-validation approach, we replace $y_{n1}$ with $y_{x1}$: the phenotypes of a new set of individuals ($x$) that are relatives of the testing partition and were not part of the training partition. Let $K_{xx}$ be the genetic relationships among these $n_x$ individuals, $K_{xo}$ their genetic relationships with the training partition, and $K_{nx}$ the relationships between the testing partition and these new individuals. The numerator of the expected correlation $\operatorname{cor}(\hat{u}^{(3)}_{n1}, y_{x1})/h_1$ is:

$$
\begin{aligned}
\operatorname{tr}\!\big(S\operatorname{Cov}(\hat{u}^{(3)}_{n1}, y_{x1})\big)
&= \operatorname{tr}\!\Big(S\Big[\operatorname{Cov}\!\big((\hat{g}_1\otimes K_{no})\hat{V}_o^{-1}(y_o-X_o\hat{\beta}),\; u_{x1}+e_{x1}\big)\\
&\qquad - \operatorname{Cov}\!\big((\hat{g}_{12}\otimes[(K^{-1})_{nn}]^{-1})\hat{V}_c^{-1}(\hat{G}_2\otimes K_{no})\hat{V}_o^{-1}(y_o-X_o\hat{\beta}),\; u_{x1}+e_{x1}\big)\\
&\qquad + \operatorname{Cov}\!\big((\hat{g}_{12}\otimes[(K^{-1})_{nn}]^{-1})\hat{V}_c^{-1}(y_{n2}-X_2\hat{\beta}_2),\; u_{x1}+e_{x1}\big)\Big]\Big)\\
&= \operatorname{tr}\!\big(S(\hat{g}_1\otimes K_{no})\hat{V}_o^{-1}(g_1\otimes K_{ox})\big)\\
&\qquad - \operatorname{tr}\!\big(S(\hat{g}_{12}\otimes[(K^{-1})_{nn}]^{-1})\hat{V}_c^{-1}(\hat{G}_2\otimes K_{no})\hat{V}_o^{-1}(g_1\otimes K_{ox})\big)\\
&\qquad + \operatorname{tr}\!\big(S(\hat{g}_{12}\otimes[(K^{-1})_{nn}]^{-1})\hat{V}_c^{-1}(g_{21}\otimes K_{nx})\big).
\end{aligned}
$$

If these new individuals are clones of the original testing set, then $K_{nx} = K_{nn}$ and $K_{ox} = K_{on}$, so $\operatorname{tr}(S\operatorname{Cov}(\hat{u}^{(3)}_{n1}, y_{x1})) = \operatorname{tr}(S\operatorname{Cov}(\hat{u}^{(3)}_{n1}, u_{n1}))$. However, if clones are not available, this equality will not hold exactly.
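For concreteness, here is a hypothetical sketch of how the relationship blocks used above could be assembled from marker data; the construction of the "relatives" is a crude stand-in for illustration, not the simulation design used in this paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_o, n_n, m = 300, 100, 2000
Mo = rng.binomial(2, 0.5, (n_o, m)).astype(float)  # training genotypes
Mn = rng.binomial(2, 0.5, (n_n, m)).astype(float)  # testing genotypes
Mx = Mn.copy()                                     # one relative per test line
swap = rng.random((n_n, m)) < 0.5                  # relatives share ~half their markers
Mx[swap] = rng.binomial(2, 0.5, (n_n, m)).astype(float)[swap]

P = np.vstack([Mo, Mn, Mx])
P = (P - P.mean(0)) / P.std(0)                     # center/scale on the combined panel
Mo, Mn, Mx = P[:n_o], P[n_o:n_o + n_n], P[n_o + n_n:]

Kon = Mo @ Mn.T / m   # training vs. testing
Kox = Mo @ Mx.T / m   # training vs. relatives
Knx = Mn @ Mx.T / m   # testing vs. relatives (diagonal: each test-relative pair)
Kxx = Mx @ Mx.T / m   # among relatives, needed for Var(y_x1)
# If Mx were identical to Mn (clones), Knx = Kxx = Knn and Kox = Kon.
```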

Given these analytical results for the numerators of the expected correlations, the correlations themselves can be estimated by also calculating the expected variances of $\hat{u}_{n1}$ and of $u_{n1}$ or $y_{n1}$. We do not go through these calculations in full, as they follow directly from the calculations given above.
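For example (a step we spell out here under the same assumptions, $\hat{\beta}_1 = \beta_1$ and $\hat{V}_{o11} = V_{o11}$), the variance of the single-trait predictor reduces to:

$$\operatorname{Var}(\hat{u}^{(1)}_{n1}) = \hat{g}_{11} K_{no}\hat{V}_{o11}^{-1}\operatorname{Var}(y_{o1})\hat{V}_{o11}^{-1} K_{on}\,\hat{g}_{11} = \hat{g}_{11}^2\, K_{no}\hat{V}_{o11}^{-1} K_{on},$$

which, together with $\operatorname{Var}(u_{n1}) = g_{11}K_{nn}$ and $\operatorname{Var}(y_{n1}) = g_{11}K_{nn} + r_{11}I$, supplies the denominators of the expected correlations.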

Footnotes

Supplemental material available at FigShare: https://doi.org/10.25387/g3.9762899.

Communicating editor: G. de los Campos

Literature Cited

  1. Amer P. R., and Banos G., 2010. Implications of avoiding overlap between training and testing data sets when evaluating genomic predictions of genetic merit. J. Dairy Sci. 93: 3320–3330. doi:10.3168/jds.2009-2845
  2. Bernardo R., 2002. Breeding for Quantitative Traits in Plants. Stemma Press, Woodbury, MN.
  3. Burgueño J., de los Campos G., Weigel K., and Crossa J., 2012. Genomic Prediction of Breeding Values when Modeling Genotype × Environment Interaction using Pedigree and Dense Molecular Markers. Crop Sci. 52: 707. doi:10.2135/cropsci2011.06.0299
  4. Calus M. P., and Veerkamp R. F., 2011. Accuracy of multi-trait genomic selection using different methods. Genet. Sel. Evol. 43: 26. doi:10.1186/1297-9686-43-26
  5. Cheng H., Fernando R., and Garrick D., 2018. JWAS: Julia implementation of whole-genome analysis software, in Proceedings of the World Congress on Genetics Applied to Livestock Production, Vol. 11.
  6. Crossa J., Pérez-Rodríguez P., Cuevas J., Montesinos-López O., Jarquín D., et al., 2017. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives. Trends Plant Sci. 22: 961–975. doi:10.1016/j.tplants.2017.08.011
  7. Daetwyler H. D., Calus M. P. L., Pong-Wong R., de los Campos G., and Hickey J. M., 2013. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking. Genetics 193: 347–365. doi:10.1534/genetics.112.147983
  8. Dahl A., Iotchkova V., Baud A., Johansson Å., Gyllensten U., et al., 2016. A multiple-phenotype imputation method for genetic studies. Nat. Genet. 48: 466–472. doi:10.1038/ng.3513
  9. de los Campos G., Hickey J. M., Pong-Wong R., Daetwyler H. D., and Calus M. P. L., 2013. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding. Genetics 193: 327–345. doi:10.1534/genetics.112.143313
  10. Falconer D. S., and Mackay T. F. C., 1996. Introduction to Quantitative Genetics, Ed. 4. Pearson, London, UK.
  11. Fernandes S. B., Dias K. O. G., Ferreira D. F., and Brown P. J., 2017. Efficiency of multi-trait, indirect, and trait-assisted genomic selection for improvement of biomass sorghum. Theor. Appl. Genet. 131: 747–755.
  12. Gianola D., and Schön C. C., 2016. Cross-Validation Without Doing Cross-Validation in Genome-Enabled Prediction. G3: Genes|Genomes|Genetics 6: 3107–3128.
  13. Hastie T., Tibshirani R., and Friedman J., 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Ed. 2. Springer, New York, NY. doi:10.1007/978-0-387-84858-7
  14. Hayes B. J., Bowman P. J., Chamberlain A. J., and Goddard M. E., 2009. Invited review: Genomic selection in dairy cattle: Progress and challenges. J. Dairy Sci. 92: 433–443. doi:10.3168/jds.2008-1646
  15. Heslot N., Yang H.-P., Sorrells M. E., and Jannink J.-L., 2012. Genomic Selection in Plant Breeding: A Comparison of Models. Crop Sci. 52: 146–160. doi:10.2135/cropsci2011.06.0297
  16. Hothorn T., Leisch F., Zeileis A., and Hornik K., 2005. The design and analysis of benchmark experiments. J. Comput. Graph. Stat. 14: 675–699. doi:10.1198/106186005X59630
  17. Jia Y., and Jannink J.-L., 2012. Multiple-Trait Genomic Selection Methods Increase Genetic Value Prediction Accuracy. Genetics 192: 1513–1522. doi:10.1534/genetics.112.144246
  18. Kaufman S., Rosset S., Perlich C., and Stitelman O., 2012. Leakage in data mining: Formulation, detection, and avoidance. ACM Trans. Knowl. Discov. Data 6: 1–21. doi:10.1145/2382577.2382579
  19. Lado B., Vázquez D., Quincke M., Silva P., Aguilar I., et al., 2018. Resource allocation optimization with multi-trait genomic prediction for bread wheat (Triticum aestivum L.) baking quality. Theor. Appl. Genet. 131: 2719–2731. doi:10.1007/s00122-018-3186-3
  20. Legarra A., and Reverter A., 2018. Semi-parametric estimates of population accuracy and bias of predictions of breeding values and future phenotypes using the LR method. Genet. Sel. Evol. 50: 53. doi:10.1186/s12711-018-0426-6
  21. Lopez-Cruz M., Crossa J., Bonnett D., Dreisigacker S., Poland J., et al., 2015. Increased Prediction Accuracy in Wheat Breeding Trials Using a Marker × Environment Interaction Genomic Selection Model. G3: Genes|Genomes|Genetics 5: 569–582.
  22. Lopez-Cruz M., Olson E., Rovere G., Crossa J., Dreisigacker S., et al., 2019. Genetic image-processing using regularized selection indices. bioRxiv. doi:10.1101/625251
  23. Meuwissen T. H., Hayes B. J., and Goddard M. E., 2001. Prediction of total genetic value using genome-wide dense marker maps. Genetics 157: 1819–1829.
  24. Montesinos-López O. A., Montesinos-López A., Crossa J., Gianola D., Hernández-Suárez C. M., et al., 2018. Multi-trait, Multi-environment Deep Learning Modeling for Genomic-Enabled Prediction of Plant Traits. G3: Genes|Genomes|Genetics 8: 3829–3840. doi:10.1534/g3.118.200728
  25. Pszczola M., Veerkamp R. F., de Haas Y., Wall E., Strabel T., et al., 2013. Effect of predictor traits on accuracy of genomic breeding values for feed intake based on a limited cow reference population. Animal 7: 1759–1768.
  26. Rutkoski J., Poland J., Mondal S., Autrique E., Pérez L. G., et al., 2016. Canopy temperature and vegetation indices from high-throughput phenotyping improve accuracy of pedigree and genomic selection for grain yield in wheat. G3: Genes|Genomes|Genetics 6: 2799–2808.
  27. Spiliopoulou A., Nagy R., Bermingham M. L., Huffman J. E., Hayward C., et al., 2015. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models. Hum. Mol. Genet. 24: 4167–4182. doi:10.1093/hmg/ddv145
  28. Thompson R., and Meyer K., 1986. A review of theoretical aspects in the estimation of breeding values for multi-trait selection. Livest. Prod. Sci. 15: 299–313. doi:10.1016/0301-6226(86)90071-0
  29. Utz H. F., Melchinger A. E., and Schön C. C., 2000. Bias and Sampling Error of the Estimated Proportion of Genotypic Variance Explained by Quantitative Trait Loci Determined From Experimental Data in Maize Using Cross Validation and Validation With Independent Samples. Genetics 154: 1839–1849.
  30. Ziyatdinov A., Vazquez-Santiago M., Brunel H., Martinez-Perez A., Aschard H., et al., 2018. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals. BMC Bioinformatics 19: 68. doi:10.1186/s12859-018-2057-x


Data Availability Statement

Scripts for running all simulations and analyses described here are available at https://github.com/deruncie/multiTrait_crossValidation_scripts. Supplemental material available at FigShare: https://doi.org/10.25387/g3.9762899.

