Genet Sel Evol. 2019 Jun 25;51:30. doi: 10.1186/s12711-019-0472-8

A second-level diagonal preconditioner for single-step SNPBLUP

Jeremie Vandenplas 1, Mario P L Calus 1, Herwin Eding 2, Cornelis Vuik 3
PMCID: PMC6593613  PMID: 31238880

Abstract

Background

The preconditioned conjugate gradient (PCG) method is an iterative solver for systems of linear equations that is commonly used in animal breeding. However, the PCG method has been shown to encounter convergence issues when applied to single-step single nucleotide polymorphism BLUP (ssSNPBLUP) models. Recently, we proposed a deflated PCG (DPCG) method for solving ssSNPBLUP efficiently. The DPCG method introduces a second-level preconditioner that annihilates the effect of the largest unfavourable eigenvalues of the ssSNPBLUP preconditioned coefficient matrix on the convergence of the iterative solver. While it solves the convergence issues of ssSNPBLUP, the DPCG method requires substantial additional computations in comparison to the PCG method. Accordingly, the aim of this study was to develop a second-level preconditioner that decreases the largest eigenvalues of the ssSNPBLUP preconditioned coefficient matrix at a lower cost than the DPCG method, and to compare its performance to that of the (D)PCG methods applied to two different ssSNPBLUP models.

Results

Based on the properties of the ssSNPBLUP preconditioned coefficient matrix, we proposed a second-level diagonal preconditioner that decreases the largest eigenvalues of the ssSNPBLUP preconditioned coefficient matrix under some conditions. This proposed second-level preconditioner is easy to implement in current software and does not result in additional computing costs, as it can be combined with the commonly used (block-)diagonal preconditioner. Tested on two different datasets and with two different ssSNPBLUP models, the second-level diagonal preconditioner led to a decrease of the largest eigenvalues and of the condition number of the preconditioned coefficient matrices. This resulted in an improved convergence pattern of the iterative solver. For the largest dataset, the convergence of the PCG method with the proposed second-level diagonal preconditioner was slower than that of the DPCG method, but the PCG method performed better than the DPCG method in terms of total computing time.

Conclusions

The proposed second-level diagonal preconditioner can improve the convergence of the (D)PCG methods applied to two ssSNPBLUP models. Based on our results, the PCG method combined with the proposed second-level diagonal preconditioner seems to be more efficient than the DPCG method in solving ssSNPBLUP. However, the optimal combination of ssSNPBLUP and solver will most likely be situation-dependent.

Electronic supplementary material

The online version of this article (10.1186/s12711-019-0472-8) contains supplementary material, which is available to authorized users.

Background

Since its introduction in the late 1990s [1], the preconditioned conjugate gradient (PCG) method has been the method of choice to solve breeding value estimation models in animal breeding. Likewise, the systems of linear equations of the different single-step single nucleotide polymorphism BLUP (ssSNPBLUP) models are usually solved with the PCG method with a diagonal (also called Jacobi) or block-diagonal preconditioner [2–4]. Several studies [3–6] observed that the PCG method with such a preconditioner applied to ssSNPBLUP is associated with slower convergence. By investigating the reasons for these convergence issues, Vandenplas et al. [4] observed that the largest eigenvalues of the preconditioned coefficient matrix of the ssSNPBLUP proposed by Mäntysaari and Strandén [7], hereafter referred to as ssSNPBLUP_MS, resulted from the presence of the equations for single nucleotide polymorphism (SNP) effects. In their study, applying a deflated PCG (DPCG) method to ssSNPBLUP_MS solved the convergence issues [4]. In comparison to the PCG method, the DPCG method introduces a second-level preconditioner that annihilates the effect of the largest eigenvalues of the preconditioned coefficient matrix of ssSNPBLUP_MS on the convergence of the iterative solver. After deflation, the largest eigenvalues of the ssSNPBLUP_MS preconditioned deflated coefficient matrix were reduced and close to those of single-step genomic BLUP (ssGBLUP). As a result, the associated convergence patterns of ssSNPBLUP were, at least, similar to those of ssGBLUP [4].

While it solves the convergence issues associated with ssSNPBLUP, the DPCG method requires the computation and storage of the so-called Galerkin matrix, which is a dense matrix that could be computationally expensive for very large evaluations and that requires some effort to be implemented in existing software. In addition, as implemented in Vandenplas et al. [4], each iteration of the DPCG method requires two multiplications of the coefficient matrix by a vector, instead of one multiplication for the PCG method. As a result, computing time per iteration with the DPCG method is roughly twice as long as with the PCG method. Accordingly, it is of interest to develop a second-level preconditioner that would reduce the largest eigenvalues of the preconditioned coefficient matrix of ssSNPBLUP at a lower cost than the DPCG method. As such, the aim of this study was to develop a second-level preconditioner that would decrease the unfavourable largest eigenvalues of the preconditioned coefficient matrix of ssSNPBLUP and to compare its performance to the DPCG method. The performance of the proposed second-level preconditioner was tested for two different ssSNPBLUP models.

Methods

Data

The two datasets used in this study, hereafter referred to as the reduced and field datasets, were provided by CRV BV (The Netherlands) and are the same as in Vandenplas et al. [4], in which these two datasets are described in detail.

Briefly, for the reduced dataset, the data file included 61,592 ovum pick-up sessions from 4109 animals and the pedigree included 37,021 animals. The 50K SNP genotypes of 6169 animals without phenotypes were available. A total of 9994 segregating SNPs with a minor allele frequency higher than or equal to 0.01 were randomly sampled from the 50K SNP genotypes. The number of SNPs was limited to 9994 to facilitate the computation and the analysis of the left-hand side of the mixed model equations. The univariate mixed model included random effects (additive genetic, permanent environmental and residual), fixed co-variables (heterosis and recombination) and fixed cross-classified effects (herd-year, year-month, parity, age in months, technician, assistant, interval, gestation, session and protocol) [8].

For the field dataset, the data file included 3,882,772 records with a single record per animal. The pedigree included 6,130,519 animals. The genotypes, including 37,995 segregating SNPs, of 15,205 animals without phenotypes and of 75,758 animals with phenotypes were available. The four-trait mixed model included random effects (additive genetic and residual), fixed co-variables (heterosis and recombination) and fixed cross-classified effects (herd × year × season at classification, age at classification, lactation stage at classification, milk yield and month of calving) [9, 10].

Single-step SNPBLUP models

In this study, we investigated two ssSNPBLUP systems of linear equations. The first system was proposed by Mäntysaari and Strandén [7] (ssSNPBLUP_MS). This system was also investigated in Vandenplas et al. [4]. The standard multivariate model associated with the ssSNPBLUP_MS system of equations can be written as:

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \begin{bmatrix} \mathbf{W}_n & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{W}_g & \mathbf{W}_g\mathbf{M}_z \end{bmatrix}\begin{bmatrix} \mathbf{u}_n \\ \mathbf{a}_g \\ \mathbf{g} \end{bmatrix} + \mathbf{e},$$

where the subscripts $g$ and $n$ refer to genotyped and non-genotyped animals, respectively, $\mathbf{y}$ is the vector of records, $\boldsymbol{\beta}$ is the vector of fixed effects, $\mathbf{u}_n$ is the vector of additive genetic effects for non-genotyped animals, $\mathbf{a}_g$ is the vector of residual polygenic effects for genotyped animals, $\mathbf{g}$ is the vector of SNP effects and $\mathbf{e}$ is the vector of residuals. The matrices $\mathbf{X}$, $\mathbf{W}_n$ and $\mathbf{W}_g$ are incidence matrices relating records to their corresponding effects. The matrix $\mathbf{M}_z$ is equal to $\mathbf{M}_z = \mathbf{I}_t \otimes \mathbf{Z}$, with $\mathbf{I}_t$ being an identity matrix with size equal to the number of traits $t$, and the matrix $\mathbf{Z}$ containing the SNP genotypes (coded as 0 for one homozygous genotype, 1 for the heterozygous genotype, or 2 for the alternate homozygous genotype) centred by their observed means.
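As an illustration only, the following is a minimal sketch (Python/NumPy, with a small hypothetical genotype array; names are not from the original software) of how such a centred genotype matrix $\mathbf{Z}$, and the scalar $m$ used later in the (co)variance structure of the SNP effects, could be computed:

```python
import numpy as np

# Hypothetical genotype matrix: rows = genotyped animals, columns = SNPs,
# coded as 0, 1 or 2 copies of the counted allele.
genotypes = np.array([[0, 2, 1],
                      [1, 1, 0],
                      [2, 0, 1]], dtype=float)

# Centre each SNP column by its observed mean (the observed mean equals 2 p_o,
# with p_o the observed allele frequency of SNP o).
column_means = genotypes.mean(axis=0)
Z = genotypes - column_means

# Scalar m = 2 * sum_o p_o (1 - p_o), as defined for the SNP (co)variance structure.
p = column_means / 2.0
m = 2.0 * np.sum(p * (1.0 - p))
```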

The system of linear equations for multivariate ssSNPBLUP_MS can be written as follows:

$\mathbf{C}_{MS}\mathbf{x}_{MS} = \mathbf{b}_{MS}$, where

$$\mathbf{C}_{MS} = \begin{bmatrix} \mathbf{X}'\mathbf{R}^{-1}\mathbf{X} & \mathbf{X}_n'\mathbf{R}_n^{-1}\mathbf{W}_n & \mathbf{X}_g'\mathbf{R}_g^{-1}\mathbf{W}_g & \mathbf{X}_g'\mathbf{R}_g^{-1}\mathbf{W}_g\mathbf{M}_z \\ \mathbf{W}_n'\mathbf{R}_n^{-1}\mathbf{X}_n & \mathbf{W}_n'\mathbf{R}_n^{-1}\mathbf{W}_n + \boldsymbol{\Sigma}_{MS}^{11} & \boldsymbol{\Sigma}_{MS}^{12} & \boldsymbol{\Sigma}_{MS}^{13} \\ \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{X}_g & \boldsymbol{\Sigma}_{MS}^{21} & \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{W}_g + \boldsymbol{\Sigma}_{MS}^{22} & \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{W}_g\mathbf{M}_z + \boldsymbol{\Sigma}_{MS}^{23} \\ \mathbf{M}_z'\mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{X}_g & \boldsymbol{\Sigma}_{MS}^{31} & \mathbf{M}_z'\mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{W}_g + \boldsymbol{\Sigma}_{MS}^{32} & \mathbf{M}_z'\mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{W}_g\mathbf{M}_z + \boldsymbol{\Sigma}_{MS}^{33} \end{bmatrix}$$

is a symmetric positive (semi-)definite coefficient matrix, $\mathbf{x}_{MS} = \begin{bmatrix} \hat{\boldsymbol{\beta}}' & \hat{\mathbf{u}}_n' & \hat{\mathbf{a}}_g' & \hat{\mathbf{g}}' \end{bmatrix}'$ is the vector of solutions, and

$$\mathbf{b}_{MS} = \begin{bmatrix} \mathbf{X}'\mathbf{R}^{-1}\mathbf{y} \\ \mathbf{W}_n'\mathbf{R}_n^{-1}\mathbf{y}_n \\ \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{y}_g \\ \mathbf{M}_z'\mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{y}_g \end{bmatrix}$$

is the right-hand side with

$$\mathbf{R}^{-1} = \begin{bmatrix} \mathbf{R}_n^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{R}_g^{-1} \end{bmatrix}$$

being the inverse of the residual (co)variance structure matrix. The matrix $\boldsymbol{\Sigma}_{MS}^{-1}$ is the inverse of the covariance matrix associated with $\mathbf{u}_n$, $\mathbf{a}_g$ and $\mathbf{g}$, and is equal to

$$\boldsymbol{\Sigma}_{MS}^{-1} = \begin{bmatrix} \boldsymbol{\Sigma}_{MS}^{11} & \boldsymbol{\Sigma}_{MS}^{12} & \boldsymbol{\Sigma}_{MS}^{13} \\ \boldsymbol{\Sigma}_{MS}^{21} & \boldsymbol{\Sigma}_{MS}^{22} & \boldsymbol{\Sigma}_{MS}^{23} \\ \boldsymbol{\Sigma}_{MS}^{31} & \boldsymbol{\Sigma}_{MS}^{32} & \boldsymbol{\Sigma}_{MS}^{33} \end{bmatrix} = \mathbf{G}_0^{-1} \otimes \begin{bmatrix} \mathbf{A}^{nn} & \mathbf{A}^{ng} & \mathbf{A}^{ng}\mathbf{Z} \\ \mathbf{A}^{gn} & \frac{1}{w}\mathbf{A}^{gg} + \left(1 - \frac{1}{w}\right)\mathbf{Q} & \mathbf{Q}\mathbf{Z} \\ \mathbf{Z}'\mathbf{A}^{gn} & \mathbf{Z}'\mathbf{Q} & \mathbf{Z}'\mathbf{Q}\mathbf{Z} + \frac{m}{1-w}\mathbf{I} \end{bmatrix}.$$

The matrix $\mathbf{Q}$ is equal to $\mathbf{Q} = \mathbf{A}^{gn}\left(\mathbf{A}^{nn}\right)^{-1}\mathbf{A}^{ng}$, with

$$\mathbf{A}^{-1} = \begin{bmatrix} \mathbf{A}^{nn} & \mathbf{A}^{ng} \\ \mathbf{A}^{gn} & \mathbf{A}^{gg} \end{bmatrix}$$

being the inverse of the pedigree relationship matrix. The parameter $w$ is the proportion of variance (due to additive genetic effects) considered as residual polygenic effects, and $m = 2\sum_o p_o\left(1 - p_o\right)$, with $p_o$ being the allele frequency of the $o$th SNP.

The second system of linear equations investigated in this study is the system of equations proposed by Gengler et al. [11] and Liu et al. [5], hereafter referred to as ssSNPBLUP_Liu. The system of linear equations for a multivariate ssSNPBLUP_Liu can be written as follows:

$$\mathbf{C}_L\mathbf{x}_L = \mathbf{b}_L$$

where

$$\mathbf{C}_L = \begin{bmatrix} \mathbf{X}'\mathbf{R}^{-1}\mathbf{X} & \mathbf{X}_n'\mathbf{R}_n^{-1}\mathbf{W}_n & \mathbf{X}_g'\mathbf{R}_g^{-1}\mathbf{W}_g & \mathbf{0} \\ \mathbf{W}_n'\mathbf{R}_n^{-1}\mathbf{X}_n & \mathbf{W}_n'\mathbf{R}_n^{-1}\mathbf{W}_n + \boldsymbol{\Sigma}_L^{11} & \boldsymbol{\Sigma}_L^{12} & \boldsymbol{\Sigma}_L^{13} \\ \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{X}_g & \boldsymbol{\Sigma}_L^{21} & \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{W}_g + \boldsymbol{\Sigma}_L^{22} & \boldsymbol{\Sigma}_L^{23} \\ \mathbf{0} & \boldsymbol{\Sigma}_L^{31} & \boldsymbol{\Sigma}_L^{32} & \boldsymbol{\Sigma}_L^{33} \end{bmatrix},$$

$$\mathbf{x}_L = \begin{bmatrix} \hat{\boldsymbol{\beta}}' & \hat{\mathbf{u}}_n' & \hat{\mathbf{u}}_g' & \hat{\mathbf{g}}' \end{bmatrix}',$$

and

$$\mathbf{b}_L = \begin{bmatrix} \mathbf{X}'\mathbf{R}^{-1}\mathbf{y} \\ \mathbf{W}_n'\mathbf{R}_n^{-1}\mathbf{y}_n \\ \mathbf{W}_g'\mathbf{R}_g^{-1}\mathbf{y}_g \\ \mathbf{0} \end{bmatrix}.$$

The matrix $\boldsymbol{\Sigma}_L^{-1}$ is equal to

$$\boldsymbol{\Sigma}_L^{-1} = \begin{bmatrix} \boldsymbol{\Sigma}_L^{11} & \boldsymbol{\Sigma}_L^{12} & \boldsymbol{\Sigma}_L^{13} \\ \boldsymbol{\Sigma}_L^{21} & \boldsymbol{\Sigma}_L^{22} & \boldsymbol{\Sigma}_L^{23} \\ \boldsymbol{\Sigma}_L^{31} & \boldsymbol{\Sigma}_L^{32} & \boldsymbol{\Sigma}_L^{33} \end{bmatrix} = \mathbf{G}_0^{-1} \otimes \begin{bmatrix} \mathbf{A}^{nn} & \mathbf{A}^{ng} & \mathbf{0} \\ \mathbf{A}^{gn} & \frac{1}{w}\mathbf{A}^{gg} + \left(1 - \frac{1}{w}\right)\mathbf{Q} & -\frac{1}{w}\mathbf{A}_{gg}^{-1}\mathbf{Z} \\ \mathbf{0} & -\frac{1}{w}\mathbf{Z}'\mathbf{A}_{gg}^{-1} & \frac{1}{w}\mathbf{Z}'\mathbf{A}_{gg}^{-1}\mathbf{Z} + \frac{m}{1-w}\mathbf{I} \end{bmatrix}$$

with $\mathbf{A}_{gg}^{-1} = \mathbf{A}^{gg} - \mathbf{Q}$, i.e., the inverse of the pedigree relationship matrix among genotyped animals.

It is worth noting that absorbing the equations associated with $\hat{\mathbf{g}}$ in ssSNPBLUP_Liu results in the mixed model equations of single-step genomic BLUP (ssGBLUP) for which the inverse of the genomic relationship matrix is computed using the Woodbury formula [12]. Several studies (e.g., [13–15]) investigated the possibility of using specific knowledge of a priori variances to weight some SNPs differently in ssGBLUP. Such approaches are difficult to extend to multivariate ssGBLUP, while they can be easily applied in ssSNPBLUP by replacing the matrix $\mathbf{G}_0^{-1} \otimes \frac{m}{1-w}\mathbf{I}$ by a symmetric positive definite matrix $\mathbf{B}$ that contains SNP-specific (co)variances obtained by, e.g., Bayesian regression [5].

In the following, the matrix $\mathbf{C}$ will refer to either $\mathbf{C}_{MS}$ or $\mathbf{C}_L$ (and similarly for the vectors $\mathbf{x}$ and $\mathbf{b}$). In addition, the matrices $\mathbf{C}_{MS}$ and $\mathbf{C}_L$ have the same structure, and both can be partitioned between the equations associated with SNP effects ($S$) and the equations associated with the other effects ($O$), as follows:

$$\mathbf{C} = \begin{bmatrix} \mathbf{C}_{OO} & \mathbf{C}_{OS} \\ \mathbf{C}_{SO} & \mathbf{C}_{SS} \end{bmatrix}.$$

From this partition, it follows that $\mathbf{C}_{MS,OO} = \mathbf{C}_{L,OO}$ and that $\mathbf{C}_{SO}$, $\mathbf{C}_{OS}$ and $\mathbf{C}_{SS}$ are dense matrices.

The PCG method

The PCG method is an iterative method that uses successive approximations to obtain more accurate solutions for a linear system at each iteration step [16]. The preconditioned systems of the linear equations of ssSNPBLUP_MS and of ssSNPBLUP_Liu have the form:

$$\mathbf{M}^{-1}\mathbf{C}\mathbf{x} = \mathbf{M}^{-1}\mathbf{b}, \qquad (1)$$

where M is a (block-)diagonal preconditioner.

In this study, the (block-)diagonal preconditioner M is defined as:

$$\mathbf{M} = \begin{bmatrix} \mathbf{M}_{ff} & \mathbf{0} \\ \mathbf{0} & \mathbf{M}_{rr} \end{bmatrix} = \begin{bmatrix} \mathrm{diag}\left(\mathbf{C}_{ff}\right) & \mathbf{0} \\ \mathbf{0} & \mathrm{block\_diag}\left(\mathbf{C}_{rr}\right) \end{bmatrix}$$

where the subscripts $f$ and $r$ refer to the equations associated with fixed and random effects, respectively, and $\mathrm{block\_diag}\left(\mathbf{C}_{rr}\right)$ is a block-diagonal matrix with blocks corresponding to the equations for the different traits within a level (e.g., an animal).

After $k$ iterations of the PCG method applied to Eq. (1), the error is bounded by [16, 17]:

$$\left\|\mathbf{x} - \hat{\mathbf{x}}_k\right\|_{\mathbf{C}} \leq 2\left\|\mathbf{x} - \hat{\mathbf{x}}_0\right\|_{\mathbf{C}} \left(\frac{\sqrt{\kappa\left(\mathbf{C}_M\right)} - 1}{\sqrt{\kappa\left(\mathbf{C}_M\right)} + 1}\right)^k$$

where $\mathbf{C}_M = \mathbf{M}^{-1}\mathbf{C}$, $\left\|\mathbf{x}\right\|_{\mathbf{C}}$ is the $\mathbf{C}$-norm of $\mathbf{x}$, defined as $\left\|\mathbf{x}\right\|_{\mathbf{C}} = \sqrt{\mathbf{x}'\mathbf{C}\mathbf{x}}$, and $\kappa\left(\mathbf{C}_M\right)$ is the effective spectral condition number of $\mathbf{C}_M$, defined as $\frac{\lambda_{max}\left(\mathbf{C}_M\right)}{\lambda_{min}\left(\mathbf{C}_M\right)}$, with $\lambda_{max}\left(\mathbf{C}_M\right)$ ($\lambda_{min}\left(\mathbf{C}_M\right)$) being the largest (smallest) non-zero eigenvalue of $\mathbf{C}_M$.
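For reference, the following is a minimal sketch of a diagonally (Jacobi) preconditioned conjugate gradient loop with the relative residual norm as stopping criterion (Python/NumPy with dense matrices, for illustration only; production single-step software applies $\mathbf{C}$ matrix-free, and all names are hypothetical):

```python
import numpy as np

def pcg(C, b, M_inv_diag, tol=1e-6, max_iter=10_000):
    """Solve C x = b with a diagonally preconditioned conjugate gradient.

    C          : symmetric positive (semi-)definite coefficient matrix
    b          : right-hand side
    M_inv_diag : vector of inverse diagonal entries of the preconditioner M
    """
    x = np.zeros_like(b)
    r = b - C @ x                       # initial residual
    z = M_inv_diag * r                  # preconditioned residual
    p = z.copy()
    rz_old = r @ z
    b_norm = np.linalg.norm(b)
    for it in range(max_iter):
        Cp = C @ p                      # one product of C with a vector per iteration
        alpha = rz_old / (p @ Cp)
        x = x + alpha * p
        r = r - alpha * Cp
        if np.linalg.norm(r) / b_norm < tol:   # relative residual norm criterion
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x, it
```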

The deflated PCG method

Vandenplas et al. [4] showed that the largest eigenvalues of the ssSNPBLUP_MS preconditioned coefficient matrix $\mathbf{C}_M$ were larger than those of the ssGBLUP preconditioned coefficient matrix, while the smallest eigenvalues were similar. This resulted in larger effective condition numbers $\kappa\left(\mathbf{C}_M\right)$ and convergence issues for ssSNPBLUP_MS. As applied by Vandenplas et al. [4], the DPCG method annihilates the largest unfavourable eigenvalues of the ssSNPBLUP_MS preconditioned coefficient matrix $\mathbf{C}_M$, which resulted in effective condition numbers and convergence patterns of ssSNPBLUP_MS similar to those of ssGBLUP solved with the PCG method. The preconditioned deflated linear systems of the ssSNPBLUP_MS and ssSNPBLUP_Liu mixed model equations have the form:

$$\mathbf{M}^{-1}\mathbf{P}\mathbf{C}\mathbf{x} = \mathbf{M}^{-1}\mathbf{P}\mathbf{b},$$

where $\mathbf{P}$ is a second-level preconditioner, called the deflation matrix, equal to $\mathbf{P} = \mathbf{I} - \mathbf{C}\mathbf{Z}_d\mathbf{E}^{-1}\mathbf{Z}_d'$, with the matrix $\mathbf{Z}_d$ being the deflation-subspace matrix as defined in Vandenplas et al. [4] and $\mathbf{E} = \mathbf{Z}_d'\mathbf{C}\mathbf{Z}_d$ being the Galerkin matrix.
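As an illustration, a minimal sketch (Python/NumPy, dense matrices, hypothetical names; $\mathbf{Z}_d$ here is a small deflation-subspace matrix, not the genotype matrix $\mathbf{Z}$) of how the deflation operator $\mathbf{P}$ could be applied to a vector:

```python
import numpy as np

def make_deflation_operator(C, Zd):
    """Return a function applying P = I - C Zd E^{-1} Zd' to a vector,
    where E = Zd' C Zd is the (small, dense) Galerkin matrix."""
    CZd = C @ Zd                  # n x d, computed once here for simplicity
    E = Zd.T @ CZd                # d x d Galerkin matrix
    # In practice E would be factorized once and the factor reused at every iteration.
    def apply_P(v):
        return v - CZd @ np.linalg.solve(E, Zd.T @ v)
    return apply_P
```

Note that this toy version precomputes and stores $\mathbf{C}\mathbf{Z}_d$; the matrix-free implementation described in [4] instead performs a second multiplication of $\mathbf{C}$ by a vector at each iteration, which is the additional cost mentioned above.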

A second-level diagonal preconditioner

The DPCG method requires the computation and the storage of the Galerkin matrix $\mathbf{E}$, which is computationally expensive for very large evaluations [4]. Furthermore, as implemented in Vandenplas et al. [4], each iteration of the DPCG method requires two multiplications of the coefficient matrix $\mathbf{C}$ by a vector, instead of one multiplication for the PCG method. Here, our aim is to develop another second-level preconditioner that decreases the largest eigenvalues of the preconditioned coefficient matrix $\mathbf{C}_M$ at a lower cost than the DPCG method, and results in smaller effective condition numbers and better convergence patterns.

To achieve this aim, we introduce a second-level diagonal preconditioner defined as:

$$\mathbf{D} = \begin{bmatrix} k_O\mathbf{I}_{OO} & \mathbf{0} \\ \mathbf{0} & k_S\mathbf{I}_{SS} \end{bmatrix} = k_O\begin{bmatrix} \mathbf{I}_{OO} & \mathbf{0} \\ \mathbf{0} & \frac{k_S}{k_O}\mathbf{I}_{SS} \end{bmatrix} = k_O\tilde{\mathbf{D}}$$

where $\mathbf{I}_{OO}$ is an identity matrix of size equal to the number of equations that are not associated with SNP effects, $\mathbf{I}_{SS}$ is an identity matrix of size equal to the number of equations that are associated with SNP effects, $k_O$ and $k_S$ are real positive numbers, and $\tilde{\mathbf{D}} = \begin{bmatrix} \mathbf{I}_{OO} & \mathbf{0} \\ \mathbf{0} & \frac{k_S}{k_O}\mathbf{I}_{SS} \end{bmatrix}$. Possible values for $k_O$ and $k_S$ are discussed below.

Therefore, the preconditioned system of Eq. (1) is modified as follows:

$$\mathbf{D}^{-1}\mathbf{M}^{-1}\mathbf{C}\mathbf{x} = \mathbf{D}^{-1}\mathbf{M}^{-1}\mathbf{b}. \qquad (2)$$

Hereafter, we show that the proposed second-level preconditioner $\mathbf{D}$ applied to the ssSNPBLUP systems of equations results in smaller effective condition numbers by decreasing the largest eigenvalues of the preconditioned coefficient matrices. For simplicity, the symmetric preconditioned coefficient matrix $\mathbf{D}^{-1/2}\mathbf{M}^{-1/2}\mathbf{C}\mathbf{M}^{-1/2}\mathbf{D}^{-1/2} = \mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$, with $\tilde{\mathbf{C}} = \mathbf{M}^{-1/2}\mathbf{C}\mathbf{M}^{-1/2}$, is used instead of $\mathbf{D}^{-1}\mathbf{M}^{-1}\mathbf{C}$. Indeed, these two matrices have the same spectrum, i.e., the same set of eigenvalues. In addition, the effective condition number of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$, $\kappa\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$, is equal to the effective condition number of $\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}$, $\kappa\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)$, because:

$$\kappa\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right) = \frac{\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)}{\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)} = \frac{\lambda_{max}\left(k_O^{-1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)}{\lambda_{min}\left(k_O^{-1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)} = \frac{\lambda_{max}\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)}{\lambda_{min}\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)} = \kappa\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)$$

with $\lambda_{min}\left(k_O^{-1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right) = k_O^{-1}\lambda_{min}\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)$ and $\lambda_{max}\left(k_O^{-1}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right) = k_O^{-1}\lambda_{max}\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right)$.

The result is that $\kappa\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ depends only on $\tilde{\mathbf{D}}$, and therefore only on the $k_O/k_S$ ratio.
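A toy numerical check of this invariance is sketched below (Python/NumPy; a small random symmetric positive definite matrix stands in for $\tilde{\mathbf{C}}$, and all names are hypothetical): two choices of $(k_O, k_S)$ with the same ratio give the same condition number.

```python
import numpy as np

rng = np.random.default_rng(1)
n_other, n_snp = 5, 3
n = n_other + n_snp
A = rng.standard_normal((n, n))
C_tilde = A @ A.T + n * np.eye(n)          # toy symmetric positive definite "C~"

def condition_number(C_tilde, k_O, k_S):
    d = np.concatenate([np.full(n_other, k_O), np.full(n_snp, k_S)])
    S = C_tilde / np.sqrt(np.outer(d, d))  # D^{-1/2} C~ D^{-1/2}
    eigvals = np.linalg.eigvalsh(S)
    return eigvals.max() / eigvals.min()

# Same k_O/k_S ratio (10^-2), different absolute values: identical condition numbers.
print(condition_number(C_tilde, k_O=1.0, k_S=100.0))
print(condition_number(C_tilde, k_O=0.01, k_S=1.0))
```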

Regarding the largest eigenvalues of the preconditioned coefficient matrix $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$, or equivalently of $\mathbf{D}^{-1}\mathbf{M}^{-1}\mathbf{C}$, the effect of the second-level preconditioner $\mathbf{D}$ on $\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ can be analysed using the Gershgorin circle theorem [18]. From this theorem, it follows that the largest eigenvalue of the preconditioned coefficient matrix $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ is bounded, with $i$ and $j$ indexing the equations, by:

$$\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right) \leq \max_i\left\{D_{ii}^{-1/2}\tilde{C}_{ii}D_{ii}^{-1/2} + \sum_{j\neq i}\left|D_{ii}^{-1/2}\tilde{C}_{ij}D_{jj}^{-1/2}\right|\right\}. \qquad (3)$$

Partitioned between the equations associated with SNP effects ($S$) and with the other effects ($O$), it follows from Eq. (3) that $\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ has the following lower and upper bounds (see Additional file 1 for the derivation):

$$k_O^{-1}\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right) \leq \lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right) \leq k_O^{-1}\max\left(a, b\right), \qquad (4)$$

with

$$a = \max_k\left\{\left(\tilde{\mathbf{C}}_{OO}\right)_{kk} + \sum_{j\neq k}\left|\left(\tilde{\mathbf{C}}_{OO}\right)_{kj}\right| + \sqrt{\frac{k_O}{k_S}}\sum_j\left|\left(\tilde{\mathbf{C}}_{OS}\right)_{kj}\right|\right\},$$

$$b = \max_l\left\{\frac{k_O}{k_S}\left(\tilde{\mathbf{C}}_{SS}\right)_{ll} + \frac{k_O}{k_S}\sum_{j\neq l}\left|\left(\tilde{\mathbf{C}}_{SS}\right)_{lj}\right| + \sqrt{\frac{k_O}{k_S}}\sum_j\left|\left(\tilde{\mathbf{C}}_{SO}\right)_{lj}\right|\right\},$$

and $k$ and $l$ referring to the equations not associated with and associated with SNP effects, respectively.

Therefore, for a fixed value of $k_O$, the upper bound of $\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ will decrease with decreasing $k_O/k_S$ ratios, down to the lowest upper bound $k_O^{-1}\max_k\left\{\left(\tilde{\mathbf{C}}_{OO}\right)_{kk} + \sum_{j\neq k}\left|\left(\tilde{\mathbf{C}}_{OO}\right)_{kj}\right|\right\}$, which is the upper bound of $k_O^{-1}\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right)$ given by the Gershgorin circle theorem.
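The following small numerical sketch (Python/NumPy, with a toy symmetric matrix and hypothetical names) evaluates the Gershgorin upper bound in Eq. (3) for decreasing $k_O/k_S$ ratios:

```python
import numpy as np

def gershgorin_lambda_max_bound(C_tilde, is_snp, k_O=1.0, k_S=1.0):
    """Gershgorin upper bound on the largest eigenvalue of
    D^{-1/2} C_tilde D^{-1/2}, where D = diag(k_O I_OO, k_S I_SS).

    C_tilde : symmetric preconditioned coefficient matrix (dense, toy size)
    is_snp  : boolean vector flagging the equations associated with SNP effects
    """
    d = np.where(is_snp, k_S, k_O)                 # diagonal of D
    S = C_tilde / np.sqrt(np.outer(d, d))          # D^{-1/2} C_tilde D^{-1/2}
    radii = np.sum(np.abs(S), axis=1) - np.abs(np.diag(S))
    return np.max(np.diag(S) + radii)

# Toy example: 4 'other' equations and 2 'SNP' equations.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
C_tilde = A @ A.T + 6 * np.eye(6)                  # symmetric positive definite
is_snp = np.array([False] * 4 + [True] * 2)

for ratio in [1.0, 1e-1, 1e-2]:
    bound = gershgorin_lambda_max_bound(C_tilde, is_snp, k_O=1.0, k_S=1.0 / ratio)
    print(f"k_O/k_S = {ratio:g}: Gershgorin bound = {bound:.2f}")
```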

Nevertheless, decreasing the largest eigenvalue does not necessarily mean decreasing the effective condition number $\kappa\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$, because $\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ could decrease at the same rate as, or faster than, $\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$, leading to a constant or larger $\kappa\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$. As such, it is required that $\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ decreases at a lower rate, remains constant, or even increases, when $\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ decreases with decreasing $k_O/k_S$ ratios. This would be achieved if $\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ were independent of $k_S$. Hereafter, we formulate a sufficient condition to ensure that $\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right) = k_O^{-1}\lambda_{min}\left(\tilde{\mathbf{C}}\right)$ for any $k_O/k_S$ ratio.

Let the matrix $\tilde{\mathbf{V}}$ contain (columnwise) all the eigenvectors of $\tilde{\mathbf{C}}$, sorted according to the ascending order of their associated eigenvalues. The set of eigenvalues of $\tilde{\mathbf{C}}$, sorted in ascending order, is hereafter called the spectrum of $\tilde{\mathbf{C}}$. The matrix $\tilde{\mathbf{V}}$ can be partitioned into a matrix $\tilde{\mathbf{V}}_1$ storing the eigenvectors associated with eigenvalues at the left-hand side of the spectrum of $\tilde{\mathbf{C}}$ (which includes the smallest eigenvalues) and a matrix $\tilde{\mathbf{V}}_2$ storing the eigenvectors at the right-hand side of the spectrum of $\tilde{\mathbf{C}}$ (which includes the largest eigenvalues), and between equations associated or not with SNP effects, as follows:

$$\tilde{\mathbf{V}} = \begin{bmatrix} \tilde{\mathbf{V}}_1 & \tilde{\mathbf{V}}_2 \end{bmatrix} = \begin{bmatrix} \tilde{\mathbf{V}}_{O1} & \tilde{\mathbf{V}}_{O2} \\ \tilde{\mathbf{V}}_{S1} & \tilde{\mathbf{V}}_{S2} \end{bmatrix}.$$

A sufficient condition to ensure that $\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right) = k_O^{-1}\lambda_{min}\left(\tilde{\mathbf{C}}\right)$ is that $\tilde{\mathbf{V}}_{S1} = \mathbf{0}$, $\tilde{\mathbf{V}}_{O2} = \mathbf{0}$, and that all eigenvalues associated with an eigenvector of $\tilde{\mathbf{V}}_2$ are equal to, or larger than, $\frac{k_S}{k_O}\lambda_{min}\left(\tilde{\mathbf{C}}\right)$ (see Additional file 2 for the proof). Therefore, the effective condition number $\kappa\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ will decrease with decreasing $k_O/k_S$ ratios until the largest eigenvalue $\lambda_{max}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right)$ reaches its lower bound $k_O^{-1}\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right)$, as long as the sufficient condition is satisfied. In practice, the pattern of the matrix $\tilde{\mathbf{V}}$ will never be exactly as required by the sufficient condition, because the submatrices $\tilde{\mathbf{C}}_{OS}$ and $\tilde{\mathbf{C}}_{SO}$ contain non-zero entries. However, this sufficient condition is helpful to formulate the expectation that convergence of the models will improve with decreasing $k_O/k_S$ ratios, up to a point that can be identified either from the analyses or by computing the eigenvalues of $\tilde{\mathbf{C}}$.
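The complete proof is given in Additional file 2; the following is only a brief sketch of the argument under the stated block structure. If $\tilde{\mathbf{v}} = \begin{bmatrix} \tilde{\mathbf{v}}_O' & \mathbf{0}' \end{bmatrix}'$ is a column of $\tilde{\mathbf{V}}_1$ with eigenvalue $\lambda$ (which is possible because $\tilde{\mathbf{V}}_{S1} = \mathbf{0}$), then $\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{v}} = \tilde{\mathbf{v}}$, so that

$$\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{v}} = \tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{v}} = \lambda\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{v}} = \lambda\tilde{\mathbf{v}},$$

and these eigenvalues, including $\lambda_{min}\left(\tilde{\mathbf{C}}\right)$, are left unchanged. If instead $\tilde{\mathbf{v}} = \begin{bmatrix} \mathbf{0}' & \tilde{\mathbf{v}}_S' \end{bmatrix}'$ is a column of $\tilde{\mathbf{V}}_2$ with eigenvalue $\lambda$ (because $\tilde{\mathbf{V}}_{O2} = \mathbf{0}$), then $\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{v}} = \sqrt{k_O/k_S}\,\tilde{\mathbf{v}}$, so that

$$\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{v}} = \frac{k_O}{k_S}\lambda\tilde{\mathbf{v}},$$

which remains larger than or equal to $\lambda_{min}\left(\tilde{\mathbf{C}}\right)$ whenever $\lambda \geq \frac{k_S}{k_O}\lambda_{min}\left(\tilde{\mathbf{C}}\right)$. Hence $\lambda_{min}\left(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{C}}\tilde{\mathbf{D}}^{-1/2}\right) = \lambda_{min}\left(\tilde{\mathbf{C}}\right)$ and, with $\mathbf{D} = k_O\tilde{\mathbf{D}}$, $\lambda_{min}\left(\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}\right) = k_O^{-1}\lambda_{min}\left(\tilde{\mathbf{C}}\right)$.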

Analyses

Eigenvalues and eigenvectors of the ssSNPBLUP_MS preconditioned coefficient matrices $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$, with values of $k_S$ from 1 to $10^5$ (and $k_O = 1$), were computed for the reduced dataset using the subroutine dsyev provided by the Intel(R) Math Kernel Library (MKL) 11.3.2.

Using the matrix-free version of the software developed in Vandenplas et al. [4], the systems of ssSNPBLUP_MS and ssSNPBLUP_Liu equations for the reduced and field datasets were solved with the PCG and DPCG methods together with the second-level preconditioner $\mathbf{D}$ for different values of $k_S$ (with $k_O = 1$). The second-level preconditioner $\mathbf{D}$ was implemented by combining it with the preconditioner $\mathbf{M}$, as $\tilde{\mathbf{M}} = \mathbf{D}\mathbf{M}$. Accordingly, its implementation has no additional cost for an iteration of the PCG and DPCG methods. The DPCG method was applied with 5 SNP effects per subdomain [4]. To illustrate the effect of $k_O$, the system of ssSNPBLUP_MS equations was also solved for the reduced dataset with the PCG method and different values of $k_O$ (with $k_S = 1$). For both the PCG and DPCG methods, the iterative process stopped when the relative residual norm was smaller than $10^{-6}$. For all systems, the smallest and largest eigenvalues that influence the convergence of the iterative methods were estimated using the Lanczos method, based on information obtained from the (D)PCG method [16, 19, 20]. Effective condition numbers were computed from these estimates [17].

All real vectors and matrices were stored using double precision real numbers, except for the preconditioner, which was stored using single precision real numbers. All computations were performed on a computer with 528 GB of memory, running RedHat 7.4 (x86_64), with an Intel Xeon E5-2667 (3.20 GHz) processor with 16 cores. The number of OpenMP threads was limited to 5 for both datasets. Time requirements are reported for the field dataset. All reported times are indicative, because they may have been influenced by other jobs running simultaneously on the computer.

Results

Reduced dataset

The spectra of the ssSNPBLUP_MS preconditioned coefficient matrices $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ solved with the PCG method and with $k_S$ values from 1 to $10^5$ (and $k_O = 1$) are depicted in Fig. 1. It can be observed that the largest eigenvalues decreased with decreasing $k_O/k_S$ ratios, down to $k_O/k_S = 10^{-2}$ (Fig. 1; Table 1). On the other side of the spectrum, a set of approximately 10,000 small eigenvalues that decrease with decreasing $k_O/k_S$ ratios can be observed.

Fig. 1. Eigenvalues of different preconditioned coefficient matrices $\tilde{\mathbf{C}}$ for the reduced dataset. Eigenvalues of the preconditioned coefficient matrices of ssSNPBLUP_MS are depicted on a logarithmic scale. All eigenvalues less than $10^{-10}$ were set to $10^{-10}$. Eigenvalues are sorted in ascending order.

Table 1. Characteristics of preconditioned (deflated) coefficient matrices, and of PCG and DPCG methods for solving ssSNPBLUP applied to the reduced dataset

Model^a  Method^b  k_O^c  k_S^c  k_O/k_S  λ_min^d  λ_max^d  κ^e  N^f
MS  PCG  1  1  1  1.07×10^-4  1.81×10^2  1.70×10^6  1499
MS  PCG  1  2  0.5  1.07×10^-4  9.11×10^1  8.55×10^5  1103
MS  PCG  1  3.3  0.3  1.07×10^-4  5.51×10^1  5.17×10^5  862
MS  PCG  1  10^1  10^-1  1.07×10^-4  1.91×10^1  1.79×10^5  560
MS  PCG  1  10^2  10^-2  1.07×10^-4  1.19×10^1  1.12×10^5  417
MS  PCG  1  10^3  10^-3  1.06×10^-4  1.19×10^1  1.12×10^5  608
MS  PCG  1  10^4  10^-4  4.86×10^-5  1.19×10^1  2.45×10^5  1254
MS  PCG  1  10^5  10^-5  4.87×10^-6  1.19×10^1  2.45×10^6  2350
MS  PCG  10^-1  1  10^-1  1.07×10^-3  1.91×10^2  1.79×10^5  557
MS  PCG  10^-2  1  10^-2  1.07×10^-2  1.19×10^3  1.12×10^5  416
MS  PCG  10^-3  1  10^-3  1.06×10^-1  1.19×10^4  1.12×10^5  606
MS  PCG  10^-4  1  10^-4  4.86×10^-1  1.19×10^5  2.45×10^5  1254
MS  PCG  10^-5  1  10^-5  4.86×10^-1  1.19×10^6  2.45×10^6  2367
MS  DPCG (1)  1  1  1  1.09×10^-4  6.44  5.93×10^4  294
MS  DPCG (1)  1  10^5  10^-5  1.09×10^-4  6.44  5.92×10^4  293
MS  DPCG (5)  1  1  1  1.07×10^-4  6.44  6.03×10^4  342
MS  DPCG (5)  1  10^1  10^-1  1.07×10^-4  6.44  6.03×10^4  331
MS  DPCG (5)  1  10^2  10^-2  1.07×10^-4  6.44  6.04×10^4  385
MS  DPCG (5)  1  10^3  10^-3  1.06×10^-4  6.44  6.05×10^4  544
MS  DPCG (5)  1  10^4  10^-4  4.96×10^-5  6.44  1.30×10^5  961
MS  DPCG (5)  1  10^5  10^-5  4.95×10^-6  6.44  1.30×10^6  1456
Liu  PCG  1  1  1  1.06×10^-4  6.98×10^1  6.56×10^5  1401
Liu  PCG  1  10^1  10^-1  1.06×10^-4  1.19×10^1  1.12×10^5  561
Liu  PCG  1  10^2  10^-2  1.06×10^-4  1.19×10^1  1.12×10^5  563
Liu  PCG  1  10^3  10^-3  5.91×10^-5  1.19×10^1  2.02×10^5  1154
Liu  DPCG (5)  1  1  1  1.07×10^-4  6.44  6.05×10^4  419
Liu  DPCG (5)  1  10^1  10^-1  1.07×10^-4  6.44  6.05×10^4  399
Liu  DPCG (5)  1  10^2  10^-2  1.06×10^-4  6.44  6.05×10^4  520
Liu  DPCG (5)  1  10^3  10^-3  6.02×10^-5  6.44  1.07×10^5  1046

^a MS = ssSNPBLUP model proposed by Mäntysaari and Strandén [7]; Liu = ssSNPBLUP model proposed by Liu et al. [5]

^b Number of SNP effects per subdomain is within brackets

^c Parameters used for the second-level preconditioner D

^d Smallest and largest eigenvalues of the preconditioned (deflated) coefficient matrix

^e Condition number of the preconditioned (deflated) coefficient matrix

^f Number of iterations. A number of iterations equal to 10,000 means that the method failed to converge within 10,000 iterations

Figures 2, 3 and 4 depict all the eigenvectors of the ssSNPBLUP_MS preconditioned coefficient matrices $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ with different values of $k_S$ (and $k_O = 1$). Non-zero eigenvector entries indicate an association between the eigenvalue (associated with this eigenvector) and the corresponding equations, while (almost) zero entries indicate no such (or a very weak) association. When $k_O/k_S = 1$, it can be observed that the smallest eigenvalues of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ are mainly associated with the equations that are not associated with SNP effects. On the other side, with $k_O/k_S = 1$, the largest eigenvalues of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ are mainly associated with the equations that are associated with SNP effects (Figs. 2 and 3). Decreasing $k_O/k_S$ ratios modified the associations of the extremal eigenvalues (i.e., the smallest and largest eigenvalues) with the equations. Indeed, decreasing $k_O/k_S$ ratios resulted in the smallest eigenvalues of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ being mainly associated with the equations that are associated with SNP effects, and in the largest eigenvalues of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ being mainly associated with the equations that are not associated with SNP effects.

Fig. 2. Eigenvectors of preconditioned coefficient matrices with different ratios $k_O/k_S$ for the reduced dataset. Reported values are aggregate absolute values of sets of 15 eigenvectors sorted following the ascending order of associated eigenvalues, and of 15 entries per eigenvector. Equations associated with SNP effects are from the 41,950th equation until the 51,944th equation.

Fig. 3. Eigenvectors associated with the 750 smallest and largest eigenvalues of the preconditioned coefficient matrix with the ratio $k_O/k_S = 10^0$ for the reduced dataset. Reported values are aggregate absolute values of sets of 15 eigenvectors sorted following the ascending order of associated eigenvalues, and of 15 entries per eigenvector. Darker colors correspond to higher values. Equations associated with SNP effects are from the 41,950th equation until the 51,944th equation.

Fig. 4. Eigenvectors associated with the 750 smallest and largest eigenvalues of the preconditioned coefficient matrix with the ratio $k_O/k_S = 10^{-2}$ for the reduced dataset. Reported values are aggregate absolute values of sets of 15 eigenvectors sorted following the ascending order of associated eigenvalues, and of 15 entries per eigenvector. Darker colors correspond to higher values. Equations associated with SNP effects are from the 41,950th equation until the 51,944th equation.

The extremal eigenvalues of the ssSNPBLUP_MS and ssSNPBLUP_Liu preconditioned (deflated) coefficient matrices, with various values for $k_O$ and $k_S$, are in Table 1. For both ssSNPBLUP_MS and ssSNPBLUP_Liu solved with the PCG method, the largest eigenvalues of the preconditioned coefficient matrix decreased with decreasing $k_O/k_S$ ratios to a lowest value of 11.9, which was reached at $k_O/k_S = 10^{-2}$. In addition, for both models, the smallest eigenvalues remained constant with decreasing $k_O/k_S$ ratios, until $k_O/k_S = 10^{-3}$ for ssSNPBLUP_MS and $k_O/k_S = 10^{-2}$ for ssSNPBLUP_Liu. As a result, the effective condition numbers and the numbers of iterations to reach convergence were smallest for $k_O/k_S = 10^{-2}$ for ssSNPBLUP_MS and for $k_O/k_S = 10^{-1}$ for ssSNPBLUP_Liu (Table 1; Figs. 5 and 6). In comparison to the PCG method without the second-level preconditioner (i.e., with $k_O = k_S = 1$), the number of iterations to reach convergence decreased by a factor of more than 3.5 for ssSNPBLUP_MS and by a factor of more than 2.4 for ssSNPBLUP_Liu. The minimum number of iterations to reach convergence with the PCG method was 417 for ssSNPBLUP_MS and 561 for ssSNPBLUP_Liu (Table 1; Figs. 5 and 6).

Fig. 5. Termination criteria for the reduced dataset for ssSNPBLUP_MS using the PCG and DPCG methods.

Fig. 6. Termination criteria for the reduced dataset for ssSNPBLUP_Liu using the PCG and DPCG methods.

For the same $k_O/k_S$ ratio, the extremal eigenvalues (i.e., the smallest and largest eigenvalues) of the different preconditioned coefficient matrices were proportional to $k_O^{-1}$ (Table 1). Therefore, for the same $k_O/k_S$ ratio, the effective condition numbers of the different preconditioned coefficient matrices, and the associated numbers of iterations to reach convergence, were the same (Table 1). It is also worth noting that, for a fixed value of $k_O$, the largest eigenvalues decreased almost proportionally to $k_S^{-1}$ with decreasing $k_O/k_S$ ratios, until they reached their lower bound (Table 1).

For both ssSNPBLUP_MS and ssSNPBLUP_Liu solved with the DPCG method and 5 SNP effects per subdomain, the largest eigenvalues of the preconditioned deflated coefficient matrices remained constant (around 6.44) for all $k_O/k_S$ ratios (Table 1). However, for both models, the smallest eigenvalues started to decrease for $k_O/k_S$ ratios smaller than $10^{-3}$ ($10^{-2}$) for ssSNPBLUP_MS (ssSNPBLUP_Liu). These unfavourable decreases of the smallest eigenvalues with decreasing $k_O/k_S$ ratios increased the effective condition numbers and the numbers of iterations to reach convergence when the second-level preconditioner $\mathbf{D}$ was applied with the DPCG method (Table 1; Figs. 5 and 6).

Field dataset

For the field dataset, regarding the extremal eigenvalues, the application of the second-level preconditioner $\mathbf{D}$ together with the PCG method led to a decrease of the largest eigenvalues of the preconditioned coefficient matrix from $1.8\times10^{3}$ for ssSNPBLUP_MS, and from $1.4\times10^{2}$ for ssSNPBLUP_Liu, to about 5. Ratios $k_O/k_S$ smaller than $10^{-3}$ for ssSNPBLUP_MS and smaller than $10^{-2}$ for ssSNPBLUP_Liu did not further change the largest eigenvalues (Table 2). For the DPCG method applied to ssSNPBLUP_MS, the largest eigenvalues of the preconditioned deflated coefficient matrices remained constant for all $k_O/k_S$ ratios (Table 2). For the DPCG method applied to ssSNPBLUP_Liu, the largest eigenvalues of the preconditioned deflated coefficient matrices slightly decreased with $k_O/k_S = 10^{-1}$ and then remained constant for all smaller $k_O/k_S$ ratios (Table 2). The application of the second-level preconditioner $\mathbf{D}$ with both the PCG and DPCG methods led to the smallest eigenvalues of the preconditioned (deflated) coefficient matrices decreasing with decreasing $k_O/k_S$ ratios (Table 2).

Table 2. Characteristics of preconditioned (deflated) coefficient matrices, and of PCG and DPCG methods for solving ssSNPBLUP applied to the field dataset

Model^a  Method  k_O/k_S^b  λ_min^c  λ_max^c  κ^d  N^e  Iterative time^f  Time/iter.^g  Total time^h
MS  PCG  1  3.70×10^-5  1.75×10^3  4.74×10^7  10,000  44,808  4.5  46,081
MS  PCG  10^-1  1.18×10^-5  1.77×10^2  1.51×10^7  10,000  51,768  5.2  53,550
MS  PCG  10^-2  4.37×10^-6  1.95×10^1  4.45×10^6  6210  34,139  5.5  35,812
MS  PCG  10^-3  3.99×10^-6  5.08  1.27×10^6  3825  19,043  5.0  20,866
MS  PCG  10^-4  1.50×10^-6  5.07  3.37×10^6  7336  54,326  7.4  56,475
MS  DPCG  1  2.86×10^-5  4.77  1.67×10^5  748  6527  8.7  17,229
MS  DPCG  10^-1  1.41×10^-5  4.77  3.37×10^5  1211  11,864  9.8  22,947
MS  DPCG  10^-2  9.17×10^-6  4.77  5.20×10^5  1778  17,030  9.6  28,615
MS  DPCG  10^-3  7.50×10^-6  4.77  6.36×10^5  2569  23,676  9.2  35,497
Liu  PCG  1  7.38×10^-6  1.43×10^2  1.93×10^7  10,000  44,122  4.4  45,083
Liu  PCG  10^-1  3.66×10^-6  1.52×10^1  4.14×10^6  6049  31,085  5.1  32,018
Liu  PCG  10^-2  4.29×10^-6  5.07  1.18×10^6  2669  13,225  5.0  13,888
Liu  PCG  10^-3  3.51×10^-6  5.07  1.44×10^6  3606  20,578  5.7  21,458
Liu  PCG  10^-4  1.69×10^-6  5.07  3.00×10^6  7033  33,534  4.8  34,675
Liu  DPCG  1  5.40×10^-6  5.31  9.85×10^5  2877  22,791  7.9  26,521
Liu  DPCG  10^-1  6.91×10^-6  4.77  6.90×10^5  1628  14,231  8.7  18,049
Liu  DPCG  10^-2  5.23×10^-6  4.77  9.11×10^5  2234  23,244  10.4  28,057
Liu  DPCG  10^-3  4.31×10^-6  4.77  1.11×10^6  3106  34,950  11.3  39,603

^a MS = ssSNPBLUP model proposed by Mäntysaari and Strandén [7]; Liu = ssSNPBLUP model proposed by Liu et al. [5]

^b Parameters used for the second-level preconditioner

^c Smallest and largest eigenvalues of the preconditioned (deflated) coefficient matrix

^d Condition number of the preconditioned (deflated) coefficient matrix

^e Number of iterations. A number of iterations equal to 10,000 means that the method failed to converge within 10,000 iterations

^f Wall clock time (seconds) for the iterative process

^g Average wall clock time (seconds) per iteration

^h Wall clock time (seconds) for a complete process (including I/O operations)

These observed patterns of extremal eigenvalues resulted in an optimal ratio of $k_O/k_S = 10^{-3}$ for the PCG method applied to ssSNPBLUP_MS and an optimal ratio of $k_O/k_S = 10^{-2}$ for the PCG method applied to ssSNPBLUP_Liu, in terms of effective condition numbers and numbers of iterations to reach convergence (Table 2; Figs. 7 and 8). With these ratios, the PCG method converged within 3825 iterations for ssSNPBLUP_MS and within 2665 iterations for ssSNPBLUP_Liu, whereas the PCG method without the second-level preconditioner (i.e., with $k_O/k_S = 1$) did not converge within 10,000 iterations for either model (Table 2; Figs. 7 and 8). For the DPCG method, the application of the second-level preconditioner $\mathbf{D}$ generally deteriorated the effective condition numbers and the numbers of iterations to reach convergence, for both ssSNPBLUP_MS and ssSNPBLUP_Liu. The DPCG method converged within 748 iterations for ssSNPBLUP_MS with $k_O/k_S = 1$ and within 2877 iterations for ssSNPBLUP_Liu with $k_O/k_S = 10^{-1}$ (Table 2; Figs. 7 and 8).

Fig. 7. Termination criteria for the field dataset for ssSNPBLUP_MS using the PCG and DPCG methods.

Fig. 8. Termination criteria for the field dataset for ssSNPBLUP_Liu using the PCG and DPCG methods.

The total wall clock times for the iterative processes and for the complete processes (including I/O operations and the computation of the preconditioners and Galerkin matrices) with the PCG and DPCG methods are in Table 2. Across all combinations of systems of equations and solvers, the smallest wall clock time for the complete process was approximately 14,000 s, obtained for the PCG method with the second-level preconditioner $\mathbf{D}$ applied to ssSNPBLUP_Liu. Slightly greater wall clock times were needed for ssSNPBLUP_MS solved with the DPCG method (without the second-level preconditioner $\mathbf{D}$). It is worth noting that the wall clock times needed for the computation of the inverse of the Galerkin matrix ($\mathbf{E}^{-1}$) were approximately 9700 s for ssSNPBLUP_MS and approximately 2500 s for ssSNPBLUP_Liu.

Discussion

In this study, we introduced a second-level diagonal preconditioner D that results in smaller effective condition numbers of the preconditioned (deflated) coefficient matrices and in improved convergence patterns for two different ssSNPBLUP mixed model equations. From the theory and based on the results, the use of the second-level preconditioner D results in improved effective condition numbers of the preconditioned (deflated) coefficient matrices of ssSNPBLUP by decreasing the largest eigenvalues, while the smallest eigenvalues remain constant, or decrease at a lower rate than the largest eigenvalues. In this section, we will discuss the following three points: (1) the influence of the second-level diagonal preconditioner D on the eigenvalues and associated eigenvectors of the preconditioned (deflated) coefficient matrices of ssSNPBLUP; (2) the application of the second-level preconditioner in ssSNPBLUP evaluations; and (3) the possible application of the second-level preconditioner D to more complex ssSNPBLUP models and to models other than ssSNPBLUP.

Influence of D on the eigenvalues and associated eigenvectors

Applying the second-level preconditioner $\mathbf{D}$ with an optimal $k_O/k_S$ ratio to the linear systems of ssSNPBLUP results in a decrease of the largest eigenvalues of the preconditioned (deflated) coefficient matrices of ssSNPBLUP. As observed by Vandenplas et al. [4], and in comparison with ssGBLUP, the largest eigenvalues that influence the convergence of the PCG method applied to ssSNPBLUP_MS were associated with SNP effects. The second-level preconditioner $\mathbf{D}$ decreases these largest eigenvalues by multiplying all entries of the SNP equations of the preconditioned coefficient matrices by a value proportional to $k_O/k_S$, as shown with the Gershgorin circle theorem [18] [see Eq. (4)]. However, if the $k_O/k_S$ ratio is applied to a set of equations that are not associated with the largest eigenvalues of the preconditioned (deflated) coefficient matrices, the second-level preconditioner $\mathbf{D}$ will not decrease the largest eigenvalues. This behaviour was observed when the second-level preconditioner $\mathbf{D}$ was applied to ssSNPBLUP_MS with the DPCG method for the reduced dataset (Table 1). For these scenarios, the DPCG method had already annihilated all the largest unfavourable eigenvalues down to the lower bound of the largest eigenvalue that is allowed with the second-level preconditioner $\mathbf{D}$. Therefore, the second-level preconditioner $\mathbf{D}$ did not further decrease the largest eigenvalues. It is worth noting that, if the DPCG method did not annihilate all the unfavourable largest eigenvalues down to the lower bound defined by Eq. (4), the application of the second-level preconditioner $\mathbf{D}$ with the DPCG method did remove these remaining largest eigenvalues, as shown by the results for ssSNPBLUP_Liu applied to the field dataset (Table 2).

The decrease of the largest eigenvalues of the preconditioned coefficient matrices with decreasing $k_O/k_S$ ratios (until the lower bound is reached) can be explained by the sparsity pattern of the eigenvectors associated with the largest eigenvalues of the preconditioned coefficient matrices $\tilde{\mathbf{C}}$ of ssSNPBLUP. Indeed, Figs. 2 and 3 show that, for the eigenvectors associated with the largest eigenvalues of $\tilde{\mathbf{C}}$ of ssSNPBLUP_MS, the entries that correspond to the equations that are not associated with SNP effects are close to 0. Accordingly, if we assume that these entries are 0, i.e.,

$$\tilde{\mathbf{v}}_{max} = \begin{bmatrix} \tilde{\mathbf{v}}_{O,max} \\ \tilde{\mathbf{v}}_{S,max} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ \tilde{\mathbf{v}}_{S,max} \end{bmatrix}$$

being an eigenvector associated with one of the largest eigenvalues of $\tilde{\mathbf{C}}$, it follows that these largest eigenvalues of $\tilde{\mathbf{C}}$, multiplied by $k_S^{-1}$, are also eigenvalues of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$. These scaled eigenvalues will therefore remain the largest eigenvalues of $\mathbf{D}^{-1/2}\tilde{\mathbf{C}}\mathbf{D}^{-1/2}$ until the lower bound defined by Eq. (4) is reached (see Additional file 2 for the derivation). This observation can also motivate an educated guess for an optimal $k_O/k_S$ ratio for ssSNPBLUP with one additive genetic effect. If the largest eigenvalues $\lambda_{max}\left(\tilde{\mathbf{C}}\right)$ and $\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right)$ are (approximately) known, an educated guess for the $k_O/k_S$ ratio is $\frac{k_O}{k_S} = \frac{k_O\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right)}{\lambda_{max}\left(\tilde{\mathbf{C}}\right)}$. For example, in our cases, $\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right)$ was always equal to the largest eigenvalue of the preconditioned coefficient matrix of a pedigree BLUP (results not shown). It follows that the educated guess for the field dataset is equal to $3.0\times10^{-3}$ for ssSNPBLUP_MS and $3.5\times10^{-2}$ for ssSNPBLUP_Liu, since $\lambda_{max}\left(\tilde{\mathbf{C}}_{OO}\right) = 5.07$. Both values are of the same order as the corresponding optimal $k_O/k_S$ ratios. However, the second-level preconditioner $\mathbf{D}$ will be effective only if the smallest eigenvalues of the preconditioned coefficient matrices are not influenced, or at least are less influenced than the largest eigenvalues, by the second-level preconditioner $\mathbf{D}$.
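As a worked illustration of this educated guess, using the largest eigenvalues reported in Table 2 for $k_O/k_S = 1$ (and taking $k_O = 1$):

$$\frac{k_O}{k_S} \approx \frac{1 \times 5.07}{1.75\times10^{3}} \approx 3\times10^{-3} \;\text{(ssSNPBLUP\_MS)}, \qquad \frac{k_O}{k_S} \approx \frac{1 \times 5.07}{1.43\times10^{2}} \approx 3.5\times10^{-2} \;\text{(ssSNPBLUP\_Liu)}.$$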

The decrease of the smallest eigenvalues of the preconditioned (deflated) coefficient matrices depends mainly on the sparsity pattern of the eigenvectors associated with the smallest eigenvalues. We formulated a sufficient condition such that the smallest eigenvalues remain constant when the second-level preconditioner is applied. While this sufficient condition is not fulfilled for the reduced dataset (and probably also not for the field dataset), it can help to predict the behaviour of the smallest eigenvalues based on the sparsity pattern of the associated eigenvectors. For example, if the eigenvector associated with the smallest eigenvalue of $\tilde{\mathbf{C}}$ has mainly non-zero entries corresponding to the equations associated with SNP effects, the use of the second-level preconditioner $\mathbf{D}$ will most likely result in a decrease of the smallest eigenvalues proportional to $k_S^{-1}$, which is undesirable. Other behaviours of the smallest eigenvalues of the preconditioned (deflated) coefficient matrices can indicate that the associated eigenvectors have a different sparsity pattern, which helps to understand whether and how the use of the proposed second-level diagonal preconditioner will be beneficial.

Application of D in ssSNPBLUP evaluations

The second-level preconditioner $\mathbf{D}$ is easy to implement in existing software and does not influence the computational costs of a PCG iteration, since it can be merged with the preconditioner $\mathbf{M}$. Indeed, to implement the second-level preconditioner $\mathbf{D}$, it is sufficient to multiply the entries of $\mathbf{M}^{-1}$ that correspond to the equations associated with SNP effects by an optimal $k_O/k_S$ ratio. Furthermore, the value of an optimal $k_O/k_S$ ratio for a ssSNPBLUP evaluation can be determined by testing a range of values around the educated guess defined previously, and can then be re-used for several subsequent ssSNPBLUP evaluations, because the additional data in each new evaluation are only a fraction of the data previously used, and will therefore not modify, or only slightly modify, the properties of the preconditioned coefficient matrices $\tilde{\mathbf{C}}$.
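A minimal sketch of this merge is given below (Python/NumPy; all names are hypothetical, and a Jacobi (diagonal) preconditioner stored as a vector of inverse diagonal entries is assumed; a block-diagonal preconditioner would be scaled blockwise in the same way):

```python
import numpy as np

def merge_second_level_preconditioner(M_inv_diag, is_snp, k_O=1.0, k_S=100.0):
    """Merge the second-level diagonal preconditioner D into the inverse of the
    (block-)diagonal preconditioner M by scaling its entries: the entries that
    correspond to equations associated with SNP effects are multiplied by k_O/k_S.
    With k_O = 1 this equals (D M)^{-1} exactly; otherwise it differs only by a
    global scalar factor, which leaves the PCG iterates unchanged."""
    Mtilde_inv_diag = M_inv_diag.copy()
    Mtilde_inv_diag[is_snp] *= k_O / k_S
    return Mtilde_inv_diag

# Usage with the pcg() sketch shown earlier (is_snp flags the SNP-effect equations):
# x, n_iter = pcg(C, b, merge_second_level_preconditioner(M_inv_diag, is_snp, k_S=100.0))
```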

In this study, we used the second-level diagonal preconditioner $\mathbf{D}$ for two different ssSNPBLUP models. To our knowledge, this is the first time that ssSNPBLUP_Liu has been successfully run to convergence on real datasets [3, 5]. From our results, it seems that the preconditioned coefficient matrices of ssSNPBLUP_Liu are better conditioned than those of ssSNPBLUP_MS, leading to better convergence patterns for ssSNPBLUP_Liu. Therefore, among all possible combinations of linear systems (i.e., ssSNPBLUP_MS and ssSNPBLUP_Liu), solvers (i.e., the PCG and DPCG methods) and the application (or not) of the second-level preconditioner $\mathbf{D}$, ssSNPBLUP_Liu solved with the PCG method combined with the second-level preconditioner $\mathbf{D}$ seems to be the most efficient in terms of total wall clock time and ease of implementation. However, this was tested on only two datasets, and the most efficient combination of linear system and solver will most likely be situation-dependent.

Application of D to other scenarios

The proposed second-level preconditioner $\mathbf{D}$ can be applied, and may be beneficial, for ssSNPBLUP models that involve multiple additive genetic effects, or for other models that include an effect that results in an increase of the largest eigenvalues of the preconditioned coefficient matrices. The developed theory does not require a multivariate ssSNPBLUP with only one additive genetic effect. As such, if multiple additive genetic effects are fitted in the ssSNPBLUP model, such as direct and maternal genetic effects, the second-level preconditioner $\mathbf{D}$ could be used with different $k_O/k_S$ ratios applied separately to the direct and maternal SNP effects. A similar strategy was successfully applied for the ssSNPBLUP proposed by Fernando et al. [2] with French beef cattle datasets (Thierry Tribout, personal communication). Furthermore, the second-level preconditioner $\mathbf{D}$ could be used to improve the convergence pattern of models other than ssSNPBLUP. For example, with the field dataset, adding genetic groups fitted explicitly as random covariables in the model for pedigree BLUP (that is, without genomic information) increased the largest eigenvalue of the preconditioned coefficient matrix from 5.1 to 14.8. Introducing the second-level preconditioner $\mathbf{D}$ into the preconditioned linear system of pedigree BLUP, with a ratio of $k_O/k_S = 10^{-1}$ applied to the equations associated with the genetic groups, reduced the largest eigenvalue to 6.0, resulting in a decrease of the effective condition number by a factor of 2.6. This decrease of the effective condition number translated into a decrease in the number of iterations to reach convergence from 843 to 660.

Conclusions

The proposed second-level preconditioner $\mathbf{D}$ is easy to implement in existing software and can improve the convergence of the PCG and DPCG methods applied to different ssSNPBLUP models. Based on our results, the ssSNPBLUP system of equations proposed by Liu et al. [5], solved using the PCG method and the second-level preconditioner, seems to be the most efficient. However, the optimal combination of ssSNPBLUP model and solver will most likely be situation-dependent.

Additional files

12711_2019_472_MOESM1_ESM.pdf (103.1KB, pdf)

Additional file 1. Bounds of the largest eigenvalue of the preconditioned coefficient matrix of ssSNPBLUP. Derivation of the lower and upper bounds of the largest eigenvalue of the preconditioned coefficient matrix of ssSNPBLUP.

12711_2019_472_MOESM2_ESM.pdf (109.1KB, pdf)

Additional file 2. Proof of the sufficient condition.

Acknowledgements

The use of the high-performance cluster was made possible by CAT-AgroFood (Shared Research Facilities Wageningen UR, Wageningen, the Netherlands).

Authors' contributions

JV conceived the study design, ran the tests, and wrote the programs and the first draft. JV and CV discussed and developed the theory. HE prepared data. CV and MPLC provided valuable insights throughout the writing process. All authors read and approved the final manuscript.

Funding

This study was financially supported by the Dutch Ministry of Economic Affairs (TKI Agri & Food Project 16022) and the Breed4Food partners Cobb Europe (Colchester, Essex, United Kingdom), CRV (Arnhem, the Netherlands), Hendrix Genetics (Boxmeer, the Netherlands), and Topigs Norsvin (Helvoirt, the Netherlands).

Ethics approval and consent to participate

The data used for this study were collected as part of routine data recording for a commercial breeding program. Samples collected for DNA extraction were only used for the breeding program. Data recording and sample collection were conducted strictly in line with the Dutch law on the protection of animals (Gezondheids- en welzijnswet voor dieren).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Jeremie Vandenplas, Email: jeremie.vandenplas@wur.nl.

Mario P. L. Calus, Email: mario.calus@wur.nl.

Herwin Eding, Email: herwin.eding@crv4all.com.

Cornelis Vuik, Email: c.vuik@tudelft.nl.

References

1. Strandén I, Lidauer M. Solving large mixed linear models using preconditioned conjugate gradient iteration. J Dairy Sci. 1999;82:2779–2787. doi: 10.3168/jds.S0022-0302(99)75535-9.
2. Fernando RL, Cheng H, Garrick DJ. An efficient exact method to obtain GBLUP and single-step GBLUP when the genomic relationship matrix is singular. Genet Sel Evol. 2016;48:80. doi: 10.1186/s12711-016-0260-7.
3. Taskinen M, Mäntysaari EA, Strandén I. Single-step SNP-BLUP with on-the-fly imputed genotypes and residual polygenic effects. Genet Sel Evol. 2017;49:36. doi: 10.1186/s12711-017-0310-9.
4. Vandenplas J, Eding H, Calus MPL, Vuik C. Deflated preconditioned conjugate gradient method for solving single-step BLUP models efficiently. Genet Sel Evol. 2018;50:51. doi: 10.1186/s12711-018-0429-3.
5. Liu Z, Goddard M, Reinhardt F, Reents R. A single-step genomic model with direct estimation of marker effects. J Dairy Sci. 2014;97:5833–5850. doi: 10.3168/jds.2014-7924.
6. Legarra A, Ducrocq V. Computational strategies for national integration of phenotypic, genomic, and pedigree data in a single-step best linear unbiased prediction. J Dairy Sci. 2012;95:4629–4645. doi: 10.3168/jds.2011-4982.
7. Mäntysaari EA, Strandén I. Single-step genomic evaluation with many more genotyped animals. In: Proceedings of the 67th annual meeting of the European Association for Animal Production, 29 August–2 September 2016, Belfast; 2016.
8. Cornelissen MAMC, Mullaart E, Van der Linde C, Mulder HA. Estimating variance components and breeding values for number of oocytes and number of embryos in dairy cattle using a single-step genomic evaluation. J Dairy Sci. 2017;100:4698–4705. doi: 10.3168/jds.2016-12075.
9. CRV Animal Evaluation Unit. Management guides, E16: breeding value-temperament during milking; 2010. https://www.crv4all-international.com/wp-content/uploads/2016/03/E-16-Temperament.pdf. Accessed 15 Mar 2018.
10. CRV Animal Evaluation Unit. Statistical indicators, E-15: breeding value milking speed; 2017. https://www.crv4all-international.com/wp-content/uploads/2017/05/E_15_msn_apr-2017_EN.pdf. Accessed 15 Mar 2018.
11. Gengler N, Nieuwhof G, Konstantinov K, Goddard ME. Alternative single-step type genomic prediction equations. In: Proceedings of the 63rd annual meeting of the European Association for Animal Production, 27–31 August 2012, Bratislava; 2012.
12. Mäntysaari EA, Evans RD, Strandén I. Efficient single-step genomic evaluation for a multibreed beef cattle population having many genotyped animals. J Anim Sci. 2017;95:4728–4737. doi: 10.2527/jas2017.1912.
13. Fragomeni BO, Lourenco DAL, Masuda Y, Legarra A, Misztal I. Incorporation of causative quantitative trait nucleotides in single-step GBLUP. Genet Sel Evol. 2017;49:59. doi: 10.1186/s12711-017-0335-0.
14. Raymond B, Bouwman AC, Wientjes YCJ, Schrooten C, Houwing-Duistermaat J, Veerkamp RF. Genomic prediction for numerically small breeds, using models with pre-selected and differentially weighted markers. Genet Sel Evol. 2018;50:49. doi: 10.1186/s12711-018-0419-5.
15. Wang H, Misztal I, Aguilar I, Legarra A, Muir WM. Genome-wide association mapping including phenotypes from relatives without genotypes. Genet Res. 2012;94:73–83. doi: 10.1017/S0016672312000274.
16. Saad Y. Iterative methods for sparse linear systems. 2nd ed. Philadelphia: Society for Industrial and Applied Mathematics; 2003.
17. Frank J, Vuik C. On the construction of deflation-based preconditioners. SIAM J Sci Comput. 2001;23:442–462. doi: 10.1137/S1064827500373231.
18. Varga RS. Geršgorin and his circles. Springer series in computational mathematics. Berlin: Springer; 2004.
19. Paige C, Saunders M. Solution of sparse indefinite systems of linear equations. SIAM J Numer Anal. 1975;12:617–629. doi: 10.1137/0712047.
20. Kaasschieter EF. A practical termination criterion for the conjugate gradient method. BIT Numer Math. 1988;28:308–322. doi: 10.1007/BF01934094.


