Abstract
This paper considers estimation and prediction of a high-dimensional linear regression in the setting of transfer learning where, in addition to observations from the target model, auxiliary samples from different but possibly related regression models are available. When the set of informative auxiliary studies is known, an estimator and a predictor are proposed and their optimality is established. The optimal rates of convergence for prediction and estimation are faster than the corresponding rates without using the auxiliary samples. This implies that knowledge from the informative auxiliary samples can be transferred to improve the learning performance of the target problem. When the set of informative auxiliary samples is unknown, we propose a data-driven procedure for transfer learning, called Trans-Lasso, and show its robustness to non-informative auxiliary samples and its efficiency in knowledge transfer. The proposed procedures are demonstrated in numerical studies and are applied to a dataset concerning the associations among gene expressions. It is shown that Trans-Lasso leads to improved performance in gene expression prediction in a target tissue by incorporating data from multiple different tissues as auxiliary samples.
1. Introduction
Modern scientific research is characterized by massive and diverse data sets. It is of significant interest to integrate different data sets to make more accurate predictions and statistical inferences. Given a target problem to solve, transfer learning (Torrey and Shavlik, 2010) aims at transferring the knowledge from different but related samples to improve the learning performance of the target problem. A typical example of transfer learning is that one can improve the accuracy of recognizing cars by using not only the labeled data for cars but also some labeled data for trucks (Weiss et al., 2016). Besides classification, another important transfer learning problem is linear regression with auxiliary samples. In biomedical studies, some clinical or biological outcomes are hard to obtain due to ethical or cost issues, in which case transfer learning can be leveraged to boost the prediction and estimation performance by effectively utilizing information from related studies.
Transfer learning has been applied to problems in medical and biological studies, including predictions of protein localization (Mei et al., 2011), biological imaging diagnosis (Shin et al., 2016), drug sensitivity prediction (Turki et al., 2017), and integrative analysis of “multi-omics” data, see, for instance, Sun and Hu (2016), Hu et al. (2019), and Wang et al. (2019). It has also been applied to natural language processing (Daumé III, 2007) and recommendation systems (Pan and Yang, 2013) in machine learning. The application that motivated the present paper is the integration of the gene expression measurements in different tissues for understanding the gene regulations using the Genotype-Tissue Expression (GTEx) data (https://gtexportal.org/). These datasets are typically high-dimensional with relatively small sample sizes. When studying the gene regulation relationships of a specific tissue or cell-type, it is possible to incorporate information from other tissues to enhance the learning accuracy. This motivates us to consider transfer learning in high-dimensional linear regression.
1.1. Transfer Learning in High-dimensional Linear Regression
Regression analysis is one of the most widely used statistical methods to understand the association of an outcome with a set of covariates. In many modern applications, the dimension of the covariates is usually very high as compared to the sample size. Typical examples include genome-wide association and gene expression studies. In this paper, we consider transfer learning in high-dimensional linear models. Formally, the target model can be written as
y_i^(0) = (x_i^(0))^⊤ β + ε_i^(0),  (1)
where , i = 1, . . . , n0, are independent samples, is the coefficient vector of interest, and , i = 1, . . . , n0 are independently distributed random noises with . In the high-dimensional regime, where p can be much larger than n0, β is often assumed to be sparse such that the number of nonzero elements of β, denoted by s, is much smaller than p.
In the context of transfer learning, we observe additional samples from K auxiliary studies. That is, we observe samples generated from the auxiliary model
y_i^(k) = (x_i^(k))^⊤ w^(k) + ε_i^(k),  i = 1, . . . , n_k,  (2)
where is the regression vector for the k-th study, and is the random noise such that . The regression coefficients w(k) are unknown and different from our target β in general. The number of auxiliary studies, K, is allowed to grow but practically K may not be too large. We will study the estimation and prediction of target model (1) utilizing the primary data , i = 1, . . . , n0, as well as the data from K auxiliary studies , i = 1, . . . , nk, k = 1, . . . ,K.
If an auxiliary model is “similar” to the target model, we say that this auxiliary sample/study is informative. In this work, we characterize the informative level of the k-th auxiliary study using the sparsity of the difference between w(k) and β. Let δ(k) = β − w(k) denote the contrast between w(k) and β. The set of informative auxiliary samples are those whose contrasts are sufficiently sparse:
𝒜_q = {1 ≤ k ≤ K : ∥δ^(k)∥_q ≤ h}  (3)
for some q ∈ [0, 1]. The set contains the auxiliary studies whose contrast vectors have ℓq-sparsity at most h and is called the informative set. It will be seen later that as long as h is relatively small compared to the sparsity of β, the studies in can be useful in improving the prediction and estimation of β. In the case of q = 0, the set corresponds to the auxiliary samples whose contrast vectors have at most h nonzero elements. We also consider approximate sparsity constraints (q ∈ (0, 1]), which allow all of the coefficients to be nonzero as long as their magnitudes decay at a relatively rapid rate. For any q ∈ [0, 1], smaller h implies that the auxiliary samples in are more informative; larger cardinality of implies a larger number of informative auxiliary samples. Therefore, smaller h and larger should be favorable. We allow to be empty, in which case none of the auxiliary samples is informative. For the auxiliary samples outside of , we do not assume sparse δ(k) and hence w(k) can be very different from β for .
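As a concrete illustration of the membership rule in (3), the short sketch below (our own, not part of the paper's procedures) computes the informative set from known contrasts for a given q and h:

```python
import numpy as np

def informative_set(beta, W, h, q=1):
    # Identify A_q as in (3): study k is informative when its contrast
    # delta^(k) = beta - w^(k) has l_q "norm" at most h (for q = 0 we
    # count nonzero entries).  All names here are ours, for illustration.
    deltas = beta[None, :] - W            # (K, p) array of contrasts
    if q == 0:
        sparsity = (deltas != 0).sum(axis=1)
    else:
        sparsity = (np.abs(deltas) ** q).sum(axis=1) ** (1.0 / q)
    return {k for k in range(W.shape[0]) if sparsity[k] <= h}
```

For example, a study whose coefficients equal β except for a single small entry is informative under both q = 0 (with h ≥ 1) and q = 1 (with h at least that entry's magnitude).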
This characterization of the similarity between two high-dimensional regression models is motivated by assumptions commonly adopted in polygenic risk score (PRS) prediction and gene-expression partial-correlation analysis. In PRS prediction, for example, high-dimensional sparse regression models are commonly assumed (Mak et al., 2017). In addition, it has been observed that many complex traits have a shared genetic etiology, including various autoimmune diseases (Li et al., 2015; Zhernakova et al., 2009) and psychiatric disorders (Lee et al., 2013; Cross-Disorder Group of the Psychiatric Genomics Consortium, 2019). The similarity characterization we propose captures the sparse nature of genome-wide association data and the shared genetic etiology of multiple genetically related traits. In gene expression data analysis, one is interested in understanding how a set of genes regulates another gene based on data measured in different tissues. Such an analysis provides useful insights into gene regulatory networks, which are often sparse. In addition, many tissues have shared regulatory relationships among the genes (Pierson et al., 2015; Fagny et al., 2017). In such applications, we also expect sparse and similar regression coefficients in the models for different tissues.
There is a paucity of methods and fundamental theoretical results for high-dimensional linear regression in the transfer learning setting. In the case where the set of informative auxiliary samples is known, there is a lack of rate optimal estimation and prediction methods. A closely related topic is multi-task learning (Ando and Zhang, 2005; Lounici et al., 2009; Agarwal et al., 2012), where the goal is to estimate multiple models simultaneously. The multi-task learning considered in Lounici et al. (2009) estimates multiple high-dimensional sparse linear models under the assumption that the supports of all the regression coefficients are the same. In multi-task learning, different regularization formats have been considered to model the similarity among different studies (Chen et al., 2010; Danaher et al., 2014; Dondelinger et al., 2020).
The goal of transfer learning is however different, as one is only interested in estimating the target model, and this remains a largely unsolved problem. Cai and Wei (2021) studied the minimax and adaptive methods for nonparametric classification in the transfer learning setting under the assumption that all the auxiliary samples are similar to the target distribution (Cai and Wei, 2021, Definition 5). In the more challenging setting where the set is unknown, as is typical in real applications, it is unclear how to avoid the effects of adversarial auxiliary samples. Bastani (2020) studied estimation and prediction in high-dimensional linear models with one informative auxiliary study and q = 1, where the sample size of the auxiliary study is larger than the number of covariates. The current work considers more general scenarios under weaker assumptions. Specifically, the sample size of the auxiliary samples can be smaller than the number of covariates and some auxiliary studies can be non-informative, which is more realistic in applications. Additional challenges include the heterogeneity among the design matrices, which does not arise in conventional high-dimensional regression problems and hence requires novel proposals.
The problem we study here is certainly related to the high-dimensional prediction and estimation in the conventional settings where only samples from the target model are available. Several penalized or constrained minimization methods have been proposed for prediction and estimation for high-dimensional linear regression; see, for example, Tibshirani (1996); Fan and Li (2001); Zou (2006); Candes and Tao (2007); Zhang (2010). The minimax optimal rates for estimation and prediction are studied in Raskutti et al. (2011) and Verzelen (2012).
1.2. Our Contributions
In the setting where the informative set is known, we propose a transfer learning algorithm, called Oracle Trans-Lasso, for estimation and prediction under the target model, and prove its minimax optimality under mild conditions. The results demonstrate a faster rate of convergence when is non-empty and h is sufficiently smaller than s, in which case the knowledge from the informative auxiliary samples can be optimally transferred to substantially improve estimation and prediction of the regression problem under the target model.
In the more challenging setting where is unknown a priori, we introduce a data-driven algorithm, called Trans-Lasso, to adapt to the unknown . The adaptation is achieved by aggregating a number of candidate estimators. The desirable properties of the aggregation methods guarantee that the Trans-Lasso does not perform much worse than the best one among the candidate estimators. We construct the candidate estimators and demonstrate the robustness and the efficiency of Trans-Lasso under mild conditions. In terms of robustness, the Trans-Lasso is guaranteed to be not much worse than the Lasso estimator using only the primary samples, no matter how adversarial the auxiliary samples are. In terms of efficiency, the knowledge from a subset of the informative auxiliary samples can be transferred to the target problem under proper conditions. Furthermore, if the contrast vectors in the informative samples are sufficiently sparse, the Trans-Lasso estimator performs as if the informative set is known.
We further study the effect of heterogeneous designs in transfer learning, where the distributions of the design matrices differ across samples. The performance of the proposed algorithms is investigated theoretically and numerically in various settings.
1.3. Organization and Notation
The rest of this paper is organized as follows. Section 2 focuses on the setting where the informative set is known and with the sparsity in (3) measured in ℓ1-norm. A transfer learning algorithm is proposed for estimation and prediction of the target parameter and its minimax optimality is established. In Section 3, we study the estimation and prediction of the target model when is unknown for q = 1. In Section 4, we justify the theoretical performance of our proposals under heterogeneous designs. In Section 5, the numerical performance of the proposed methods is studied in various settings. In Section 6, the proposed algorithms are applied to the GTEx data to investigate the association of one gene with other genes in a target tissue by leveraging data measured on other related tissues or cell types. The proofs and results for ℓq-sparse contrasts with q ∈ [0, 1) are provided in the supplementary materials (Li et al., 2020).
We finish this section with notation. Let and denote the design matrix and the response vector for the primary data, respectively. Let and denote the design matrix and the response vector for the k-th auxiliary data, respectively. For a class of matrices , , we use to denote Rl, . Let . For a generic positive semi-definite matrix , let Λmax(Σ) and Λmin(Σ) denote the largest and smallest eigenvalues of Σ, respectively. Let Tr(Σ) denote the trace of Σ. Let ej be a vector such that its j-th element is 1 and all other elements are zero. Let a∨b denote max{a, b} and a∧b denote min{a, b}. We use c, c0, c1, . . . to denote generic constants which can be different in different statements. Let an = O(bn) and an ≲ bn denote |an/bn| ≤ c for some constant c when n is large enough. Let an ≍ bn denote |an/bn| → c for some constant c as n → ∞. Let an = OP (bn) denote that |an/bn| ≤ c with probability tending to one for some constant c < ∞. Let an = oP (bn) denote that |an/bn| > c with probability tending to zero for any constant c > 0.
2. Estimation with Known Informative Auxiliary Samples
We consider in this section transfer learning for high-dimensional linear regression when the informative set is known. The focus is on the ℓ1-sparse characterization of the contrast vectors. The notation will be abbreviated as in the sequel without special emphasis. Section C in the supplementary materials generalizes the sparse contrasts from ℓ1-constraint to ℓq-constraint for q ∈ [0, 1) and presents a rate-optimal estimator in this setting.
2.1. Oracle Trans-Lasso Algorithm
We propose a transfer learning algorithm, called Oracle Trans-Lasso, for estimation and prediction when is known. As an overview, we first compute an initial estimator using all the informative auxiliary samples. However, its probabilistic limit is biased from β as w(k) ≠ β in general. We then correct its bias using the primary data in the second step. Algorithm 1 formally presents our proposed Oracle Trans-Lasso algorithm.
Algorithm 1:
Oracle Trans-Lasso algorithm
Input : Primary data (X(0), y(0)) and informative auxiliary samples {(X(k), y(k)) : k ∈ 𝒜}
Output: β̂
Step 1. Compute
  ŵ = argmin_{w∈ℝp} { 1/(2(n𝒜 + n0)) Σ_{k∈𝒜∪{0}} ∥y(k) − X(k)w∥₂² + λw∥w∥₁ }
for λw = c1 √(log p/(n𝒜 + n0)) with some constant c1, where n𝒜 = Σ_{k∈𝒜} nk.
Step 2. Let
  β̂ = ŵ + δ̂,
where
  δ̂ = argmin_{δ∈ℝp} { 1/(2n0) ∥y(0) − X(0)(ŵ + δ)∥₂² + λδ∥δ∥₁ }
for λδ = c2 √(log p/n0) with some constant c2.
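The two steps can be sketched in a few lines; the following is our own minimal illustration using scikit-learn's Lasso (which solves the same 1/(2n)-scaled ℓ1-penalized least squares), with the tuning constants taken as rough assumptions rather than the theoretically calibrated choices of λw and λδ:

```python
import numpy as np
from sklearn.linear_model import Lasso

def oracle_trans_lasso(X0, y0, X_aux, y_aux, c1=0.5, c2=0.5):
    # Step 1: fit a Lasso on the pooled primary + informative auxiliary
    # samples to obtain the (biased) pooled estimate w_hat.
    X_pool = np.vstack([X0] + list(X_aux))
    y_pool = np.concatenate([y0] + list(y_aux))
    n_pool, p = X_pool.shape
    lam_w = c1 * np.sqrt(np.log(p) / n_pool)
    w_hat = Lasso(alpha=lam_w, fit_intercept=False).fit(X_pool, y_pool).coef_

    # Step 2: regress the primary residuals on X0 with a second Lasso to
    # estimate the bias (the contrast), then correct w_hat.
    n0 = X0.shape[0]
    lam_delta = c2 * np.sqrt(np.log(p) / n0)
    delta_hat = Lasso(alpha=lam_delta, fit_intercept=False).fit(
        X0, y0 - X0 @ w_hat).coef_
    return w_hat + delta_hat
```

In this sketch the Step 1 fit pools the primary and auxiliary data, so it converges quickly when the auxiliary sample size is large, while Step 2 only needs to estimate the (sparser) contrast from the primary data.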
In Step 1, is realized based on the Lasso (Tibshirani, 1996) using all the informative auxiliary samples. Its probabilistic limit is , which can be defined via the following moment condition
Denoting αk = nk/(n𝒜 + n0) for k ∈ 𝒜, the probabilistic limit has the following explicit form:

w𝒜 = β − Σ_{k∈𝒜} αk δ(k)  (7)
for and given that Σ(k) = Σ(0) for all . That is, the probabilistic limit of , , has bias , which is a weighted average of δ(k). Step 1 is related to the approach for high-dimensional misspecified models (Bühlmann and van de Geer, 2015) and moment estimators. The estimator converges relatively fast as the sample size used in Step 1 is relatively large. Step 2 corrects the bias, , using the primary samples. In fact, is a sparse high-dimensional vector whose ℓ1-norm is no larger than h. Hence, the error of Step 2 is under control for a relatively small h. The choice of the tuning parameters λw and λδ will be further specified in Theorem 1.
We compare the proposed Oracle Trans-Lasso method to the multi-task regression methods, see, e.g., Section 3.4.3 of Agarwal et al. (2012) and Danaher et al. (2014). The Oracle Trans-Lasso does not penalize the differences among the regression coefficients in the auxiliary studies. This is again because the focus of transfer learning is only the target study. Theoretically, extra penalization terms and the joint analysis of multiple estimators may not help improve the estimation accuracy of the parameter of interest.
2.2. Theoretical Properties of Oracle Trans-Lasso
Formally, the parameter space we consider can be written as
Θq(s, h) = {(β, w(1), . . . , w(K)) : ∥β∥0 ≤ s, max_{k∈𝒜q} ∥β − w(k)∥q ≤ h}  (8)
for and q ∈ [0, 1]. We study the rate of convergence for the Oracle Trans-Lasso algorithm under the following two conditions.
Condition 1.
For each , each row of X(k) is i.i.d. Gaussian distributed with mean zero and covariance matrix Σ. The smallest and largest eigenvalues of Σ are bounded away from zero and infinity, respectively.
Condition 2.
For each , is finite and the random noises are i.i.d. sub-Gaussian with mean zero and variance . For some constant C0, it holds that for all .
Condition 1 assumes Gaussian designs, which provides convenience for bounding the restricted eigenvalues of sample covariance matrices. Moreover, the designs are identically distributed for . This assumption simplifies some technical conditions and will be relaxed in Section 4. We mention that the conditions on the eigenvalues of Σ can be replaced with some eigenvalue conditions restricted to a convex cone. Condition 2 assumes sub-Gaussian random noises for primary and informative auxiliary samples and the second moment of the response vector is finite. Conditions 1 and 2 make no assumptions on the non-informative auxiliary samples as they are not used in the Oracle Trans-Lasso algorithm. In the next theorem, we prove the convergence rate of the Oracle Trans-Lasso. Let .
Theorem 1 (Convergence Rate of Oracle Trans-Lasso).
Assume that Condition 1 and Condition 2 hold true. Suppose that is known with and . We take and for some sufficiently large constants c1 and c2. If , then there exists some positive constant c1 such that
| (9) |
where B = {β,w(1), . . . ,w(K)} denotes all the unknown parameters. Theorem 1 provides the convergence rate of for any true parameters in Θ1(s, h) when an informative set is known. We illustrate Theorem 1 by contrasting it with the estimation results for the Lasso. First, the results of Theorem 1 hold under a weaker condition on s, i.e., when , while s log p = o(n0) is always assumed in single-task regression. Hence, the Oracle Trans-Lasso can deal with more challenging scenarios with a less sparse target parameter. Second, the right-hand side of (9) is sharper than the convergence rate of the Lasso, s log p/n0, if and . That is, if the informative auxiliary samples have contrast vectors sufficiently sparser than β and the total sample size is significantly larger than the primary sample size, then the knowledge from the auxiliary samples can significantly improve the learning performance of the target model. The condition for improvement, , allows a wide range of h. For example, the typical regime for single-task regression is s log p/n0 = O(1), which implies that can be as large as . Hence, the condition for improvement in Theorem 1 allows h to be as large as . The larger s is, the weaker the condition for improvement.
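The rate comparison can be made concrete with a back-of-the-envelope calculation. The sketch below (ours; the sizes are arbitrary assumptions) compares, at the order level and ignoring constants, the single-task rate s log p/n0 with the transfer rate using the h√(log p/n0) contrast term:

```python
import numpy as np

# Order-level rate comparison, constants ignored.  Representative sizes:
# a large total auxiliary sample nA and a small contrast budget h.
p, n0, nA = 5000, 200, 4000    # dimension, primary and auxiliary sample sizes
s, h = 20, 2                   # target sparsity and contrast l1-budget
lasso_rate = s * np.log(p) / n0
trans_rate = s * np.log(p) / (n0 + nA) + h * np.sqrt(np.log(p) / n0)
print(lasso_rate, trans_rate)
```

With these sizes the transfer rate is roughly half the single-task rate, illustrating the regime h ≪ s and nA ≫ n0 discussed above.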
The sample size requirement in Theorem 1 guarantees that the restricted eigenvalues of the sample covariance matrices in use are bounded away from zero with high probability. The proof of Theorem 1 involves an error analysis of and that of . While may be neither ℓ0- nor ℓ1-sparse, it can be decomposed into an ℓ0-sparse component plus an ℓ1-sparse component as illustrated in (7). Exploiting this sparse structure is a key step in proving Theorem 1. Regarding the choice of tuning parameters, λw depends on the second moment of , which can be consistently estimated by . The other tuning parameter λδ depends on the noise levels, which can be estimated by the scaled Lasso (Sun and Zhang, 2012). In practice, cross validation can be performed for selecting tuning parameters.
We now establish the minimax lower bound for estimating β in the transfer learning setup, which shows the minimax optimality of the Oracle Trans-Lasso algorithm in Θ1(s, h).
Theorem 2 (Minimax lower bound for q = 1).
Assume Condition 1 and Condition 2. If , then
for some positive constants c1 and c2.
Theorem 2 implies that obtained by the Oracle Trans-Lasso algorithm is minimax rate optimal in Θ1(s, h) under the conditions of Theorem 1. To understand the lower bound, the term is the optimal convergence rate when w(k) = β for all . This is an ideal case in which we have i.i.d. samples from the target model. The second term in the lower bound is the optimal convergence rate when w(k) = 0 for all , i.e., the auxiliary samples are not helpful at all. Let denote the ℓq-ball with radius r centered at zero. In this case, the definition of Θ1(s, h) implies that and the second term in the lower bound is indeed the minimax optimal rate for estimation when with n0 i.i.d. samples (Tsybakov, 2014).
3. Unknown Set of Informative Auxiliary Samples
The Oracle Trans-Lasso algorithm is based on the knowledge of the informative set . In some applications, the informative set is not given, which makes the transfer learning problem more challenging. In this section, we propose a data-driven method for estimation and prediction when is unknown. The proposed algorithm is described in detail in Sections 3.1 and 3.2. Its theoretical properties are studied in Section 3.3.
3.1. The Trans-Lasso Algorithm
Our proposed algorithm, called Trans-Lasso, consists of two main steps. First, we construct a collection of candidate estimators, each of which is based on an estimate of . Second, we perform an aggregation step (Rigollet and Tsybakov, 2011; Dai et al., 2012, 2018) on these candidate estimators. Under proper conditions, the aggregated estimator is guaranteed to be not much worse than the best candidate estimator under consideration in terms of prediction. For technical reasons, we need the candidate estimators and the samples for aggregation to be independent. Hence, we start with sample splitting. We need some more notation. For a generic estimate of β, b, denote its sum of squared prediction error as
where is a subset of {1, . . . , n0}. Let denote an L-dimensional simplex. The Trans-Lasso algorithm is presented in Algorithm 2.
As an illustration, steps 2 and 3 of the Trans-Lasso algorithm construct some initial estimates of β, . They are computed using the Oracle Trans-Lasso algorithm by treating each as the set of informative auxiliary samples. We construct to be some estimates of using the procedure provided in Section 3.2. Step 4 is based on the Q-aggregation proposed in Dai et al. (2012) with a uniform prior, a Kullback–Leibler penalty, and a simplified tuning parameter. The Q-aggregation can be viewed as a weighted version of least squares aggregation and exponential aggregation (Rigollet and Tsybakov, 2011), and it has been shown to be rate optimal both in expectation and with high probability for model selection aggregation problems.
Algorithm 2:
Trans-Lasso Algorithm
Input : Primary data (X(0), y(0)) and samples from K auxiliary studies .
Output:
Step 1. Let be a random subset of {1, . . ., n0} such that with some constant 0 < c0 < 1. Let .
Step 2. Construct L + 1 candidate sets , such that and are based on (14) using and .
Step 3. For each 0 ≤ l ≤ L, run the Oracle Trans-Lasso algorithm with primary sample and auxiliary samples . Denote the output as for 0 ≤ l ≤ L.
Step 4. Compute

for some λθ > 0. Output
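The aggregation in Step 4 can be approximated by simple exponential weighting on a held-out sample. The sketch below is our own simplification (pure exponential aggregation with an assumed tuning parameter lam), not the exact Q-aggregation estimator:

```python
import numpy as np

def aggregate(preds, y_holdout, lam=1.0):
    # Exponential-weights aggregation over candidate predictions: each
    # candidate's weight decays with its held-out sum of squared errors.
    # preds has shape (L+1, n): row l is the holdout prediction of the
    # l-th candidate estimator; lam plays the role of the tuning parameter.
    sse = ((preds - y_holdout[None, :]) ** 2).sum(axis=1)
    logits = -(sse - sse.min()) / lam     # shift for numerical stability
    theta = np.exp(logits)
    theta /= theta.sum()                  # weights lie on the simplex
    return theta, theta @ preds
```

Because the weights live on the simplex and concentrate on candidates with small held-out error, the aggregated predictor is never much worse than the best single candidate, which is the property the Trans-Lasso relies on.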
Model selection aggregation is an effective method for the transfer learning task under consideration. On one hand, it guarantees the robustness of Trans-Lasso in the following sense. Notice that corresponds to the single-task Lasso estimator and it is always included in our dictionary. The point is that, by the model selection aggregation property, the performance of is guaranteed to be not much worse than that of the original Lasso estimator under mild conditions. This shows that the performance of Trans-Lasso will not be ruined by adversarial auxiliary samples. Formal statements are provided in Section 3.3. On the other hand, the gain of Trans-Lasso relates to the qualities of . If
Ĝl ≠ ∅ and Ĝl ⊆ 𝒜 for some 0 ≤ l ≤ L,  (12)
i.e., is a non-empty subset of the informative set , then the model selection aggregation property implies that the performance of is not much worse than the performance of the Oracle Trans-Lasso with informative auxiliary samples. Ideally, one would like to achieve for some 1 ≤ l ≤ L with high probability. However, this can require strong assumptions that may not hold in practical situations.
To motivate our constructions of , let us first point out a naive construction of candidate sets, which consists of 2K candidates. These candidates are all different combinations of {1, . . . ,K}, denoted by . It is obvious that is an element of these candidate sets. However, the number of candidates is too large and can be computationally burdensome. Furthermore, the cost of aggregation would be high, of order K/n0, as will be seen in Lemma 1. In contrast, we would like to pursue a much smaller number of candidate sets such that the cost of aggregation is almost negligible and (12) can be achieved under mild conditions. We introduce our proposed construction of candidate sets in the next subsection.
3.2. Constructing the Candidate Sets for Aggregation
As illustrated in Section 3.1, the goal of Step 2 is to have a class of candidate sets, , that satisfy (12) under certain conditions. Our idea is to exploit the sparsity patterns of the contrast vectors. Recall that the definition of implies that are sparser than , where . This property motivates us to find a sparsity index R(k) and its estimator for each 1 ≤ k ≤ K such that
| (13) |
where is some subset of . In words, the sparsity indices in are no larger than the sparsity indices in and so are their estimators with high probability. To utilize (13), we can define the candidate sets as
Ĝl = {1 ≤ k ≤ K : R̂(k) is among the l smallest of R̂(1), . . . , R̂(K)}  (14)
for 1 ≤ l ≤ K. That is, is the set of auxiliary samples whose estimated sparsity indices are among the first l smallest. A direct consequence of (13) and (14) is that and hence the desirable property (12) is satisfied. To achieve the largest gain with transfer learning, we would like to find proper sparsity indices such that (13) holds for as large as possible. Notice that is always included as candidates according to (14). Hence, in the special cases where all the auxiliary samples are informative or none of the auxiliary samples are informative, it holds that and the Trans-Lasso is not much worse than the Oracle Trans-Lasso. The more challenging cases are .
As are not necessarily sparse, the estimation of δ(k) or functions of δ(k), 1 ≤ k ≤ K, is not trivial. As an example, an intuitive sparsity index can be ∥δ(k)∥1 and its estimate is , where is the Lasso estimate of w(k) based on the k-th study. However, such a Lasso-based estimate is not guaranteed to converge to the oracle ∥δ(k)∥1 when δ(k) is non-sparse. Therefore, we consider using , which is a function of the population-level marginal statistics, as the oracle sparsity index for the k-th auxiliary sample. The advantage of R(k) is that it has a natural unbiased estimate even when δ(k) is non-sparse. Let us relate R(k) to the sparsity of δ(k) using a Bayesian characterization of sparse vectors, assuming Σ(k) = Σ for all 0 ≤ k ≤ K. If are i.i.d. Laplacian distributed with mean zero and variance for each k, then it follows from the properties of the Laplacian distribution (Liu and Kozubowski, 2015) that . Hence, the rank of is the same as the rank of . As , it is reasonable to expect . The above derivation holds for many other zero-mean prior distributions besides the Laplacian. This illustrates our motivation for considering R(k) as the oracle sparsity index.
We next introduce the estimated version, , based on the primary data (after sample splitting) and auxiliary samples . We first perform a SURE screening (Fan and Lv, 2008) on the marginal statistics to reduce the effects of random noises. We summarize our proposal for Step 2 of the Trans-Lasso as follows (Algorithm 3). Let .
Algorithm 3:
Step 2 of the Trans-Lasso Algorithm
Step 2.1. For 1 ≤ k ≤ K, compute the marginal statistics

For each k ∈ {1, . . ., K}, let be obtained by SURE screening such that

for a fixed , 0 ≤ α < 1.
Step 2.2. Define the estimated sparsity index for the k-th auxiliary sample as

Step 2.3. Compute as in (14) for l = 1, . . ., L.
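Steps 2.1 and 2.2 can be sketched as follows. This is our own illustration: the marginal statistics are taken as differences of empirical marginal covariances, and details such as sample splitting and scaling are omitted:

```python
import numpy as np

def sparsity_indices(X0, y0, aux, t_star):
    # For each auxiliary study: form marginal statistics as the difference
    # between the auxiliary and primary empirical marginal covariances,
    # keep the t_star largest in magnitude (SURE screening), and sum their
    # squares to get the estimated sparsity index R_hat(k).
    marg0 = X0.T @ y0 / X0.shape[0]
    R_hat = []
    for Xk, yk in aux:
        d = Xk.T @ yk / len(yk) - marg0
        keep = np.argsort(np.abs(d))[-t_star:]   # screened coordinates
        R_hat.append(float(np.sum(d[keep] ** 2)))
    return np.array(R_hat)
```

A study whose coefficients are close to the target produces marginal differences that are pure noise, so its screened index is small; a non-informative study with large contrasts produces a markedly larger index.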
One can see that are empirical marginal statistics such that for . The set is the set of the t* largest marginal statistics for the k-th sample. The purpose of screening the marginal statistics is to reduce the magnitude of noise. Notice that the un-screened version is a sum of p random variables and it contains noise of order p/(nk ∧ n0), which diverges fast as p is much larger than the sample sizes. By screening with t* of order , α < 1, the errors induced by the random noises are under control. In practice, the auxiliary samples with very small sample sizes can be removed from the analysis as their contributions to the target problem are mild. Desirable choices of should preserve as much of the variation of Σδ(k) as possible. Under proper conditions, SURE screening can consistently select a set of strong marginal statistics and hence is appropriate for the current purpose. In Step 2.2, we compute based on the marginal statistics which are selected by SURE screening. In practice, different choices of t* may lead to different realizations of . One can compute multiple sets of with different t*, which give multiple sets of . It will be seen from Lemma 1 that a finite number of choices of t* does not affect the rate of convergence.
3.3. Theoretical Properties of Trans-Lasso
In this subsection, we derive the theoretical guarantees for the Trans-Lasso algorithm. We first establish the model selection aggregation type of results for the Trans-Lasso estimator .
Lemma 1 (Q-Aggregation for Trans-Lasso).
Assume that Condition 1 and Condition 2 hold true. Let be computed via (10) with . With probability at least 1 − t, it holds that
| (17) |
If L ≤ c1n0 for some small enough constant c1, then
| (18) |
Lemma 1 implies that the performance of only depends on the best candidate, regardless of the performance of the other candidates, under mild conditions. As commented before, this result guarantees the robustness and efficiency of Trans-Lasso, which can be formally stated as follows. As the original Lasso is always in our dictionary, (17) and (18) imply that is not much worse than the Lasso in prediction and estimation. Formally, “not much worse” refers to the last term in (17), which can be viewed as the cost of “searching” for the best candidate model within the dictionary and is of order log L/n0. This term is almost negligible, say, when L = O(K), which corresponds to our constructed candidate estimators. This demonstrates the robustness of to adversarial auxiliary samples. Furthermore, if (12) holds, then the prediction and estimation errors of Trans-Lasso are comparable to those of the Oracle Trans-Lasso using the auxiliary samples in .
The prediction error bound in (17) follows from Corollary 3.1 in Dai et al. (2012). However, the aggregation methods do not have theoretical guarantees in estimation errors in general. Indeed, an estimator with ℓ2-error guarantee is crucial for more challenging tasks, such as out-of-sample prediction and inference. For our transfer learning task, we show in (18) that the estimation error is of the same order if the cardinality of the dictionary is L ≤ cn0 for some small enough c. For our constructed dictionary, it suffices to require K ≤ cn0. In many practical applications, K is relatively small compared to the sample sizes and hence this assumption is not very restrictive.
In the following, we provide sufficient conditions such that the desirable property (13) holds with defined in (16) and hence (12) is satisfied. For each , define a set
| (19) |
Recall that α < 1 is defined such that t* = n^α. In fact, Hk is the set of “strong” marginal statistics that can be consistently selected into for each . We see that if Σ(k) = Σ(0) for . The definition of Hk in (19) allows for heterogeneous designs among the non-informative auxiliary samples.
Condition 3.
(a) For each , each row of X(k) is i.i.d. Gaussian with mean zero and covariance matrix Σ(k) and is finite. For each , the random noises are i.i.d. Gaussian with mean zero and variance and is finite.
(b) It holds that for a small enough constant c1. Moreover,
| (20) |
for some constant c2 > 0.
The Gaussian assumptions in Condition 3(a) guarantee the desirable properties of SURE screening for the non-informative auxiliary studies. In fact, the largest eigenvalue of Σ(k) can grow as for some τ ≥ 0 with τ + α < 1, following the proof in Fan and Lv (2008). The Gaussian assumption can be relaxed to sub-Gaussian random variables according to some recent studies (Ahmed and Bajwa, 2019). For conciseness of the proofs, we consider Gaussian distributed random variables with bounded eigenvalues. Condition 3(b) puts a constraint on the relative dimensions. It holds trivially in the regime that for any finite ξ > 0. The expression (20) requires that for each , there exists a subset of strong marginal statistics with not-so-small cardinality. This condition is mild when α is chosen such that , and α = 1/2 is an obvious choice invoking the first part of Condition 3(b). For instance, if , then (20) holds with any α ≤ 1/2. In words, a sufficient condition for (20) is that at least one marginal statistic in the k-th study is of constant order for . We see that a larger n* makes Condition 3 weaker. As mentioned before, it is helpful to remove the auxiliary samples with very small sample sizes from the analysis.
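As a point of reference, marginal screening in the sense of Fan and Lv (2008) can be sketched as follows. The paper applies the idea to contrast-scale statistics with threshold t* = n^α, so this plain response-scale version is only illustrative.

```python
import numpy as np

def sure_screen(X, y, alpha=0.5):
    """Sure independence screening (Fan and Lv, 2008): rank covariates by
    absolute marginal covariance with the response and keep the top
    t* = floor(n**alpha) indices.  A minimal sketch; Trans-Lasso screens
    on contrast-scale statistics rather than these raw marginals.
    """
    n = X.shape[0]
    t_star = max(1, int(n ** alpha))
    marginal = np.abs(X.T @ y) / n    # marginal statistics |x_j' y| / n
    order = np.argsort(-marginal)     # indices sorted by decreasing strength
    return set(order[:t_star])
```

With α = 1/2, the retained set has cardinality about √n, matching the "obvious choice" discussed above.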
In the next theorem, we demonstrate the theoretical properties of and provide a complete analysis of the Trans-Lasso algorithm. Let be a subset of such that
for some small constant c1 < 1 and Hk defined in (19). In general, informative auxiliary samples with sparser δ(k) are more likely to be included in . Specifically, the fact that implies when h is sufficiently small. We will show (13) for such with defined in (16). Let .
Theorem 3 (Convergence Rate of the Trans-Lasso).
Assume Conditions 1, 2, and 3. Then
| (21) |
Let be computed using the Trans-Lasso algorithm with . If and K ≤ cn0 for a sufficiently small constant c > 0, then
| (22) |
as .
Remark 1.
Under the conditions of Theorem 3, if
then and as
Theorem 3 establishes the convergence rate of the Trans-Lasso when is unknown. The result in (21) implies that the estimated sparse indices in and in are separated with high probability. As illustrated before, a consequence of (21) is (12) for the candidate sets defined in (14). Together with Theorem 1 and Lemma 1, we arrive at (22).
It is worth mentioning that Condition 3 is only employed to show the gain of Trans-Lasso. The robustness property of Trans-Lasso holds without any conditions on the non-informative samples (Lemma 1). In practice, missing a few informative auxiliary samples may not be a grave concern. One can see that when is large enough such that the first term on the right-hand side of (22) no longer dominates, increasing the number of auxiliary samples will not improve the convergence rate. In contrast, it is more important to guarantee that the estimator is not affected by the adversarial auxiliary samples. The empirical performance of Trans-Lasso is carefully studied in Section 5.
4. Extensions to Heterogeneous Designs
In this section, we extend the algorithms and theoretical results developed in Sections 2 and 3 to the case where the covariates have different covariance structures in different studies.
The Oracle Trans-Lasso algorithm proposed in Section 2 can be directly applied to the setting where the design matrices are moderately heterogeneous. Formally, we first introduce a relaxed version of Condition 1 as follows. Define
which characterizes the differences between Σ(k) and Σ(0) for . Notice that CΣ is a constant if for all , where examples include block diagonal Σ(k) with constant block sizes or banded Σ(k) with constant bandwidths for .
Condition 4.
For each , each row of X(k) is i.i.d. Gaussian with mean zero and covariance matrix Σ(k). The smallest eigenvalues of Σ(k) are bounded away from zero for all . The largest eigenvalue of Σ(0) is bounded away from infinity.
The following theorem characterizes the rate of convergence of the Oracle Trans-Lasso estimator in terms of CΣ. Let .
Theorem 4 (Oracle Trans-Lasso with heterogeneous designs).
Assume that Condition 2 and Condition 4 hold true. Suppose is known with and . We take λw and λδ as in Theorem 1. If , then
| (23) |
The right-hand side of (23) is sharper than s log p/n0 if and . We see that a small CΣ is favorable. This implies that the Oracle Trans-Lasso is guaranteed to perform well when the contrasts are sparse and the covariance matrices are similar to that of the target.
We now provide theoretical guarantees for the Trans-Lasso with heterogeneous designs when is unknown. In this case, the sparsity index R(k) takes the form . It measures not only the sparsity of δ(k) but also the covariance heterogeneity. We consider , a subset of such that
for some c1 < 1 and Hk defined in (19). This is a generalization of to the case of heterogeneous designs.
Corollary 1 (Trans-Lasso with heterogeneous designs).
Assume Conditions 2, 3, and 4. Let be computed via the Trans-Lasso algorithm with . If and K ≤ cn0 for a small enough constant c, then
as .
Corollary 1 provides an upper bound for the Trans-Lasso with heterogeneous designs. The numerical experiments for this setting are studied in Section 5.
5. Simulation Studies
In this section, we evaluate the empirical performance of the proposed methods and some comparable alternatives in various numerical experiments. Specifically, we evaluate five methods: the Lasso, the Oracle Trans-Lasso proposed in Section 2.1, the Trans-Lasso proposed in Section 3.1, and two other ad hoc transfer learning methods related to ours. The first implements Trans-Lasso except that the bias-correction step (Step 2) of the Oracle Trans-Lasso is omitted. We call this method the “aggregated Lasso” (Agg-Lasso), as it implements our proposed adaptive aggregation step and applies the Lasso to each candidate set. The purpose is to understand the necessity of the bias-correction step in the Oracle Trans-Lasso. The second follows the steps of Trans-Lasso but uses a different aggregation step. Specifically, we consider , k = 1, . . . , K, where and are the Lasso estimators based on each of the corresponding studies. Moreover, the Q-aggregation step is replaced with cross-validation, where we select the set that minimizes the out-of-sample prediction error. We call this algorithm “Ad hoc ℓ1-transfer”. The purpose of including this method is to understand the performance of our proposal based on SURE screening and Q-aggregation. In the Supplementary Materials, we report the performance of the estimated sparse indices based on Trans-Lasso and Ad hoc ℓ1-transfer. The R code for all the methods is available at https://github.com/saili0103/TransLasso.
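To fix ideas before the experiments, here is a minimal sketch of the two-step structure (pooled fit followed by bias correction) that the Oracle Trans-Lasso of Section 2 follows, using a plain coordinate-descent Lasso. The penalty levels and iteration count are illustrative placeholders, not the theoretical choices of Theorem 1.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam*||b||_1.
    Illustrative only: no standardization or convergence check."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r_j
            beta[j] = np.sign(z) * max(abs(z) - n * lam, 0) / col_sq[j]
    return beta

def oracle_trans_lasso(X0, y0, X_aux, y_aux, lam_w=0.05, lam_delta=0.05):
    """Two-step structure of the Oracle Trans-Lasso (Section 2), sketched:
    (1) fit a Lasso on the pooled target and informative auxiliary samples;
    (2) correct the bias by a Lasso fit of the target residuals."""
    X_pool = np.vstack([X0] + list(X_aux))
    y_pool = np.concatenate([y0] + list(y_aux))
    w = lasso_cd(X_pool, y_pool, lam_w)            # pooled step
    delta = lasso_cd(X0, y0 - X0 @ w, lam_delta)   # bias-correction step
    return w + delta
```

Omitting the second call recovers the structure of the Agg-Lasso variant described above, which is exactly the comparison the simulations are designed to probe.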
5.1. Identity Covariance Matrix for the Designs
We consider p = 500, n0 = 150, and n1 = · · · = nK = 100 with K = 20. The covariates are i.i.d. Gaussian with mean zero and identity covariance matrix for all 0 ≤ k ≤ K and are i.i.d. Gaussian with mean zero and variance one for all 0 ≤ k ≤ K. For the target parameter β, we set s = 16, βj = 0.3 for j ∈ {1, . . . , s}, and βj = 0 otherwise. For the regression coefficients in the auxiliary samples, we consider two configurations.
(i) For a given , if , let
where Hk is a random subset of [p] with |Hk| = h ∈ {2, 6, 12}. If , we set Hk to be a random subset of [p] with |Hk| = 2s and . We set for k = 1, . . . , K.
(ii) For a given , if , let Hk = {1, . . . , 100} and
where h ∈ {2, 6, 12} and N(a, b) denotes the normal distribution with mean a and standard deviation b. If , we set Hk = {1, . . . , 100} and
We set for k = 1, . . . , K.
Setting (i) can be treated as either ℓ0- or ℓ1-sparse contrasts. In practice, the true parameters are unknown and we use to denote the set of auxiliary samples without distinguishing ℓ0- from ℓ1-sparsity. We consider .
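A hedged sketch of the data generation for configuration (i): the displayed formulas for the contrasts are not reproduced in the text above, so the contrast magnitude `sig` and the random sign pattern below are assumptions for illustration only.

```python
import numpy as np

def simulate_setting_i(h, informative, p=500, s=16, n=100, sig=0.3, rng=None):
    """Generate one auxiliary study under configuration (i) of Section 5.1.
    The target has beta_j = 0.3 for j <= s.  The contrast magnitude `sig`
    and the +/- signs on the random support Hk are illustrative guesses,
    since the displayed contrast formulas were not recoverable."""
    rng = rng or np.random.default_rng()
    beta = np.zeros(p); beta[:s] = 0.3
    size = h if informative else 2 * s            # |Hk| = h or 2s
    Hk = rng.choice(p, size=size, replace=False)  # random contrast support
    w = beta.copy()
    w[Hk] += sig * rng.choice([-1, 1], size=size) # auxiliary coefficients
    X = rng.normal(size=(n, p))                   # identity covariance design
    y = X @ w + rng.normal(size=n)                # unit-variance noise
    return X, y, w
```

The key structural point, faithful to the text, is that informative studies differ from β on only h coordinates while non-informative ones differ on 2s coordinates.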
In Figure 1, we report the sum of squared estimation errors (SSE) for each estimator . Each point is summarized from 200 independent simulations. As expected, the performance of the Lasso does not change as increases. On the other hand, all four transfer learning-based algorithms have estimation errors that decrease as increases. As h increases, the problem gets harder and the estimation errors of all four methods increase. In settings (i) and (ii), the Oracle Trans-Lasso has the smallest estimation errors in most settings. The proposed Trans-Lasso, which is agnostic to , is always the second best. The gap between the Oracle Trans-Lasso and the Trans-Lasso is a result of the uncertainty of aggregation and the sample splitting used for constructing the initial estimators. We also observe that when , the Trans-Lasso can have smaller errors than the Oracle Trans-Lasso, as the latter does not use auxiliary information in this case. This implies that some auxiliary information can still be borrowed. Due to the randomness of the parameter generation, our definition of may not always be the subset of auxiliary samples that gives the smallest estimation errors.
Figure 1.
Estimation errors of the Ad hoc ℓ1-transfer, Agg-Lasso, Lasso, Oracle Trans-Lasso, and Trans-Lasso with identity covariance matrices of the predictors. The two rows correspond to configurations (i) and (ii), respectively. The y-axis corresponds to for some estimator b.
Among the two variants, Ad hoc ℓ1-transfer is also adaptive but has slightly larger estimation errors than Trans-Lasso when h is large. This demonstrates the advantage of Q-aggregation with our proposed sparsity index over the cross-validation type of aggregation with ℓ1-distance based sparsity index. The Agg-Lasso method has larger estimation errors than Trans-Lasso and Ad hoc ℓ1-transfer, even when h is small. This demonstrates the necessity of the bias-correction step in the Oracle Trans-Lasso.
5.2. Homogeneous Designs among
We now consider as i.i.d. Gaussian with mean zero and an equi-correlated covariance matrix, where Σj,j = 1 and Σj,k = 0.8 if j ≠ k for . For , are i.i.d. Gaussian with mean zero and a Toeplitz covariance matrix whose first row is
| (24) |
Other true parameters and the dimensions of the samples are set to be the same as in Section 5.1. From the results presented in Figure 2, we see that the Trans-Lasso and Oracle Trans-Lasso have reliable performance in the current setting. The average estimation errors are larger in Figure 2 than those in Section 5.1 as the covariates are highly correlated in the current setting. When h is relatively large, we see that Agg-Lasso and Ad hoc ℓ1-transfer have significantly larger estimation errors than Trans-Lasso. This again demonstrates the advantage of Trans-Lasso over some ad hoc methods.
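The correlated designs of Sections 5.2 and 5.3 can be drawn as follows. The 0.8 equi-correlation matches the text above; `toeplitz_cov` accepts any first row, since the row specified in (24) is not reproduced here.

```python
import numpy as np

def gaussian_design(n, Sigma, rng=None):
    """Draw n i.i.d. rows from N(0, Sigma) via a Cholesky factor."""
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(Sigma)
    return rng.normal(size=(n, Sigma.shape[0])) @ L.T

def equicorrelated(p, rho=0.8):
    """Equi-correlated covariance of Section 5.2: 1 on the diagonal, rho off."""
    return (1 - rho) * np.eye(p) + rho * np.ones((p, p))

def toeplitz_cov(first_row):
    """Toeplitz covariance built from its first row; the specific row in
    (24) is not reproduced in the text, so any valid row can be supplied."""
    p = len(first_row)
    idx = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return np.asarray(first_row)[idx]
```

Heterogeneous designs as in Section 5.3 are then obtained simply by supplying a different covariance matrix for each study k.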
Figure 2.
Estimation errors of the Ad hoc ℓ1-transfer, Agg-Lasso, Lasso, Oracle Trans-Lasso, and Trans-Lasso with homogeneous covariance matrices. The two rows correspond to configurations (i) and (ii), respectively. The y-axis corresponds to for some estimator b.
5.3. Heterogeneous Designs
We next consider a setting where the Σ(k) are distinct for k = 0, . . . , K. Specifically, for k = 1, . . . , K, let be i.i.d. Gaussian with mean zero and a Toeplitz covariance matrix whose first row is given by (24). Moreover, . Other parameters and the dimensions of the samples are set to be the same as in Section 5.1. Figure 3 shows that the general patterns observed under homogeneous designs still hold. The Trans-Lasso again gives the best estimation performance under heterogeneous designs compared with the alternative methods.
Figure 3.
Estimation errors of the Ad hoc ℓ1-transfer, Agg-Lasso, Lasso, Oracle Trans-Lasso, and Trans-Lasso with heterogeneous covariance matrices. The two rows correspond to configurations (i) and (ii), respectively. The y-axis corresponds to for some estimator b.
6. Application to Genotype-Tissue Expression Data
In this section, we demonstrate the performance of our proposed transfer learning algorithms in analyzing the Genotype-Tissue Expression (GTEx) data (https://gtexportal.org/). Overall, the data sets measure gene expression levels from 49 tissues of 838 human donors, in total comprising 1,207,976 observations of 38,187 genes. In our analysis, we focus on genes related to the central nervous system (CNS), which were assembled as MODULE_137 (https://www.gsea-msigdb.org/gsea/msigdb/cards/MODULE_137.html). This module includes a total of 545 genes, plus an additional 1,632 genes that are significantly enriched in the same experiments as the genes of the module. A complete list of genes can be found at http://robotics.stanford.edu/~erans/cancer/modules/module_137.
6.1. Data Analysis Method
It is of biological interest to understand the CNS gene regulations in different tissues/cell types. Statistically, we consider predicting the expression levels of a target gene using other CNS genes in multiple tissues. Such an analysis provides insights on how other genes regulate the expression of a target gene. To demonstrate the replicability of our proposal, we consider multiple target genes and multiple target tissues and estimate their corresponding models one by one.
For an illustration of the computation process, we consider the gene JAM2 (junctional adhesion molecule B) as the response variable. JAM2 is a protein-coding gene on chromosome 21 that interacts with a variety of immune cell types and may play a role in lymphocyte homing to secondary lymphoid organs (Johnson-Léger et al., 2002). Mutations in JAM2 have been found to cause primary familial brain calcification (Cen et al., 2020; Schottlaender et al., 2020). We consider the association between JAM2 and other CNS genes in a brain tissue as the target model and the associations between JAM2 and other CNS genes in other tissues as the auxiliary models. As there are multiple brain tissues in the dataset, we treat each of them as the target in turn. The list of target tissues can be found in Figure 4. The minimum, average, and maximum of the primary sample sizes in these target tissues are 126, 177, and 237, respectively. More information on the target tissues is given in the Supplementary Materials. JAM2 expresses in 49 tissues in our dataset and we use the 47 tissues with more than 120 measurements on JAM2. The average number of auxiliary samples for each target model is 14,837 over all the non-target tissues. The covariates in use are the genes that are in the enriched MODULE_137 and have no missing values in any of the 47 tissues. The final covariates include a total of 1,079 genes. The data are standardized before analysis.
Figure 4.
Prediction errors of Agg-Lasso, Naive Trans-Lasso, Trans-Lasso, and Ad hoc ℓ1-transfer relative to the Lasso evaluated via 5-fold cross validation for gene JAM2 in multiple tissues.
We compare the prediction performance of the Trans-Lasso with the Lasso, Agg-Lasso, Ad hoc ℓ1-transfer, and Naive Trans-Lasso. Implementation of the first four methods is the same as in Section 5. The Naive Trans-Lasso implements the Oracle Trans-Lasso algorithm assuming all the auxiliary studies are informative. Evaluating this method helps us understand the overall informativeness of the auxiliary samples. We split the target sample into five folds, use four folds to train the algorithms, and use the remaining fold to test their prediction performance. We repeat this process five times, each with a different fold of test samples. We mention that one individual can provide expression measurements on multiple tissues, and these measurements are unlikely to be independent. While the dependence of the samples can reduce the efficiency of the estimation algorithms, using auxiliary samples may still be beneficial. However, one needs to choose proper tuning parameters. The tuning parameter for the Lasso and λw are chosen by 8-fold cross-validation. The tuning parameter λδ is set to be . Other tuning parameters and configurations are the same as for the simulations.
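The evaluation protocol above (held-out prediction error relative to the Lasso, as plotted in Figure 4) can be sketched generically. Here `fit` and `fit_baseline` are hypothetical callables that map training data to a coefficient vector; the fold construction is a plain illustration, not the authors' exact split.

```python
import numpy as np

def relative_cv_error(fit, fit_baseline, X, y, n_folds=5, seed=0):
    """K-fold out-of-sample prediction error of a method relative to a
    baseline (the Lasso in Section 6).  Values below 1 indicate that the
    method predicts better than the baseline."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    err, err_base = 0.0, 0.0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        b = fit(X[train], y[train])            # method under evaluation
        b0 = fit_baseline(X[train], y[train])  # baseline fit
        err += np.mean((y[test] - X[test] @ b) ** 2)
        err_base += np.mean((y[test] - X[test] @ b0) ** 2)
    return err / err_base
```

In Figures 4 and 5, each bar corresponds to such a ratio, so the reported 22% average gain of Trans-Lasso corresponds to ratios around 0.78.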
6.2. Prediction Performance of the Trans-Lasso for JAM2 Expression
Figure 4 shows the prediction errors of different methods for predicting the expression of JAM2 using other genes. We see that all the transfer learning methods under consideration improve over the Lasso in most experiments. The performance of the Naive Trans-Lasso implies that there is heterogeneity among tissues and some auxiliary studies can be non-informative. Hence, adaptation to the unknown is important. Among the adaptive transfer learning methods, the Trans-Lasso achieves the smallest prediction errors in almost all the experiments. Its average gain is 22% compared to the Lasso. This shows that our characterization of the similarity between a target model and a given auxiliary model is suitable for the current problem. Agg-Lasso gives similar prediction errors to the Trans-Lasso in most of the tissues but has significantly worse performance for the Cortex, Hippocampus, and Pituitary tissues. The average proportions of variance explained by the Lasso and by the Trans-Lasso are 0.75 and 0.80, respectively, indicating improved fit from transfer learning.
6.3. Prediction Performance of Other 25 Genes on Chromosome 21
To demonstrate the replicability of our proposal, we also consider the other genes on chromosome 21 that are in MODULE_137 as target genes. We report the overall prediction performance for these 25 genes in Figure 5. A complete list of these genes and some summary information can be found in the Supplementary Materials. Generally speaking, the Trans-Lasso has the best overall performance across all the target tissues compared with the two other related methods, Agg-Lasso and Ad hoc ℓ1-transfer. The deteriorating performance of the Naive Trans-Lasso implies that adaptation to the unknown informative set is crucial for successful knowledge transfer.
Figure 5.
Prediction errors of Ad hoc ℓ1-transfer, Agg-Lasso, Naive Trans-Lasso*, and Trans-Lasso relative to the Lasso for the 25 genes on chromosome 21 and in Module 137, in multiple target tissues. The Naive Trans-Lasso has two outliers for the tissue Cerebellum not showing in the figure with values 1.61 and 1.95.
7. Discussion
This paper studies high-dimensional linear regression in the presence of auxiliary samples. The similarity of the target model and a given auxiliary model is characterized by the sparsity of their contrast vectors. Transfer learning algorithms for estimation and prediction are developed that are adaptive to the unknown informative set. Numerical experiments and GTEx data analysis support the theoretical findings and demonstrate its effectiveness in applications.
In the machine learning literature, transfer learning methods have been proposed for many different purposes, but few have statistical guarantees. Several interesting problems related to the present paper deserve further research. First, transfer learning in nonlinear models can be studied. Using our similarity characterization of the auxiliary studies, transfer learning in high-dimensional generalized linear models (GLMs) can be formulated; GLMs include the logistic and Poisson models that are widely used for classification. The main challenge is that the moment equation above (7) is nonlinear and the resulting is not necessarily sparse. Hence, transfer learning beyond linear models remains an open problem and can be studied under different characterizations of the similarity structure. Second, it is of interest to study statistical inference, such as constructing confidence intervals and hypothesis testing with auxiliary samples. Given the results derived in this paper, one may expect weaker sample size conditions in the transfer learning setting than in the single-task setting. It is interesting to provide a precise characterization and to develop a minimax optimal confidence interval in the transfer learning setting.
Supplementary Material
Acknowledgments
This research was supported by NIH grants GM129781 and GM123056 and NSF Grant DMS-1712735.
Contributor Information
Sai Li, Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104.
T. Tony Cai, Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104.
Hongzhe Li, Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104.
References
- Agarwal A, Negahban S, and Wainwright MJ (2012). Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. The Annals of Statistics 40(2), 1171–1197.
- Ahmed T and Bajwa WU (2019). ExSIS: Extended sure independence screening for ultrahigh-dimensional linear models. Signal Processing 159, 33–48.
- Ando RK and Zhang T (2005). A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research 6, 1817–1853.
- Bastani H (2020). Predicting with proxies: Transfer learning in high dimension. Management Science 67(5), 2657–3320.
- Bühlmann P and van de Geer S (2015). High-dimensional inference in misspecified linear models. Electronic Journal of Statistics 9(1), 1449–1473.
- Cai TT and Wei H (2021). Transfer learning for nonparametric classification: Minimax rate and adaptive classifier. The Annals of Statistics 49(1), 100–128.
- Candes E and Tao T (2007). The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics 35(6), 2313–2351.
- Cen Z, Chen Y, Chen S, et al. (2020). Biallelic loss-of-function mutations in JAM2 cause primary familial brain calcification. Brain 143(2), 491–502.
- Chen X, Kim S, Lin Q, Carbonell JG, and Xing EP (2010). Graph-structured multi-task regression and an efficient optimization method for general fused lasso. arXiv preprint arXiv:1005.3579.
- Cross-Disorder Group of the Psychiatric Genomics Consortium (2019). Genomic relationships, novel loci, and pleiotropic mechanisms across eight psychiatric disorders. Cell 179(7), 1469–1482.
- Dai D, Han L, Yang T, and Zhang T (2018). Bayesian model averaging with exponentiated least squares loss. IEEE Transactions on Information Theory 64(5), 3331–3345.
- Dai D, Rigollet P, and Zhang T (2012). Deviation optimal learning using greedy Q-aggregation. The Annals of Statistics 40(3), 1878–1905.
- Danaher P, Wang P, and Witten DM (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76(2), 373–397.
- Daumé III H (2007). Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 256–263.
- Dondelinger F, Mukherjee S, and the Alzheimer's Disease Neuroimaging Initiative (2020). The joint lasso: High-dimensional regression for group structured data. Biostatistics 21(2), 219–235.
- Fagny M, Paulson JN, Kuijjer ML, et al. (2017). Exploring regulation in tissues with eQTL networks. Proceedings of the National Academy of Sciences 114(37), E7841–E7850.
- Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360.
- Fan J and Lv J (2008). Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70(5), 849–911.
- Hu Y, Li M, Lu Q, et al. (2019). A statistical framework for cross-tissue transcriptome-wide association analysis. Nature Genetics 51(3), 568–576.
- Johnson-Léger CA, Aurrand-Lions M, Beltraminelli N, et al. (2002). Junctional adhesion molecule-2 (JAM-2) promotes lymphocyte transendothelial migration. Blood 100(7), 2479–2486.
- Lee SH, Ripke S, Neale BM, et al. (2013). Genetic relationship between five psychiatric disorders estimated from genome-wide SNPs. Nature Genetics 45, 984–994.
- Li S, Cai TT, and Li H (2020). Supplement to “Transfer learning for high-dimensional linear regression: Prediction, estimation, and minimax optimality”.
- Li YR, Li J, Zhao SD, et al. (2015). Meta-analysis of shared genetic architecture across ten pediatric autoimmune diseases. Nature Medicine 21, 1018–1027.
- Liu Y and Kozubowski TJ (2015). A folded Laplace distribution. Journal of Statistical Distributions and Applications 2(1), 1–17.
- Lounici K, Pontil M, and Tsybakov AB (2009). Taking advantage of sparsity in multi-task learning. arXiv preprint arXiv:0903.1468.
- Mak TSH, Porsch RM, Choi SW, Zhou X, and Sham PC (2017). Polygenic scores via penalized regression on summary statistics. Genetic Epidemiology 41(6), 469–480.
- Mei S, Fei W, and Zhou S (2011). Gene ontology based transfer learning for protein subcellular localization. BMC Bioinformatics 12, 44.
- Pan W and Yang Q (2013). Transfer learning in heterogeneous collaborative filtering domains. Artificial Intelligence 197, 39–55.
- Pierson E, Koller D, Battle A, et al. (2015). Sharing and specificity of co-expression networks across 35 human tissues. PLoS Computational Biology 11(5), e1004220.
- Raskutti G, Wainwright MJ, and Yu B (2011). Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Transactions on Information Theory 57(10), 6976–6994.
- Rigollet P and Tsybakov A (2011). Exponential screening and optimal rates of sparse estimation. The Annals of Statistics 39(2), 731–771.
- Schottlaender LV, Abeti R, Jaunmuktane Z, et al. (2020). Bi-allelic JAM2 variants lead to early-onset recessive primary familial brain calcification. The American Journal of Human Genetics 106(3), 412–421.
- Shin H-C, Roth HR, Gao M, et al. (2016). Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging 35(5), 1285–1298.
- Sun T and Zhang C-H (2012). Scaled sparse linear regression. Biometrika 99(4), 879–898.
- Sun YV and Hu Y-J (2016). Integrative analysis of multi-omics data for discovery and functional studies of complex human diseases. In Advances in Genetics, Volume 93, pp. 147–190.
- Tibshirani R (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58(1), 267–288.
- Torrey L and Shavlik J (2010). Transfer learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pp. 242–264. IGI Global.
- Tsybakov AB (2014). Aggregation and minimax optimality in high-dimensional estimation. In Proceedings of the International Congress of Mathematicians, Volume 3, pp. 225–246.
- Turki T, Wei Z, and Wang JT (2017). Transfer learning approaches to improve drug sensitivity prediction in multiple myeloma patients. IEEE Access 5, 7381–7393.
- Verzelen N (2012). Minimax risks for sparse regressions: Ultra-high dimensional phenomenons. Electronic Journal of Statistics 6, 38–90.
- Wang S, Shi X, Wu M, and Ma S (2019). Horizontal and vertical integrative analysis methods for mental disorders omics data. Scientific Reports, 1–12.
- Weiss K, Khoshgoftaar TM, and Wang D (2016). A survey of transfer learning. Journal of Big Data 3, 9.
- Zhang C-H (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38(2), 894–942.
- Zhernakova A, Van Diemen CC, and Wijmenga C (2009). Detecting shared pathogenesis from the shared genetics of immune-related diseases. Nature Reviews Genetics 10(1), 43–55.
- Zou H (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418–1429.