Abstract
Learning to rank has become important in recent years due to its successful application in information retrieval, recommender systems, computational biology, and other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the cost of computing the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
1. Introduction
Learning to rank is an important research area in machine learning. It has attracted the interest of many researchers because of its growing application in areas such as information retrieval systems [1], recommender systems [2, 3], machine translation, and computational biology [4]. For example, in the document retrieval domain, a ranking model is trained on data from a set of queries. Each query contains a group of retrieved documents and their relevance levels labeled by humans. When a new query arrives, the trained model is used to rank the documents retrieved for that query.
Many types of machine learning algorithms have been proposed for the ranking problem. Among them, RankSVM [5], which extends the basic support vector machine (SVM) [6], is one of the most commonly used methods. The basic idea of RankSVM is to transform the ranking problem into a pairwise classification problem. The early implementation of RankSVM [7] was slow because the explicit pairwise transformation led to a large number of training samples. To accelerate training, [8] proposed a primal Newton method that solves the linear RankSVM problem without the need for explicit pairwise transformation, and [9] proposed a RankSVM based on the structured output learning framework.
As with SVM, the kernel trick can be used to generalize RankSVM from the linear to the nonlinear case [7, 9]. Kernel RankSVM can give higher accuracy than linear RankSVM for complex nonlinear ranking problems [10]. The nonlinear kernel maps the original features into a high-dimensional space in which the nonlinear problem can be ranked linearly. However, the training time of kernel RankSVM grows dramatically with the size of the training set: the computational complexity is at least quadratic in the number of training examples because of the calculation of the kernel matrix. Kernel approximation is an efficient way to address this problem. It avoids computing the kernel matrix by explicitly generating a vector representation of the data that approximates the kernel similarity between any two data points.
Kernel approximation methods can be classified into two categories: the Nyström method [11, 12] and random Fourier features [13, 14]. The Nyström method approximates the kernel matrix by a low-rank matrix. The random Fourier features method approximates a shift-invariant kernel based on the Fourier transform of a nonnegative measure [15]. In this paper, we use kernel approximation to address the lengthy training time of kernel RankSVM.
To the best of our knowledge, this is the first work to apply kernel approximation to the learning to rank problem. We use two types of approximation methods, namely, the Nyström method and random Fourier features, to map the features into a high-dimensional space. After the approximation mapping, a primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) function of the RankSVM model. Experimental results demonstrate that our proposed method achieves high performance with much faster training than kernel RankSVM. Compared to state-of-the-art ranking algorithms, our proposed method also obtains comparable or better performance. Matlab code for our algorithm is available online (https://github.com/KaenChan/rank-kernel-appr).
2. Background and Related Works
In this section, we present the background and related work on learning to rank algorithms and RankSVM.
2.1. Learning to Rank Algorithms
Learning to rank algorithms can be classified into three categories: pointwise approach, pairwise approach, and list-wise approach.
Pointwise: it transforms the ranking problem into regression or classification on single objects. Then existing regression or classification algorithms are directly applied to model the labels of single objects. This approach includes McRank [16] and OC SVM [17].
Pairwise: it transforms the ranking problem into regression or classification on object pairs. It can model the preferences within the object pairs. This approach includes RankSVM [5] and RankBoost [18].
List-wise: it takes ranking lists as instances in both learning and prediction and can optimize the list-wise loss function directly. This approach includes ListNet [19], AdaRank [20], BoltzRank [21], and SVM MAP [22].
In this paper, we focus on the pairwise ranking algorithm based on SVM.
2.2. Linear RankSVM
Linear RankSVM is a commonly used pairwise ranking algorithm [5]. For a web search problem with n queries and a set of documents for each query, features xi ∈ ℝd are extracted from the query-document pair (qi, doci), and the label yi ∈ ℤ is the relevance level of doci to the query qi. Thus, the training data is a set of label-query-instance tuples (yi, qi, xi). Let 𝒫 denote the set of preference pairs: if (i, j) ∈ 𝒫, then doci and docj belong to the same query (qi = qj) and doci is preferred over docj (yi > yj). The goal of linear RankSVM is to learn a ranking function
$$f(\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x} \tag{1}$$
such that ∀(i, j) ∈ 𝒫, f(xi) > f(xj) ⇔ w⊤xi > w⊤xj, where w ∈ ℝd.
RankSVM generalizes well because of its margin-maximization property. According to [27], the margin is defined as the closest distance between two data points when the data points are projected onto the ranking vector w:
$$\delta = \min_{(i,j)\in\mathcal{P}} \frac{\mathbf{w}^{\top}(\mathbf{x}_i - \mathbf{x}_j)}{\|\mathbf{w}\|}. \tag{2}$$
Maximizing the margin is desirable because data point pairs with small margins represent very uncertain ranking decisions. RankSVM is guaranteed to find a ranking vector w with the maximum margin [27]. Figure 1 illustrates margin maximization on four data points for linear RankSVM. The two linear ranking vectors w1 and w2 can both rank the four data points correctly, but w1 generalizes better than w2 because its margin d1 is larger than the margin d2 of w2.
Figure 1. Margin-maximization for linear RankSVM. Four data points have the preference x4 ≻ x3 ≻ x2 ≻ x1 and can be linearly ranked. d1 and d2 are the marginal distances for w1 and w2.
For L1-loss (Hinge-loss) linear RankSVM [5], the objective loss function is
$$\min_{\mathbf{w}}\ \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{(i,j)\in\mathcal{P}} \max\bigl(0,\, 1 - \mathbf{w}^{\top}(\mathbf{x}_i - \mathbf{x}_j)\bigr), \tag{3}$$
where C is the regularization parameter. Equation (3) can be solved by standard SVM classification on the pairwise difference vectors (xi − xj), but this approach is very slow because of the large size of 𝒫.
In [8], an efficient algorithm was proposed to solve the L2-loss (squared Hinge-loss) linear RankSVM problem
$$\min_{\mathbf{w}}\ \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{(i,j)\in\mathcal{P}} \max\bigl(0,\, 1 - \mathbf{w}^{\top}(\mathbf{x}_i - \mathbf{x}_j)\bigr)^{2}. \tag{4}$$
They used a p × n sparse matrix A to obtain the pairwise difference training samples (xi − xj) implicitly (p = |𝒫|). If (i, j) ∈ 𝒫, there exists a row index k such that Aki = 1, Akj = −1, and the remaining entries of that row are 0. Let X = [x1,…, xn]⊤. Equation (4) can then be written as
$$\min_{\mathbf{w}}\ \frac{1}{2}\mathbf{w}^{\top}\mathbf{w} + C\,(\mathbf{1} - AX\mathbf{w})^{\top} D\,(\mathbf{1} - AX\mathbf{w}), \tag{5}$$
where D is a p × p diagonal matrix with D(i,j)(i,j) = 1 if 1 − w⊤(xi − xj) > 0 and 0 otherwise. Equation (5) is then optimized by a primal truncated Newton method in 𝒪(nd + p) time.
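To make the pairwise construction concrete, the following is a minimal NumPy/SciPy sketch (illustrative names only, not the authors' released Matlab code) of how the sparse matrix A and the L2-loss objective in (5) could be assembled:

```python
import numpy as np
from scipy.sparse import csr_matrix

def pairwise_matrix(y, qid):
    """Build the sparse p x n matrix A of (5): row k has +1 at column i and -1 at
    column j for every preference pair (i, j) with qid[i] == qid[j] and y[i] > y[j]."""
    n = len(y)
    rows, cols, vals = [], [], []
    k = 0
    for i in range(n):
        for j in range(n):
            if qid[i] == qid[j] and y[i] > y[j]:
                rows += [k, k]
                cols += [i, j]
                vals += [1.0, -1.0]
                k += 1
    return csr_matrix((vals, (rows, cols)), shape=(k, n))

def l2_rank_loss(w, X, A, C):
    """L2-loss RankSVM objective (5): 0.5*w'w + C*(1 - AXw)' D (1 - AXw)."""
    slack = np.maximum(1.0 - A @ (X @ w), 0.0)   # D keeps only pairs with positive slack
    return 0.5 * w @ w + C * slack @ slack
```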
2.3. Kernel RankSVM
The key idea of the kernel method is that if the kernel function κ is positive definite, there exists a mapping ϕ into a reproducing kernel Hilbert space (RKHS) such that
$$\kappa(\mathbf{x}_i, \mathbf{x}_j) = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_j) \rangle, \tag{6}$$
where 〈·, ·〉 denotes the inner product. The advantage of the kernel method is that the mapping ϕ never has to be calculated explicitly.
For L1-loss RankSVM, the objective loss function with the kernel mapping ϕ has the form [7]
$$\min_{\mathbf{w}}\ \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{(i,j)\in\mathcal{P}} \max\bigl(0,\, 1 - \mathbf{w}^{\top}(\phi(\mathbf{x}_i) - \phi(\mathbf{x}_j))\bigr). \tag{7}$$
The primal problem of (7) can be transformed into the dual problem using Lagrange multipliers:
$$\max_{\boldsymbol{\alpha}}\ \sum_{(i,j)\in\mathcal{P}} \alpha_{ij} - \frac{1}{2}\boldsymbol{\alpha}^{\top} Q \boldsymbol{\alpha} \quad \text{subject to}\ 0 \le \alpha_{ij} \le C,\ \forall (i,j)\in\mathcal{P}, \tag{8}$$
where each Lagrange multiplier αij corresponds to the pair index (i, j) in 𝒫 and
$$Q_{(i,j),(u,v)} = \bigl(\phi(\mathbf{x}_i) - \phi(\mathbf{x}_j)\bigr)^{\top}\bigl(\phi(\mathbf{x}_u) - \phi(\mathbf{x}_v)\bigr) = \kappa(\mathbf{x}_i,\mathbf{x}_u) - \kappa(\mathbf{x}_i,\mathbf{x}_v) - \kappa(\mathbf{x}_j,\mathbf{x}_u) + \kappa(\mathbf{x}_j,\mathbf{x}_v). \tag{9}$$
Solving kernel RankSVM is a large quadratic programming problem. Instead of computing the matrix Q pair by pair, we can reduce the cost by using the pairwise matrix A from (5):
$$Q = AKA^{\top}, \qquad K = [\kappa(\mathbf{x}_i, \mathbf{x}_j)]_{n \times n}. \tag{10}$$
The ranking function of the kernel RankSVM has the form
$$f(\mathbf{x}) = \sum_{(i,j)\in\mathcal{P}} \alpha_{ij}\bigl(\kappa(\mathbf{x}_i, \mathbf{x}) - \kappa(\mathbf{x}_j, \mathbf{x})\bigr). \tag{11}$$
The computation of Q requires 𝒪(n²) kernel evaluations, which makes it difficult to scale kernel RankSVM to large data sets by solving (8).
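As a small illustration of (10) (assuming an RBF kernel and reusing the pairwise_matrix helper sketched in Section 2.2; the toy data below is purely hypothetical), Q can be formed from the n × n kernel matrix without enumerating pairs of pairs:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

X = np.random.rand(100, 46)            # toy data: 100 documents, 46 features
y = np.random.randint(0, 3, 100)       # toy relevance labels
qid = np.repeat(np.arange(10), 10)     # toy query ids, 10 documents per query
A = pairwise_matrix(y, qid).toarray()  # p x n pairwise matrix of (5)
K = rbf_kernel(X, gamma=2.0 ** -5)     # n x n kernel matrix: O(n^2) kernel evaluations
Q = A @ K @ A.T                        # p x p matrix Q of (10)
```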
Several works have been proposed to accelerate the training of kernel RankSVM, such as the 1-slack structural method [9], representer theorem reformulation [27], and pairwise problem reformulation [10]. However, these methods are still slow for large-scale ranking problems because their computational cost is at least quadratic in the number of training examples.
3. RankSVM with Kernel Approximation
3.1. A Unified Model
The drawback of kernel RankSVM is that it needs to store many kernel values κ(xi, xj) during optimization. Moreover, κ(xi, x) needs to be computed for new data x during prediction, possibly for many vectors xi. This problem can be solved by approximating the kernel mapping explicitly:
$$\kappa(\mathbf{x}_i, \mathbf{x}_j) \approx \tilde{\phi}(\mathbf{x}_i)^{\top}\tilde{\phi}(\mathbf{x}_j), \tag{12}$$
where φ̃ : ℝd → ℝm is the kernel approximation mapping. The original feature x can be mapped into the approximated Hilbert space by φ̃(x). The objective function of RankSVM with kernel approximation can be written as
$$\min_{\mathbf{w}}\ \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{(i,j)\in\mathcal{P}} \ell\bigl(\mathbf{w}^{\top}(\tilde{\phi}(\mathbf{x}_i) - \tilde{\phi}(\mathbf{x}_j))\bigr), \tag{13}$$
where ℓ is an SVM loss function, such as ℓ(t) = max(0, 1 − t) for L1-loss SVM and ℓ(t) = max(0, 1 − t)² for L2-loss SVM. Problem (13) can be solved using linear RankSVM after the approximation mapping. The kernel never needs to be calculated during training. Moreover, the weights w can be computed directly without storing any training samples. For new data x, the ranking function is
$$f(\mathbf{x}) = \mathbf{w}^{\top}\tilde{\phi}(\mathbf{x}). \tag{14}$$
Our proposed method consists of a mapping process and a ranking process.
Mapping process: kernel approximation is used to map the original data into a high-dimensional space. We use two kinds of kernel approximation methods, namely, the Nyström method and random Fourier features, which are discussed in Section 3.2.
Ranking process: linear RankSVM is used to train a ranking model. We use the L2-loss RankSVM because of its high accuracy and fast training speed. The optimization procedure is described in Section 3.3.
The Nyström method is data dependent, whereas the random Fourier features method is data independent [28]. The Nyström method can usually give a better approximation than random Fourier features, but it is slightly slower. Additionally, in the ranking process, the L2-loss RankSVM can be replaced with any other linear ranking algorithm, such as ListNet [19] or FRank [23].
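As a rough illustration of this two-stage structure (a sketch only, not the authors' released Matlab code), the mapping process could be realized with scikit-learn's kernel approximation transformers; the ranking process then trains any linear ranker, for example the L2-loss RankSVM of Section 3.3, on the embedded features. The function name embed and its defaults are our own choices.

```python
from sklearn.kernel_approximation import Nystroem, RBFSampler

def embed(X_train, X_test, method="nystroem", m=500, gamma=2.0 ** -5, seed=0):
    """Mapping process: approximate the RBF kernel with an m-dimensional embedding,
    either data dependent (Nystroem) or data independent (random Fourier features)."""
    if method == "nystroem":
        mapper = Nystroem(gamma=gamma, n_components=m, random_state=seed)
    else:
        mapper = RBFSampler(gamma=gamma, n_components=m, random_state=seed)
    Xt_train = mapper.fit_transform(X_train)   # fit the mapping on training data, then map it
    Xt_test = mapper.transform(X_test)         # apply the same mapping to unseen queries
    return Xt_train, Xt_test

# Ranking process: train any linear ranking model on Xt_train, for example the
# L2-loss RankSVM solved by the primal truncated Newton method of Section 3.3.
```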
3.2. Kernel Approximation
3.2.1. Nyström Method
The Nyström method obtains a low-rank approximation of the kernel matrix K = [κ(xi, xj)]n×n by uniformly sampling m ≪ n examples from X, denoted by x̂1,…, x̂m. Let C = [κ(xi, x̂j)]n×m and W = [κ(x̂i, x̂j)]m×m. The rows and columns of C and K can be rearranged as
$$K = \begin{bmatrix} W & K_{21}^{\top} \\ K_{21} & K_{22} \end{bmatrix}, \qquad C = \begin{bmatrix} W \\ K_{21} \end{bmatrix}, \tag{15}$$
where K21 ∈ ℝ(n−m)×m and K22 ∈ ℝ(n−m)×(n−m). Then the rank-k approximation matrix of K can be calculated as [11]
$$\tilde{K}_k = C W_k^{+} C^{\top} \approx K, \tag{16}$$
where Wk+ is the pseudo-inverse of Wk and Wk is the best rank-k approximation of W. Wk can be obtained from the singular value decomposition (SVD) of W, W = UΣU⊤, where U is an orthonormal matrix and Σ = diag(σ1, σ2,…, σm) is a diagonal matrix with σ1 ≥ σ2 ≥ ⋯ ≥ σm ≥ 0. The pseudo-inverse Wk+ is then
$$W_k^{+} = U_k \Sigma_k^{-1} U_k^{\top}, \tag{17}$$
where Uk is the first k columns of U and Σk = diag(σ1,…, σk). Thus, the nonlinear feature mapping of the Nyström method can be written as [28]
$$\tilde{\phi}(\mathbf{x}) = \Sigma_k^{-1/2} U_k^{\top} \bigl(\kappa(\mathbf{x}, \hat{\mathbf{x}}_1), \ldots, \kappa(\mathbf{x}, \hat{\mathbf{x}}_m)\bigr)^{\top}. \tag{18}$$
The Nyström method is described in Algorithm 1. The total time complexity of approximating n samples is 𝒪(nmk + m³), and the approximation error of the Nyström method is 𝒪(m^{−1/2}) [11].
Algorithm 1. Nyström method.
3.2.2. Random Fourier Features
Random Fourier features is an efficient feature transformation method for kernel matrix approximation that computes inner products of relatively low dimensional mappings.
When the kernel κ(x, y) is shift-invariant, continuous, and positive definite, the Fourier transform of the kernel can be written as
$$\kappa(\mathbf{x}, \mathbf{y}) = \kappa(\mathbf{x} - \mathbf{y}) = \int_{\mathbb{R}^d} p(\boldsymbol{\omega})\, e^{\,j\boldsymbol{\omega}^{\top}(\mathbf{x} - \mathbf{y})}\, d\boldsymbol{\omega}, \tag{19}$$
where p(ω) is a probability density function and ω ∈ ℝd. According to Bochner's theorem [15], the kernel can be approximated as
$$\kappa(\mathbf{x}, \mathbf{y}) = \mathbb{E}_{\boldsymbol{\omega}}\bigl[z_{\boldsymbol{\omega}}(\mathbf{x})\, z_{\boldsymbol{\omega}}(\mathbf{y})\bigr], \tag{20}$$
where ω is sampled from p(ω). Since p(ω) and κ(x, y) are real, we can take zω(x) = √2 cos(ω⊤x + b), where b is drawn uniformly from [0, 2π] [13]. The expectation in (20) can be approximated by the mean over m Fourier components as
$$\tilde{\phi}(\mathbf{x}) = \frac{1}{\sqrt{m}}\bigl(z_{\boldsymbol{\omega}_1}(\mathbf{x}), \ldots, z_{\boldsymbol{\omega}_m}(\mathbf{x})\bigr)^{\top} = \sqrt{\frac{2}{m}}\bigl(\cos(\boldsymbol{\omega}_1^{\top}\mathbf{x} + b_1), \ldots, \cos(\boldsymbol{\omega}_m^{\top}\mathbf{x} + b_m)\bigr)^{\top}, \tag{21}$$
where ωi ∈ ℝd is sampled from the distribution p(ω) and bi ∈ ℝ is uniformly sampled from [0, 2π]. The procedure is described in Algorithm 2. The total time complexity of approximating n samples is 𝒪(nmd), and the approximation error of random Fourier features is 𝒪(n^{−1/2} + m^{−1/2}) [14].
Algorithm 2. Random Fourier features.
3.3. Ranking Optimization
In this section, we solve the L2-loss (squared Hinge-loss) ranking problem of (13) after the kernel approximation mapping of the training data:
$$\tilde{X} = \bigl[\tilde{\phi}(\mathbf{x}_1), \ldots, \tilde{\phi}(\mathbf{x}_n)\bigr]^{\top}. \tag{22}$$
Similar to (5), the loss function can be rewritten as
$$L(\mathbf{w}) = \frac{1}{2}\mathbf{w}^{\top}\mathbf{w} + C\,(\mathbf{1} - A\tilde{X}\mathbf{w})^{\top} D\,(\mathbf{1} - A\tilde{X}\mathbf{w}), \tag{23}$$
where D is a p × p diagonal matrix with D(i,j)(i,j) = 1 if 1 − w⊤(φ̃(xi) − φ̃(xj)) > 0 and 0 otherwise. The gradient and the generalized Hessian matrix of (23) are
$$\mathbf{g} = \mathbf{w} + 2C\,\tilde{X}^{\top}A^{\top} D\,(A\tilde{X}\mathbf{w} - \mathbf{1}), \qquad H = I + 2C\,\tilde{X}^{\top}A^{\top} D A\tilde{X}, \tag{24}$$
where I is the identity matrix. With the truncated Newton method [8], the Hessian matrix does not need to be computed explicitly. The Newton step H⁻¹g can be computed approximately using linear conjugate gradient (CG). The main computation of the linear CG method is the Hessian-vector product Hs for some vector s:
$$H\mathbf{s} = \mathbf{s} + 2C\,\tilde{X}^{\top}A^{\top} D A \tilde{X}\mathbf{s}. \tag{25}$$
Assuming that the embedding space has m dimensions, the total complexity of this method is 𝒪(nm + p), where p = |𝒫|. The main steps of our proposed algorithm are described in Algorithm 3. We calculate the approximation embedding φ̃ using the Nyström method or random Fourier features in line (1). Then φ̃ is applied to all training samples in line (2). The linear RankSVM model with the primal truncated Newton method is applied in the embedding space in lines (3)–(11).
Algorithm 3. RankSVM with kernel approximation.
4. Experiments
4.1. Experimental Settings
We use three data sets from LETOR (http://research.microsoft.com/en-us/um/beijing/projects/letor), namely, TD2004, OHSUMED, and MQ2007, to validate our proposed ranking algorithm. The examples in these data sets are extracted from information retrieval collections, and the data sets are often used for evaluating new learning to rank algorithms. Table 1 lists the properties of the data sets. Mean average precision (MAP) [29] and normalized discounted cumulative gain (NDCG) [30] are chosen as the evaluation metrics for the performance of the ranking models.
Table 1.
Information on the LETOR data sets used. Q-D Pairs denotes Query-Document Pairs. Relevance labels are 0 (nonrelevant), 1 (possibly relevant), and 2 (relevant); TD2004 uses binary labels.
| Data set | Queries | Q-D Pairs | Features | Relevance |
|---|---|---|---|---|
| TD2004 | 75 | 74,146 | 64 | {0,1} |
| OHSUMED | 106 | 16,140 | 45 | {0,1, 2} |
| MQ2007 | 1692 | 69,623 | 46 | {0,1, 2} |
We compare our proposed method with linear and kernel RankSVM as follows:
RankSVM-Primal [8]: it solves the primal problem of linear L2-loss RankSVM, as discussed in Section 2.2 (http://olivier.chapelle.cc/primal/).
RankSVM-Struct [9]: it solves an equivalent 1-slack structural SVM problem with linear kernel (http://www.cs.cornell.edu/People/tj/svm_light/svm_rank.html).
RankSVM-TRON [10]: it solves the linear or kernel ranking SVM problem by trust region Newton method (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/).
RankNyström: our proposed RankSVM with the Nyström kernel approximation.
RankRandomFourier: our proposed RankSVM with the random Fourier features kernel approximation.
The hyperparameters of the algorithms are selected by grid search. The regularization parameter C of each algorithm is chosen from [2^−12, 2^−11,…, 2^6]. For kernel RankSVM and our approximation methods, the parameter γ of the RBF kernel is chosen from [2^−12, 2^−11,…, 2^2]. For the MQ2007 dataset, the number of samples for kernel approximation m is set to 2000, whereas m = 500 for the other datasets. All experiments are conducted on a high-performance server with a 2.0 GHz 16-core CPU and 64 GB of memory.
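For completeness, a minimal sketch of this grid search; fit_and_score is a hypothetical callable (not from the paper) that trains on the training fold with the given C and γ and returns a validation score such as MeanNDCG.

```python
import itertools

def grid_search(fit_and_score, log2_C=range(-12, 7), log2_gamma=range(-12, 3)):
    """Exhaustive search over C in {2^-12, ..., 2^6} and gamma in {2^-12, ..., 2^2}."""
    best_params, best_score = None, float("-inf")
    for lc, lg in itertools.product(log2_C, log2_gamma):
        score = fit_and_score(C=2.0 ** lc, gamma=2.0 ** lg)   # placeholder train/validate call
        if score > best_score:
            best_params, best_score = (2.0 ** lc, 2.0 ** lg), score
    return best_params, best_score
```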
4.2. Comparison of the Nyström Method and Random Fourier Features
Figure 2 shows the performance comparison of RankSVM with the Nyström method and with random Fourier features on the MQ2007 dataset. We take the linear RankSVM algorithm RankSVM-Primal as the baseline, plotted as a dotted line; the remaining two lines represent RankNyström and RankRandomFourier. For small m (the number of samples used for approximation), the kernel approximation methods perform worse than linear RankSVM, but as m increases, both kernel approximation methods outperform the linear RankSVM. We also observe that RankNyström gives better results than RankRandomFourier when m is small, and the two methods obtain similar results when m = 2000.
Figure 2. Performance comparison of RankSVM with the Nyström method and random Fourier features on the MQ2007 dataset. (a) NDCG@1; (b) NDCG@3; (c) MeanNDCG; (d) MAP.
4.3. Comparison with Linear and Kernel RankSVM
In this part, we compare our proposed kernel approximation ranking algorithms with other linear and kernel RankSVM algorithms. We take m = 2000 for the kernel approximation. Table 2 gives the results of different RankSVM algorithms on the first fold of the MQ2007 dataset. The linear RankSVM algorithms use less training time, but their MeanNDCG values are lower than those of the kernel RankSVM algorithms. Our kernel approximation methods obtain better performance than the kernel RankSVM-TRON with much faster training on this dataset: the training time of our kernel approximation methods is about ten seconds, whereas that of the kernel RankSVM-TRON is more than 13 hours. The result of random Fourier features is slightly better than that of the RankNyström method. Moreover, the L2-loss RankSVM obtains better performance than the L1-loss RankSVM on this dataset. The MeanNDCG of RankSVM-Primal (linear) is slightly higher than that of RankSVM-TRON (linear), and the kernel approximation methods obtain better MeanNDCG than RankSVM-TRON with the RBF kernel.
Table 2.
Results of different RankSVM algorithms on the first fold of MQ2007 dataset. We take m = 2000 for the kernel approximation method.
| Algorithm | Type | Loss | C | γ | MeanNDCG | Time (s) |
|---|---|---|---|---|---|---|
| RankSVM-TRON | linear | L1 | 2^−5 | — | 0.5265 | 1.9 |
| RankSVM-Struct | linear | L1 | 2^−1 | — | 0.5268 | 2.2 |
| RankSVM-Primal | linear | L2 | 2^−10 | — | 0.5270 | 1.2 |
| RankSVM-TRON | RBF | L1 | 2^−2 | 2^−5 | 0.5310 | 47463.5 |
| RankNyström | RBF | L2 | 2^−2 | 2^−5 | 0.5330 | 10.9 |
| RankRandomFourier | RBF | L2 | 2^−2 | 2^−5 | 0.5336 | 16.1 |
4.4. Comparison with State-of-the-Art
In this part, we compare our proposed algorithm with state-of-the-art ranking algorithms. Most of the results of the comparison algorithms come from the LETOR baselines; the remaining results come from the papers of the corresponding algorithms. The hyperparameters C and γ of our proposed kernel approximation RankSVM are selected by grid search as in Section 4.1.
Table 3 compares the testing NDCG and MAP results of different ranking algorithms on the TD2004 dataset. The number of samples for kernel approximation m is set to 500. We observe that the kernel approximation ranking methods achieve the best performance on 3 of the 6 metrics. The results of RankNyström and RankRandomFourier are similar.
Table 3.
Performance comparison on TD2004 data set.
| NDCG@1 | NDCG@3 | NDCG@5 | P@1 | P@3 | MAP | |
|---|---|---|---|---|---|---|
| AdaRank-MAP [20] | 0.4133 | 0.4017 | 0.3932 | 0.4133 | 0.3422 | 0.3308 |
| AdaRank-NDCG [20] | 0.3600 | 0.3838 | 0.3769 | 0.3600 | 0.3289 | 0.2986 |
| FRank [23] | 0.4400 | 0.4479 | 0.4362 | 0.4400 | 0.3867 | 0.3809 |
| ListNet [19] | 0.4400 | 0.4371 | 0.4209 | 0.4400 | 0.4000 | 0.3721 |
| RankBoost [18] | 0.4800 | 0.4640 | 0.4368 | 0.4800 | 0.4044 | 0.3835 |
| RankSVM-Struct [9] | 0.4400 | 0.4092 | 0.3935 | 0.4400 | 0.3511 | 0.3505 |
| RankSVM-Primal [8] | 0.4666 | 0.4468 | 0.4277 | 0.4666 | 0.4000 | 0.3793 |
| RankNyström | 0.4933 | 0.4348 | 0.4254 | 0.4933 | 0.3911 | 0.3899 |
| RankRandomFourier | 0.4933 | 0.4422 | 0.4265 | 0.4933 | 0.4000 | 0.3924 |
Table 4 provides the performance comparison on the OHSUMED dataset, with m set to 500. We observe that RankRandomFourier achieves the best performance on 3 of the 6 metrics, and RankNyström obtains the best results on 2 metrics.
Table 4.
Performance comparison on OHSUMED data set.
| NDCG@1 | NDCG@3 | NDCG@5 | P@1 | P@3 | MAP | |
|---|---|---|---|---|---|---|
| RankSVM-Struct [9] | 0.5515 | 0.4850 | 0.4729 | 0.6338 | 0.5898 | 0.4478 |
| ListNet [19] | 0.5326 | 0.4732 | 0.4432 | 0.6524 | 0.6016 | 0.4457 |
| AdaRank-MAP [20] | 0.5388 | 0.4682 | 0.4613 | 0.6338 | 0.5895 | 0.4487 |
| AdaRank-NDCG [20] | 0.5330 | 0.4790 | 0.4673 | 0.6719 | 0.5984 | 0.4498 |
| RankBoost [18] | 0.4632 | 0.4555 | 0.4494 | 0.5576 | 0.5609 | 0.4411 |
| RankRLS [24] | 0.5490 | 0.4770 | 0.4530 | 0.6440 | 0.5860 | 0.4470 |
| RankSVM-Primal [8] | 0.5645 | 0.5004 | 0.4782 | 0.6710 | 0.6112 | 0.4439 |
| RankNyström | 0.5730 | 0.4874 | 0.4780 | 0.6801 | 0.5890 | 0.4473 |
| RankRandomFourier | 0.5728 | 0.4965 | 0.4804 | 0.6801 | 0.5983 | 0.4472 |
Table 5 provides the comparison of results on the MQ2007 dataset, with m set to 2000. We observe that RankNyström obtains the best scores on 3 metrics, as does BL-MART. However, BL-MART trains 10,000 LambdaMART models and creates a bagged model by randomly selecting a subset of these models, whereas our proposed RankNyström algorithm trains only one model.
Table 5.
Performance comparison on MQ2007 data set.
| NDCG@1 | NDCG@3 | MeanNDCG | P@1 | P@3 | MAP | |
|---|---|---|---|---|---|---|
| RankSVM-Struct [9] | 0.4096 | 0.4063 | 0.4966 | 0.4746 | 0.4315 | 0.4645 |
| ListNet [19] | 0.4002 | 0.4091 | 0.4988 | 0.4640 | 0.4334 | 0.4652 |
| AdaRank-MAP [20] | 0.3821 | 0.3984 | 0.4891 | 0.4392 | 0.4230 | 0.4577 |
| AdaRank-NDCG [20] | 0.3876 | 0.4044 | 0.4914 | 0.4475 | 0.4305 | 0.4602 |
| RankBoost [18] | 0.4134 | 0.4072 | 0.5003 | 0.4823 | 0.4348 | 0.4662 |
| LambdaMART [25] | 0.4147 | 0.4119 | 0.5011 | — | — | 0.4660 |
| BL-MART [25] | 0.4200 | 0.4224 | 0.5093 | — | — | 0.4730 |
| CRR [26] | — | — | 0.5000 | — | — | 0.4660 |
| RankSVM-Primal [8] | 0.4109 | 0.4063 | 0.4973 | 0.4747 | 0.4317 | 0.4655 |
| RankNyström | 0.4242 | 0.4138 | 0.5036 | 0.4888 | 0.4394 | 0.4695 |
| RankRandomFourier | 0.4224 | 0.4136 | 0.5036 | 0.4871 | 0.4386 | 0.4698 |
5. Conclusions
In this paper, we propose a fast RankSVM algorithm with kernel approximation to address the lengthy training time of kernel RankSVM. First, we propose a unified model for kernel approximation RankSVM, in which approximation is used to avoid computing the kernel matrix by explicitly approximating the kernel similarity between any two data points. Two types of methods, namely, the Nyström method and random Fourier features, are explored to approximate the kernel matrix, and the primal truncated Newton method is used to optimize the L2-loss (squared Hinge-loss) objective function of the ranking model. Experimental results indicate that our proposed method requires much less computational cost than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms. In the future, we plan to investigate more efficient kernel approximation and ranking models for large-scale ranking problems.
Acknowledgments
This work was mainly supported by the Natural Science Foundation of China (61125201, 61303070, and U1435219).
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
References
- 1. Liu T.-Y. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval. 2009;3(3):225–231. doi: 10.1561/1500000016.
- 2. Lv Y., Moon T., Kolari P., Zheng Z., Wang X., Chang Y. Learning to model relatedness for news recommendation. Proceedings of the 20th International Conference on World Wide Web (WWW '11); April 2011; ACM; pp. 57–66.
- 3. Yu Y., Wang H., Yin G., Wang T. Reviewer recommendation for pull-requests in GitHub: what can we learn from code review and bug assignment? Information and Software Technology. 2016;74:204–218. doi: 10.1016/j.infsof.2016.01.004.
- 4. Duh K., Kirchhoff K. Learning to rank with partially-labeled data. Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; July 2008; Singapore. ACM; pp. 251–258.
- 5. Joachims T. Optimizing search engines using clickthrough data. Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '02); 2002; Edmonton, Canada. ACM; pp. 133–142.
- 6. Boser B. E., Guyon I. M., Vapnik V. N. A training algorithm for optimal margin classifiers. Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory; July 1992; Pittsburgh, Pa, USA. pp. 144–152.
- 7. Joachims T. Optimizing search engines using clickthrough data. Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2002; Edmonton, Canada. pp. 133–142.
- 8. Chapelle O., Keerthi S. S. Efficient algorithms for ranking with SVMs. Information Retrieval. 2010;13(3):201–215. doi: 10.1007/s10791-009-9109-9.
- 9. Joachims T. Training linear SVMs in linear time. Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2006; ACM; pp. 217–226.
- 10. Kuo T.-M., Lee C.-P., Lin C.-J. Large-scale kernel RankSVM. Proceedings of the SIAM International Conference on Data Mining (SDM '14); 2014; pp. 812–820.
- 11. Drineas P., Mahoney M. W. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research. 2005;6:2153–2175.
- 12. Williams C. K. I., Seeger M. Using the Nyström method to speed up kernel machines. Advances in Neural Information Processing Systems 13. Cambridge, Mass, USA: MIT Press; 2001. pp. 682–688.
- 13. Rahimi A., Recht B. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems 20. Newry, UK: Curran Associates; 2007. pp. 1177–1184.
- 14. Rahimi A., Recht B. Weighted sums of random kitchen sinks: replacing minimization with randomization in learning. Advances in Neural Information Processing Systems; 2008. pp. 1313–1320.
- 15. Rudin W. Fourier Analysis on Groups. New York, NY, USA: Springer; 2003.
- 16. Li P., Wu Q., Burges C. J. McRank: learning to rank using multiple classification and gradient boosting. Advances in Neural Information Processing Systems. MIT Press; 2007. pp. 897–904.
- 17. Shashua A., Levin A. Ranking with large margin principle: two approaches. Advances in Neural Information Processing Systems; 2002. pp. 937–944.
- 18. Freund Y., Iyer R., Schapire R. E., Singer Y. An efficient boosting algorithm for combining preferences. The Journal of Machine Learning Research. 2003;4(6):933–969.
- 19. Cao Z., Qin T., Liu T.-Y., Tsai M.-F., Li H. Learning to rank: from pairwise approach to listwise approach. Proceedings of the 24th International Conference on Machine Learning (ICML '07); June 2007; ACM; pp. 129–136.
- 20. Xu J., Li H. AdaRank: a boosting algorithm for information retrieval. Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '07); July 2007; ACM; pp. 391–398.
- 21. Volkovs M. N., Zemel R. S. BoltzRank: learning to maximize expected ranking gain. Proceedings of the 26th International Conference on Machine Learning (ICML '09); June 2009; ACM; pp. 1089–1096.
- 22. Yue Y., Finley T., Radlinski F., Joachims T. A support vector method for optimizing average precision. Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '07); July 2007; ACM; pp. 271–278.
- 23. Tsai M. F., Liu T. Y., Qin T., Chen H. H., Ma W. Y. FRank: a ranking method with fidelity loss. Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '07); July 2007; Amsterdam, the Netherlands. pp. 383–390.
- 24. Pahikkala T., Tsivtsivadze E., Airola A., Boberg J., Salakoski T. Learning to rank with pairwise regularized least-squares. Proceedings of the SIGIR Workshop on Learning to Rank for Information Retrieval; 2007; pp. 27–33.
- 25. Ganjisaffar Y., Caruana R., Lopes C. V. Bagging gradient-boosted trees for high precision, low variance ranking models. Proceedings of the 34th ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR '11); July 2011; Beijing, China. ACM; pp. 85–94.
- 26. Sculley D. Combined regression and ranking. Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2010; Washington, DC, USA. ACM; pp. 979–988.
- 27. Yu H., Kim S. SVM tutorial—classification, regression and ranking. Handbook of Natural Computing. Berlin, Germany: Springer; 2012. pp. 479–506.
- 28. Yang T., Li Y. F., Mahdavi M., Jin R., Zhou Z. H. Nyström method vs random Fourier features: a theoretical and empirical comparison. Advances in Neural Information Processing Systems. MIT Press; 2012. pp. 485–493.
- 29. Baeza-Yates R., Ribeiro-Neto B. Modern Information Retrieval. New York, NY, USA: ACM Press; 1999.
- 30. Järvelin K., Kekäläinen J. IR evaluation methods for retrieving highly relevant documents. Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; July 2000; Athens, Greece. ACM; pp. 41–48.