Netw Model Anal Health Inform Bioinform. 2016 Jul 15;5(1):26. doi: 10.1007/s13721-016-0134-5

Implementation and comparison of kernel-based learning methods to predict metabolic networks

Abiel Roche-Lima
PMCID: PMC4947111  PMID: 27471658

Abstract

Metabolic pathways can be conceptualized as the biological equivalent of a data pipeline: in living cells, series of chemical reactions are carried out stepwise by proteins called enzymes. However, many pathways remain incompletely characterized, and in some of them, not all enzyme components have been identified. Kernel methods are useful in many difficult problem areas, such as document classification and bioinformatics. In particular, kernel methods have recently been used to predict biological networks, such as protein–protein interaction networks and metabolic networks. In this paper, we implement and compare different methods and types of data to predict metabolic networks. The methods are Penalized Kernel Matrix Regression (PKMR) and pairwise Support Vector Machine (pSVM). We develop several experiments using these methods with sequence, non-sequence, and combined data. With both methods, we obtain better accuracy when sequence data are used. When the methods are compared on the same type of data, the pSVM approach shows better accuracy. The best results are obtained with pSVM using all kernels combined.

Keywords: Network prediction, Metabolic pathways, Machine learning, Kernel methods

Introduction

Biochemical pathways are series of chemical reactions in the cell in which enzymes catalyse the conversion of substrates into other compounds. For example, in the metabolic pathway that involves glycolysis, glucose is broken down into smaller products, such as carbon dioxide and water (Luo et al. 2007). Finding the enzymes involved in the reactions, and their interactions, is still a very challenging problem. The development of pathway databases, such as KEGG (Kanehisa et al. 2008) and EcoCyc (Latendresse et al. 2012), has increased the current knowledge about metabolic networks. Using these databases, methods based on gene annotations are used to predict metabolic networks (Latendresse et al. 2012; Karp et al. 2011). However, current genome annotation pipelines may assign identities incorrectly to some genes and may fail to detect other genes altogether. Thus, metabolic network prediction algorithms built on current genome annotation pipelines, such as the Pathway Tools described by Karp et al. (2011), may predict inaccurate interactions.

To infer metabolic networks, supervised learning approaches have been developed in the framework of kernel methods by Kotera et al. (2013), such as Support Vector Machines (SVMs). While SVMs are a classical paradigm in machine learning, they cannot be applied directly to biological network inference problems, since the goal is to predict pairs of genes rather than individual genes (Ben-Hur and Noble 2005). Thus, the pairwise Support Vector Machine (pSVM) approach is used instead (Oyama and Manning 2004). Vert et al. (2007) and Kashima et al. (2010) apply pSVM methods to predict metabolic networks, but combine only non-sequence data. In addition, Roche-Lima et al. (2014) use sequence kernels (i.e., PRK, Pairwise Rational Kernels) combined with SVM methods and obtain good accuracy values and execution times, but do not compare against non-sequence kernels.

There are other supervised learning algorithms, such as Kernel Canonical Correlation Analysis (KCCA) (Yamanishi et al. 2004) and Penalized Kernel Matrix Regression (PKMR) (Yamanishi and Vert 2007), which are computationally more efficient but give less precise predictions. In addition, these algorithms have been reported in the literature only with non-sequence kernels (Yamanishi 2010; Kotera et al. 2012).

In our research, we address these problems by implementing methods to predict metabolic networks from raw data directly related to sequence information (e.g., nucleotide and protein sequences). We hypothesize that sequence kernels, created from raw sequence data, are more precise than non-sequence kernels for predicting metabolic networks. We then implement two supervised learning methods (i.e., PKMR and pSVM) and, for the first time, compare these two methods combined with sequence and non-sequence kernels.

Materials and methods

Metabolic networks

Metabolic networks were represented as a graph, where the vertices (nodes) were enzymes and the edges (branches) were enzyme–enzyme relations (proteins catalyzing two consecutive reactions in a pathway). Traditionally, metabolic pathway representations have considered enzymes as vertices and metabolites as edges. To avoid confusion, our graphs represented interactions between pairs of enzymes as discrete data points, similar to Yamanishi (2010). An example of the graph representation can be seen in Roche-Lima et al. (2014, Fig. 2).

Data

We used data on the yeast Saccharomyces cerevisiae (Sikorski and Hieter 1989) taken from the KEGG pathway databases (Kanehisa et al. 2008). This species was selected because it is a well-studied organism with several established models for predicting biological networks. Moreover, other kernel methods have been described and tested using data from this species (Ben-Hur and Noble 2005; Kashima et al. 2010; Yamanishi 2010). As a training set, we used 5149 interactions among 755 known genes. A graph was built from these interactions (the training set) as a representation of the metabolic networks of Saccharomyces cerevisiae. This graph and other related data sets were then converted into kernels.

Kernels allowed us to work in a unified mathematical framework across different types of data. A kernel was a measure of similarity that satisfied the additional condition of being a dot product in some feature space (see Scholkopf and Smola 2002 for details). Data were represented as a positive definite kernel $K$, i.e., a symmetric function $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ satisfying $\sum_{i,j=1}^{n} a_i a_j K(x_i, x_j) \geq 0$ for any $x_1, \ldots, x_n \in \mathcal{X}$ and any $(a_1, a_2, \ldots, a_n) \in \mathbb{R}^n$, where $\mathcal{X}$ was the set of entities.
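
As a minimal illustration of this definition (a sketch with toy feature vectors, not the paper's data), the following Python snippet builds a linear kernel matrix and verifies the positive definiteness condition by checking that the matrix has no negative eigenvalues:

```python
import numpy as np

# Toy feature vectors for three entities (rows = enzymes).
X = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.0, 0.0],
              [2.0, 1.0, 1.0]])

# Linear kernel: K[i, j] = <x_i, x_j>, i.e., a dot product in feature space.
K = X @ X.T

# The positive definiteness condition is equivalent to the symmetric
# kernel matrix having no negative eigenvalues (up to numerical tolerance).
eigenvalues = np.linalg.eigvalsh(K)
print("kernel matrix:\n", K)
print("positive semi-definite:", bool(np.all(eigenvalues >= -1e-10)))
```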

Non-sequence data

In our context, non-sequence kernels handled data that were binary or numerical. We used three types of non-sequence data: gene expression, gene localization, and phylogenetic data. All these data have been used as kernels in other research (Vert et al. 2007; Kashima et al. 2010; Yamanishi 2010). Gene expression data were obtained by Yamanishi (2010) from the results of 157 microarray experiments (Spellman et al. 1998; Eisen et al. 1998). Each gene was associated with a 157-element numerical vector representing the results of the experiments. A Gaussian Radial Basis Function (RBF) kernel was used for these data, with the same parameters that Yamanishi (2010) used in their experiments. We denoted the final kernel as k_exp.

The gene localization data were represented as a 23-element binary vector for each gene, following Yamanishi (2010). A total of 23 intracellular localizations were defined (e.g., mitochondrion, Golgi, nucleus, and others). The value was 1 if the gene was present in the intracellular localization and 0 otherwise. As in Yamanishi (2010), we applied the linear kernel to these data with the same parameters. We denoted this kernel as k_loc.

The phylogenetic profile data were obtained from 145 organisms and described sets of orthologous genes. These organisms were selected according to the criteria defined in Yamanishi (2010). Each gene was associated with a 145-element binary vector, whose value was 1 if the gene was present in the corresponding organism and 0 otherwise. A Gaussian RBF kernel was applied to these data with the same parameters used by Yamanishi (2010). This final kernel was denoted as k_phy.
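
The following sketch illustrates, with randomly generated stand-in vectors of the stated dimensions, how the three non-sequence kernels can be computed. The RBF width sigma is a placeholder, since the actual parameter values follow Yamanishi (2010) and are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 5

# Toy stand-ins for the real data: a 157-element expression vector,
# a 23-element binary localization vector, and a 145-element binary
# phylogenetic profile per gene.
expr = rng.normal(size=(n_genes, 157))
loc = rng.integers(0, 2, size=(n_genes, 23)).astype(float)
phy = rng.integers(0, 2, size=(n_genes, 145)).astype(float)

def rbf_kernel(X, sigma):
    """Gaussian RBF kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-dist2 / (2.0 * sigma ** 2))

# sigma = 5.0 is a placeholder, not the published parameter value.
k_exp = rbf_kernel(expr, sigma=5.0)
k_loc = loc @ loc.T            # linear kernel on binary vectors
k_phy = rbf_kernel(phy, sigma=5.0)
```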

Sequence data

Sequence kernels defined similarities over finite sequences of symbols of different lengths. The sequence data were converted into sequence kernels. In our research, we used three sequence kernels, Pfam, Motif, and Spectrum, as defined by Ben-Hur and Noble (2005). We chose these sequence kernels to be able to compare our results with previously published work, such as Yu et al. (2010), Ben-Hur and Noble (2005), Roche-Lima et al. (2014), and Allauzen et al. (2008).

The Pfam kernel (Gomez et al. 2003) was computed from a set of Hidden Markov Models (HMMs), where each gene coding for an enzyme was compared with every HMM in the Pfam database. E value statistics were obtained as features for the 13,672 domain HMMs in Pfam version 26.0 (Punta et al. 2012). Thus, each protein was represented by a vector of 13,672 log E values, and the kernel was computed from these vectors (see Allauzen et al. (2008) for more details). We denoted this kernel as k_pfam.

The Motif kernel (Ben-Hur and Brutlag 2003) was also used. It was obtained by counting how many times each discrete sequence motif matched each protein sequence. The eMotif database (Huang and Brutlag 2001) was used to extract the discrete sequence motifs. A vector of E values was associated with each protein (gene coding for the protein), and the kernel was computed as dot products of those vectors (see Ben-Hur and Noble 2005 for more details). This kernel was denoted k_motif.

Finally, the Spectrum kernel defined by Leslie et al. (2004) was also considered. This kernel represented sequence similarities by counting how many times an n-gram (k-mer) appeared in each sequence of a pair. Each gene had an associated feature vector of n-gram counts (we used n = 3). As with the kernels above, the kernel was computed as dot products of the associated feature vectors. We denoted this kernel as k_mer.
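
A minimal sketch of the Spectrum kernel idea follows, run here on toy protein fragments (the real inputs are full enzyme sequences):

```python
from collections import Counter

def ngram_counts(seq, n=3):
    """Count every length-n substring (n-gram / k-mer) in a sequence."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def spectrum_kernel(s1, s2, n=3):
    """Dot product of the two n-gram count vectors (Leslie et al. 2004)."""
    c1, c2 = ngram_counts(s1, n), ngram_counts(s2, n)
    return sum(c1[g] * c2[g] for g in c1.keys() & c2.keys())

# Toy protein fragments; shared 3-grams contribute to the kernel value.
print(spectrum_kernel("MKTAYIAKQR", "MKTAHIAKQR"))
```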

Combined data

We also computed linear combinations of the kernels described above, representing heterogeneous data combinations, and used these different types of data to predict metabolic networks. Let $K_1, \ldots, K_N$ be the kernels representing the data sets, where $K_n$ corresponds to the $n$-th data set. Yamanishi (2010) noted the advantages of considering the linear combination as a weighted sum of kernels, i.e., $\sum_{n=1}^{N} w_n K_n$, where $w_n$ is a weight (real coefficient) associated with the kernel $K_n$. The coefficients should reflect the importance of data set $n$ for the prediction method. In our research, we took the weights $w_n$ to be the accuracy values (ROC scores; Yamanishi et al. 2005) obtained during the inference process using the individual kernel $K_n$. In future studies, the weight values may be computed in other ways.
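
A small sketch of the weighted combination (toy matrices; in the paper the weights are the AUC scores obtained with each individual kernel, e.g., the Table 2 values):

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Weighted sum of kernel matrices, sum_n w_n * K_n. A combination
    with non-negative weights is itself a valid kernel."""
    return sum(w * K for w, K in zip(weights, kernels))

# Toy 3x3 kernel matrices; the weights here are illustrative AUC scores.
K1, K2 = np.eye(3), np.ones((3, 3))
K_combined = combine_kernels([K1, K2], weights=[0.660, 0.775])
print(K_combined)
```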

Methods

We used kernel-based supervised network inference methods to predict biological networks within kernel frameworks. First, part of the network (the training set, with known interactions) was used during the learning process to obtain the model. Second, new interactions were predicted using the model. In machine learning, supervised classification is a classical paradigm; however, it cannot be applied directly to the network inference problem, because our goal was to predict relations between pairs of nodes, not individual nodes (Yamanishi 2010). Therefore, we first define the pairwise kernel and then the PKMR and pSVM methods.

Pairwise kernels

The kernels described in the sections above provided similarities between individual enzymes. In our experiments, we used a different type of kernel, called a pairwise kernel (Pahikkala et al. 2012; Brunner et al. 2012), that provided similarity measures for pairs of entities. The general pairwise kernel was represented as $K: (\mathcal{X} \times \mathcal{X}) \times (\mathcal{X} \times \mathcal{X}) \to \mathbb{R}$, where $\mathcal{X}$ was the set of vertices (enzymes) and $\mathbb{R}$ the set of real values. In this research, we used the Pairwise Tensor Product Kernel, or Kronecker Kernel (Basilico and Hofmann 2004; Oyama and Manning 2004; Ben-Hur and Noble 2005), computed as $K((x_1, x_2), (x_1', x_2')) = k(x_1, x_1')\,k(x_2, x_2') + k(x_1, x_2')\,k(x_2, x_1')$, where $k$ is any of the simple kernels described in the previous sections and $x_1, x_2, x_1', x_2'$ are enzymes.
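
A direct transcription of this formula into Python, with a toy linear base kernel standing in for the simple kernels above:

```python
import numpy as np

def kronecker_pairwise_kernel(k, x1, x2, x1p, x2p):
    """Symmetrized tensor product (Kronecker) pairwise kernel:
    K((x1, x2), (x1', x2')) = k(x1, x1') k(x2, x2') + k(x1, x2') k(x2, x1')."""
    return k(x1, x1p) * k(x2, x2p) + k(x1, x2p) * k(x2, x1p)

# Toy base kernel between enzyme feature vectors (linear kernel).
def lin(a, b):
    return float(np.dot(a, b))

e1, e2, e3, e4 = (np.array(v) for v in ([1.0, 0.0], [0.0, 1.0],
                                        [1.0, 1.0], [2.0, 0.0]))
print(kronecker_pairwise_kernel(lin, e1, e2, e3, e4))
```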

Penalized kernel matrix regression (PKMR)

Kernel Matrix Regression methods were based on the supervised graph inference framework, predicting metabolic networks via metric learning. The problem can be formalized as follows:

given an undirected graph $\Gamma = (V, E)$, with a set of vertices $V = \{v_1, v_2, \ldots, v_n\}$ and a set of edges $E \subseteq V \times V$,

then, for an additional set of vertices $V' = \{v_1', v_2', \ldots, v_m'\}$, the goal was to infer the set of new edges $E' \subseteq (V \times V') \cup (V' \times V) \cup (V' \times V')$ involving the additional vertices in $V'$.

Yamanishi et al. (2005) described methods to solve this problem, such as KCCA, PKMR, Kernel Matrix Completion, and Expectation-Maximization algorithms. Of these, the method with the best accuracy was PKMR (Yamanishi and Vert 2007), a penalized variant of the Kernel Matrix Regression method.

In our research, we implemented PKMR in R (R Core Team 2013). To make the data comparable with the pSVM method, the chemical compatibility network was not taken into consideration (Yamanishi 2010), since we aimed to compare data directly related to the genes. Future implementations may include this information.
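
As a rough sketch of the kernel matrix regression idea (a simplified penalized least-squares form is assumed here purely for illustration; see Yamanishi and Vert 2007 for the exact PKMR estimator, and note that the paper's implementation was in R, not Python):

```python
import numpy as np

def kmr_predict(K_train, G_train, K_new_train, lam=0.1):
    """Simplified kernel matrix regression sketch: regress the output
    kernel G (derived from the known network, e.g., a diffusion kernel
    on the training graph) on the input kernel K, then score new vertex
    pairs. Assumed closed form (a penalized least-squares estimate):
        G_hat = K_new (K + lam*I)^(-1) G (K + lam*I)^(-1) K_new^T
    This is NOT the exact published estimator, only an illustration."""
    n = K_train.shape[0]
    B = np.linalg.inv(K_train + lam * np.eye(n))
    return K_new_train @ B @ G_train @ B @ K_new_train.T

K_tr = np.eye(4) + 0.1          # toy input kernel on 4 training enzymes
G_tr = np.eye(4)                # toy output kernel from the known graph
K_new = np.full((2, 4), 0.3)    # input-kernel values: new vs. training
# Entries are connectivity scores; pairs above a threshold become edges.
print(kmr_predict(K_tr, G_tr, K_new))
```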

Pairwise support vector machine (pSVM)

pSVM methods classified whether the elements of a pair $(x, y)$ belonged to the same category or to different ones. Thus, while SVM methods classified single entities, pSVM methods classified pairs of entities. pSVM was defined by Brunner et al. (2012) as follows:

given a training data set $\{((x_i, y_i), d_i)\}_{i=1}^{n}$, with binary classification values $d_i$ (i.e., the pair $(x_i, y_i)$ is classified as $+1$ or as $-1$) and a mapping function $\Phi$,

then, a pSVM method found an optimal hyperplane, $w^T \Phi(x_i, y_i) + b = 0$, separating the points into the two categories.

We implemented programs applying pSVM to predict metabolic networks from our data sets, using the LIBSVM (Chang and Lin 2011) and Python Machine Learning (PyML) (Ben-Hur et al. 2008) libraries.
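
The following sketch shows the overall workflow with a precomputed pairwise-kernel matrix. It uses scikit-learn's precomputed-kernel interface as a stand-in, since the paper's actual code relied on LIBSVM and PyML, and all data here are synthetic:

```python
import numpy as np
from sklearn.svm import SVC

# Each row/column indexes an enzyme *pair*; entries would be Kronecker
# pairwise-kernel values. Here a random linear kernel stands in.
rng = np.random.default_rng(1)
F = rng.normal(size=(20, 5))             # stand-in pair features
K_pairs = F @ F.T                        # a valid (linear) kernel matrix
labels = rng.integers(0, 2, 20) * 2 - 1  # +1 interacting pair, -1 otherwise

clf = SVC(kernel="precomputed")
clf.fit(K_pairs, labels)

# Scoring new pairs only needs their kernel values against training pairs.
print(clf.decision_function(K_pairs[:3]))
```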

Experiments

We developed six groups of experiments using different data and methods (see Table 1 for more details).

Table 1.

Experiments are grouped by method (Experiments I, II, III: PKMR; Experiments IV, V, VI: pSVM) and by type of data (I, IV: non-sequence; II, V: sequence; III, VI: combined)

Experiment  Method  Type of kernel
I           PKMR    Non-sequence (described in "Non-sequence data")
II          PKMR    Sequence (described in "Sequence data")
III         PKMR    Combined sequence and non-sequence (described in "Combined data")
IV          pSVM    Non-sequence (described in "Non-sequence data")
V           pSVM    Sequence (described in "Sequence data")
VI          pSVM    Combined sequence and non-sequence (described in "Combined data")

For evaluation, we used the area under the ROC curve (AUC score) (Gribskov and Robinson 1996) to measure accuracy. It was defined as a function of the rates of true positives (predicted enzyme pairs present in the data set) and false positives (predicted enzyme pairs absent from the data set). A stratified cross-validation procedure with 10 folds (tenfold cross-validation) was used (Kohavi et al. 1995). We also collected execution times (in seconds). All experiments were run on a computer with an Intel Core i7 microprocessor and 8 GB of RAM.
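
A sketch of this evaluation protocol (stratified tenfold cross-validation with AUC scoring) on synthetic stand-in data, not the paper's pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Synthetic stand-ins: a precomputed pairwise kernel and labels marking
# whether each enzyme pair interacts (1) or not (0).
rng = np.random.default_rng(2)
F = rng.normal(size=(100, 8))
K = F @ F.T
y = rng.integers(0, 2, 100)

aucs = []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train, test in cv.split(K, y):
    # Fit on the train-vs-train kernel block, score on test-vs-train.
    clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
    scores = clf.decision_function(K[np.ix_(test, train)])
    aucs.append(roc_auc_score(y[test], scores))

print(f"mean AUC over 10 folds: {np.mean(aucs):.3f}")  # ~0.5 on random labels
```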

In addition, we computed 95% confidence intervals (CIs) for the average AUC scores, using the distribution-independent technique proposed by Cortes and Mohri (2005). As they describe, the variance depends on the numbers of positive and negative examples in the training set and on the number of errors during the classification process. In our case, the training set consisted of 2575 positive and 2574 negative interactions, out of 5149 total interactions. The errors in the classification process ranged between 750 and 1851.

Results and discussion

Comparing data

When we compare sequence and non-sequence data within the same supervised learning method, better accuracy values are obtained with the sequence kernels (see Table 2, Experiments II–PKMR–Sequence and V–pSVM–Sequence). For example, with the PKMR method, the accuracy improves from AUC = 0.503 (the lowest value in Experiment I–PKMR–Non-Sequence) to AUC = 0.821 (the highest value in Experiment II–PKMR–Sequence). This supports our hypothesis that sequence kernels yield better accuracy, since errors from the genome annotation process are bypassed. However, the execution times with sequence kernels are more than double those with non-sequence kernels (i.e., Time = 240 s, the lowest time in Experiment I–PKMR–Non-Sequence, versus Time = 530 s, the highest time in Experiment II–PKMR–Sequence), because computing the sequence kernels (i.e., k_pfam, k_motif, and k_mer) consumes more computational resources.

Table 2.

Results collected during the experiments

Predictor kernel (grouped by experiment)              AUC score  Time (s)  Confidence interval

I–PKMR–Non-Sequence
  k_exp                                               0.660      300       [0.655, 0.665]
  k_loc                                               0.503      240       [0.499, 0.507]
  k_phy                                               0.775      240       [0.771, 0.779]
  k_exp + k_loc + k_phy                               0.755      350       [0.752, 0.759]
  w1 k_exp + w2 k_loc + w3 k_phy                      0.799      420       [0.791, 0.807]
II–PKMR–Sequence
  k_pfam                                              0.797      450       [0.793, 0.801]
  k_motif                                             0.782      430       [0.778, 0.786]
  k_mer                                               0.725      420       [0.720, 0.731]
  k_pfam + k_motif + k_mer                            0.817      480       [0.811, 0.823]
  w4 k_pfam + w5 k_motif + w6 k_mer                   0.821      530       [0.818, 0.824]
III–PKMR–Combined (sequence and non-sequence)
  k_phy + k_pfam                                      0.812      470       [0.809, 0.816]
  k_exp + k_loc + k_phy + k_pfam + k_motif + k_mer    0.831      610       [0.828, 0.834]
  w1 k_exp + w2 k_loc + w3 k_phy + w4 k_pfam + w5 k_motif + w6 k_mer
                                                      0.840      720       [0.831, 0.849]
IV–pSVM–Non-Sequence
  k_exp                                               0.791      9020      [0.786, 0.796]
  k_loc                                               0.696      7800      [0.692, 0.700]
  k_phy                                               0.802      7980      [0.797, 0.807]
  k_exp + k_loc + k_phy                               0.818      10,100    [0.812, 0.824]
  w1 k_exp + w2 k_loc + w3 k_phy                      0.877      10,121    [0.871, 0.883]
V–pSVM–Sequence
  k_pfam                                              0.887      12,060    [0.879, 0.895]
  k_motif                                             0.868      12,000    [0.859, 0.877]
  k_mer                                               0.840      11,760    [0.836, 0.844]
  k_pfam + k_motif + k_mer                            0.898      12,220    [0.891, 0.905]
  w4 k_pfam + w5 k_motif + w6 k_mer                   0.910      12,800    [0.901, 0.919]
VI–pSVM–Combined (sequence and non-sequence)
  k_phy + k_pfam                                      0.890      12,100    [0.882, 0.898]
  k_exp + k_loc + k_phy + k_pfam + k_motif + k_mer    0.939      13,420    [0.935, 0.944]
  w1 k_exp + w2 k_loc + w3 k_phy + w4 k_pfam + w5 k_motif + w6 k_mer
                                                      0.940      14,010    [0.934, 0.946]

Columns show the AUC score (area under the ROC curve, as accuracy), the execution time in seconds, and the 95% confidence intervals

The best results within each supervised learning method are obtained with the kernels representing the combined heterogeneous data (Table 2, Experiments III–PKMR–Combined and VI–pSVM–Combined). For the PKMR method, accuracy improves from AUC = 0.797 (the best accuracy using a simple kernel, k_pfam, in Experiment II) to AUC = 0.840 (the weighted kernel w1 k_exp + w2 k_loc + w3 k_phy + w4 k_pfam + w5 k_motif + w6 k_mer in Experiment III). Since the Pfam kernel k_pfam gives the best accuracy among the simple kernels (AUC = 0.797 in Experiment II), we also tested it combined with other simple kernels. The best such results are obtained by combining the Pfam (k_pfam) and phylogenetic (k_phy) kernels (see the accuracy values for the k_phy + k_pfam kernel in Table 2, Experiments III–PKMR–Combined and VI–pSVM–Combined). This result coincides with Allauzen et al. (2008), who noted "the importance of the phyletic retention feature as a possible reason for the superior performance of the combined kernel compared with Pfam alone".

Comparing methods

As can be seen in Table 2, the pSVM methods (Experiments IV–pSVM–Non-Sequence, V–pSVM–Sequence, and VI–pSVM–Combined) outperform the PKMR method (Experiments I–PKMR–Non-Sequence, II–PKMR–Sequence, and III–PKMR–Combined) in accuracy. For example, the AUC score for k_exp is 0.660 with PKMR (Experiment I–PKMR–Non-Sequence) compared to 0.791 with pSVM (Experiment IV–pSVM–Non-Sequence). However, the execution times are considerably higher for the pSVMs (compare the Time values in Table 2 for Experiments I–PKMR–Non-Sequence and II–PKMR–Sequence with those for Experiments IV–pSVM–Non-Sequence and V–pSVM–Sequence). Processing pSVM requires more computational resources than the PKMR methods, but yields better accuracy values. In all cases, the confidence intervals lie above the performance of a random classifier (AUC = 0.5).

Figure 1 summarizes the results for both methods (PKMR and pSVM) using only sequence kernels. Although pSVM is the most time-consuming method, it provides an important improvement in accuracy (the peaks are reached by combining the sequence kernels, k_pfam + k_motif + k_mer, with the pSVM method). Yamanishi (2010) mentions the expected high processing times of SVM methods but never tested them to evaluate how much the accuracy could be improved. Roche-Lima et al. (2014) use a different representation of the sequence kernel and decrease the execution time of the sequence kernel computation; however, they still use existing SVM methods. In addition, the accuracy values obtained in this research are better than those reported by Roche-Lima et al. (2014). Thus, we consider that the pSVM implementation can be optimized to obtain better processing times while maintaining these good accuracy values. Likewise, sequence kernel representations can be optimized and combined with pSVM methods to improve both performance and accuracy.

Fig. 1. Comparison of the methods (PKMR, Penalized Kernel Matrix Regression, and pSVM, pairwise Support Vector Machine) on the sequence data kernels, with respect to accuracy and execution times

Conclusion

We developed, for the first time, experiments using sequence data with the PKMR and pSVM methods to predict metabolic networks. We showed that the best accuracy values were obtained using sequence kernels. Other tools that predict metabolic networks are based on gene annotations (Latendresse et al. 2012; Karp et al. 2011); because we used raw sequence data (represented as sequence kernels), the annotation steps and the errors associated with them were bypassed.

We also showed that pSVM methods were more precise than PKMR methods. The best accuracy values were obtained when pSVM methods were combined with sequence kernels. However, pSVM methods were very expensive in terms of computational resources, such as execution time, and required even more resources when using sequence kernels.

In future work, the pSVM method could be optimized using other implementations, such as the Dual Coordinate Descent algorithm combined with rational kernels for manipulating sequence data (Allauzen et al. 2011). A parallel implementation could also be used to improve performance (Tyree et al. 2014).

Acknowledgments

I would like to acknowledge Dr. Michael Domaratzki and Dr. Brian Fristensky from the University of Manitoba, Canada, for their guidance and suggestions during the development of this research. This work was partially funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) at the University of Manitoba, Canada, and by RCMI Grant G12 MD007600 (National Institute on Minority Health and Health Disparities) from the National Institutes of Health, at the University of Puerto Rico.

Compliance with ethical standards

Conflict of interest

The author declares that there is no conflict of interest regarding the publication of this manuscript.

References

  1. Allauzen C, Mohri M, Talwalkar A (2008) Sequence kernels for predicting protein essentiality. In: Proceedings of the 25th international conference on machine learning, ICML ’08, pp 9–16. ACM, New York, NY, USA
  2. Allauzen C, Cortes C, Mohri M. A dual coordinate descent algorithm for SVMs combined with rational kernels. Int J Found Comput Sci. 2011;22:1761–1779. doi: 10.1142/S0129054111009021. [DOI] [Google Scholar]
  3. Basilico J, Hofmann T (2004) Unifying collaborative and content based filtering. In: Brodley C (ed) Proceedings of the twenty-first international conference on machine learning ACM p 9
  4. Ben-Hur A, Brutlag D. Remote homology detection: a motif based approach. Bioinformatics. 2003;19:i26–i33. doi: 10.1093/bioinformatics/btg1002. [DOI] [PubMed] [Google Scholar]
  5. Ben-Hur A, Noble WS. Kernel methods for predicting protein–protein interactions. Bioinformatics. 2005;21:i38–i46. doi: 10.1093/bioinformatics/bti1016. [DOI] [PubMed] [Google Scholar]
  6. Ben-Hur A, Ong CS, Sonnenburg S, Scholkopf B, Ratsch G. Support vector machines and kernels for computational biology. PLoS Comput Biol. 2008;4:e1000173. doi: 10.1371/journal.pcbi.1000173. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brunner C, Fischer A, Luig K, Thies T. Pairwise support vector machines and their application to large scale problems. J Mach Learn Res. 2012;13:2279–2292. [Google Scholar]
  8. Chang CC, Lin CJ. LibSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST) 2011;2:27. [Google Scholar]
  9. Cortes C, Mohri M. Confidence intervals for the area under the ROC curve. Advances in neural information processing systems. UK: Curran Associates; 2005. p. 305. [Google Scholar]
  10. Eisen MB, Spellman PT, Brown PO, Botstein D. Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci. 1998;95:14863–14868. doi: 10.1073/pnas.95.25.14863. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Gomez SM, Noble WS, Rzhetsky A. Learning to predict protein–protein interactions from protein sequences. Bioinformatics. 2003;19:1875–1881. doi: 10.1093/bioinformatics/btg352. [DOI] [PubMed] [Google Scholar]
  12. Gribskov M, Robinson NL. Use of receiver operating characteristic (ROC) analysis to evaluate sequence matching. Comput Chem. 1996;20:25–33. doi: 10.1016/S0097-8485(96)80004-0. [DOI] [PubMed] [Google Scholar]
  13. Huang JY, Brutlag DL. The emotif database. Nucleic Acids Res. 2001;29:202–204. doi: 10.1093/nar/29.1.202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Kanehisa M, Araki M, Goto S, Hattori M, Hirakawa M, Itoh M, Katayama T, Kawashima S, Okuda S, Tokimatsu T, et al. KEGG for linking genomes to life and the environment. Nucleic Acids Res. 2008;36:D480–D484. doi: 10.1093/nar/gkm882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Karp PD, Latendresse M, Caspi R. The pathway tools pathway prediction algorithm. Stand Genom Sci. 2011;5:424–429. doi: 10.4056/sigs.1794338. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Kashima H, Oyama S, Yamanishi Y, Tsuda K. Cartesian kernel: an efficient alternative to the pairwise kernel. IEICE Trans Inf Syst. 2010;93:2672–2679. doi: 10.1587/transinf.E93.D.2672. [DOI] [Google Scholar]
  17. Kohavi R, et al. A study of cross-validation and bootstrap for accuracy estimation and model selection. IJCAI. 1995;14:1137–1145. [Google Scholar]
  18. Kotera M, Yamanishi Y, Moriya Y, Kanehisa M, Goto S. GENIES: gene network inference engine based on supervised analysis. Nucleic Acids Res. 2012;40(W1):162–167. doi: 10.1093/nar/gks459. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Kotera M, Tabei Y, Yamanishi Y, Tokimatsu T, Goto S. Supervised reconstruction of metabolic pathways from metabolome-scale compound sets. Bioinformatics. 2013;29:i135–i144. doi: 10.1093/bioinformatics/btt244. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Latendresse M, Paley S, Karp PD (2012) Browsing metabolic and regulatory networks with BioCyc. In: van Helden J (ed) Bacterial molecular networks. Springer, New York, pp 197–216 [DOI] [PMC free article] [PubMed]
  21. Leslie CS, Eskin E, Cohen A, Weston J, Noble WS. Mismatch string kernels for discriminative protein classification. Bioinformatics. 2004;20:467–476. doi: 10.1093/bioinformatics/btg431. [DOI] [PubMed] [Google Scholar]
  22. Luo B, Groenke K, Takors R, Wandrey C, Oldiges M. Simultaneous determination of multiple intracellular metabolites in glycolysis, pentose phosphate pathway and tricarboxylic acid cycle by liquid chromatography–mass spectrometry. J Chromatogr A. 2007;1147(2):153–164. doi: 10.1016/j.chroma.2007.02.034. [DOI] [PubMed] [Google Scholar]
  23. Oyama S, Manning CD (2004) Using feature conjunctions across examples for learning pairwise classifiers. In: Boulicaut J-F (eds) European conference on machine learning. Springer, Berlin, Heidelberg, pp 322–333
  24. Pahikkala T, Airola A, Stock M, De Baets B, Waegeman W. Efficient regularized least-squares algorithms for conditional ranking on relational data. Machine Learning. 2012;93:321–356. doi: 10.1007/s10994-013-5354-7. [DOI] [Google Scholar]
  25. Punta M, Coggill PC, Eberhardt RY, Mistry J, Tate J, Boursnell C, Pang N, Forslund K, Ceric G, Clements J, et al. The Pfam protein families database. Nucleic Acids Res. 2012;40:D290–D301. doi: 10.1093/nar/gkr1065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. R Core Team . R: a language and Environment for Statistical Computing. Austria: R Foundation for Statistical Computing Vienna; 2013. [Google Scholar]
  27. Roche-Lima A, Domaratzki M, Fristensky B. Metabolic network prediction through pairwise rational kernels. BMC Bioinformat. 2014;15:318. doi: 10.1186/1471-2105-15-318. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Scholkopf B, Smola AJ. Learning with kernels: support vector machines, regularization, optimization and beyond. Cambridge: MIT Press; 2002. [Google Scholar]
  29. Sikorski RS, Hieter P. A system of shuttle vectors and yeast host strains designed for efficient manipulation of DNA in Saccharomyces cerevisiae. Genetics. 1989;122:19–27. doi: 10.1093/genetics/122.1.19. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Spellman PT, Sherlock G, Zhang MQ, Iyer VR, Anders K, Eisen MB, Brown PO, Botstein D, Futcher B. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol Biol Cell. 1998;9(3273):97. doi: 10.1091/mbc.9.12.3273. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Tyree S, Gardner JR, Weinberger KQ, Agrawal K, Tran J (2014) Parallel support vector machines in practice. arXiv preprint arXiv:1404.1066
  32. Vert J-P, Qiu J, Noble W. A new pairwise kernel for biological network inference with support vector machines. BMC Bioinformat. 2007;8:S8. doi: 10.1186/1471-2105-8-S10-S8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Yamanishi Y. Supervised inference of metabolic networks from the integration of genomic data and chemical information. Elem Comput Syst Biol. 2010;8:189. doi: 10.1002/9780470556757.ch8. [DOI] [Google Scholar]
  34. Yamanishi Y, Vert JP (2007) Kernel matrix regression. arXiv preprint q-bio/0702054
  35. Yamanishi Y, Vert J, Kanehisa M. Protein network inference from multiple genomic data: a supervised approach. Bioinformatics. 2004;20:i363–i370. doi: 10.1093/bioinformatics/bth910. [DOI] [PubMed] [Google Scholar]
  36. Yamanishi Y, Vert J, Kanehisa M. Supervised enzyme network inference from the integration of genomic data and chemical information. Bioinformatics. 2005;21:i468–i477. doi: 10.1093/bioinformatics/bti1012. [DOI] [PubMed] [Google Scholar]
  37. Yu J, Guo M, Needham CJ, Huang Y, Cai L, Westhead DR. Simple sequence-based kernels do not predict protein–protein interactions. Bioinformatics. 2010;26:2610–2614. doi: 10.1093/bioinformatics/btq483. [DOI] [PubMed] [Google Scholar]
