MethodsX. 2022 Jan 15;9:101622. doi: 10.1016/j.mex.2022.101622

Deep learning program to predict protein functions based on sequence information

Chang Woo Ko a,b, June Huh c, Jong-Wan Park a,b,
PMCID: PMC8790617  PMID: 35111575

Highlights

  • A new deep learning program to predict protein functions in silico.

  • Requires nothing more than protein sequence information.

  • A sequence segmentation step improves the efficiency of prediction.

  • Predicts the clinical impact of mutations or polymorphisms.

Keywords: Deep learning, Sequence segmentation, Protein functions, Point mutation

Abstract

Deep learning technologies have been adopted to predict the functions of newly identified proteins in silico. However, most current models are unsuitable for poorly characterized proteins because they require diverse information about the target proteins. We designed a binary classification deep learning program that requires only sequence information, named ‘FUTUSA’ (function teller using sequence alone). It applies sequence segmentation during sequence feature extraction by a convolutional neural network to learn regional sequence patterns and their relationships. This segmentation process improved predictive performance by 49% compared with the full-length process. Compared with a baseline method, our approach achieved higher performance in predicting oxidoreductase activity. FUTUSA also performed remarkably well in predicting acetyltransferase and demethylase activities. Next, we tested whether FUTUSA can predict the functional consequences of point mutations. After being trained on monooxygenase activity, FUTUSA successfully predicted the impact of point mutations on phenylalanine hydroxylase, the enzyme responsible for the inherited metabolic disease phenylketonuria (PKU). This deep learning program can be used as a first-step tool for characterizing newly identified or poorly studied proteins.

  • We propose a new deep learning program that predicts protein functions in silico and requires nothing more than protein sequence information.

  • Sequence segmentation improves the efficiency of prediction.

  • The method makes it possible to predict the clinical impact of mutations or polymorphisms.

Graphical abstract



Specifications table

Subject Area: Biochemistry, Genetics and Molecular Biology
More specific subject area: Protein Function Prediction
Method name: FUTUSA (function teller using sequence alone)
Name and reference of original method: None; FUTUSA is an original method
Resource availability: https://github.com/snuhl-crain/FUTUSA

Overview

Advances in sequencing technologies have helped us discover a huge number of protein variants generated by alternative mRNA splicing, incomplete translation, single amino acid polymorphisms, or gene fusions. These variations can expand protein functions, including protein-protein interactions, ligand binding, subcellular localization, or completely novel activities. Evolutionarily, such sequence variation may have developed to overcome the limitation imposed by finite gene numbers.

We designed a deep learning program for protein function prediction that uses only the amino acid sequence and named it ‘FUTUSA’ (function teller using sequence alone). Compared with a baseline method, FUTUSA achieved better performance in protein function prediction. In predicting rare GO terms in particular, FUTUSA improved classification performance. We also successfully predicted the contribution of each amino acid to protein function. Therefore, FUTUSA can detect functional motifs in newly identified proteins and may also help predict the critical impact of clinically identified mutations on disease progression.

Methods

Data source and preprocessing

To minimize human cognitive biases in the machine training steps, we used only GO annotation data and protein sequence data. We downloaded the UniProtKB/Swiss-Prot database (version 2020_02) (https://www.uniprot.org/downloads) [1]. The datasets included 17,965 human proteins and 6,195 yeast proteins. The hierarchical information of GO terms and the GO annotation data were obtained from the Gene Ontology Consortium (http://geneontology.org) [[2], [3]]. We used a GO OBO file (released Sep 19, 2018) containing 47,343 GO terms; the performance comparison was done with a previous GO file (released Jan 1, 2016) containing 43,937 terms. The downloaded human and yeast GO annotation data files (released Feb 2, 2020) contained 495,361 annotations for human and 120,936 for yeast. In this research, we used all evidence codes, including experimental, phylogenetically inferred, and computational analyses. To compare our model with other prediction models, we used only the EXP, IDA, IPI, IMP, IGI, IEP, TAS, and IC codes, which are considered experimental evidence codes according to CAFA3 [4].

Using these datasets, we narrowed down the training target functions and constructed target-specific datasets. For instance, if we picked a GO term such as “oxidoreductase activity”, we labeled all proteins annotated with that GO term and its ‘is_a’ descendants, such as “sulfur reductase activity”. We used protein segments composed of 16, 32, 64, or 128 amino acids for machine training. This approach eliminated the zero-padding step otherwise needed to fix the protein sequence length and reduced the file sizes of the training data. We padded (segmentation size - 1) zeros at the N-terminus and (segmentation size) zeros at the C-terminus of each sequence to give the same weight to all amino acids except the N-terminal methionine. We did not omit ‘ambiguous’ amino acids and considered all amino acid codes in the sequence data. To reduce training time, however, we limited the maximum protein length according to each training purpose.
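The padding and segmentation step described above can be sketched as follows. The integer encoding, the `AA_TO_INT` table, and the window stride of 1 are our assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

# Sketch of the described preprocessing: integer-encode the sequence,
# pad (segment_size - 1) zeros at the N-terminus and segment_size zeros
# at the C-terminus, then slide a window of segment_size with stride 1.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWYBJOUXZ"  # standard plus ambiguous codes
AA_TO_INT = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 = padding

def segment_sequence(seq, segment_size=64):
    """Encode a protein sequence and split it into overlapping segments."""
    encoded = [AA_TO_INT.get(aa, 0) for aa in seq]
    padded = [0] * (segment_size - 1) + encoded + [0] * segment_size
    return np.array([padded[i:i + segment_size]
                     for i in range(len(padded) - segment_size + 1)],
                    dtype=np.int32)

segments = segment_sequence("MKTAYIAKQR", segment_size=16)  # 10 residues
```

With this padding scheme every residue appears in the same number of windows, which is what gives each amino acid equal weight in training.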

Deep learning architecture

We built the deep learning neural network models using the TensorFlow 2.0 framework and implemented them with the Keras deep learning library [5]. We used a cloud-based environment, Google Colaboratory (https://colab.research.google.com), and trained the neural network models on GPUs offered by Google Cloud [6]. The models are organized into four sections: embedding layers, feature extraction, dense layers, and a scoring step. The embedding layers first convert the preprocessed sequence data into a numerical vector space that can be fed to the neural network. One-hot encoding, one of the most common encoding methods, represents each amino acid as a binary vector. This approach is straightforward and preserves the original amino acid information, but it can make the model excessively complex and does not capture the physicochemical properties of amino acids. Recently, several research groups have used machine-learning-based encoding methods instead of manually defined ones to solve these problems [7]. In this study, we used a 1 × 1 CNN to generate amino acid embedding vectors. Next, we extracted and learned spatial features using a CNN. The dense layers weight the previously generated feature map without regard to topology, improving the recognition of distant features and of their combinations for classification. Lastly, the final binary classification output of the dense layers is taken as the predictive score of the input segment, and the total score for the input protein sequence is computed from the segment scores. We used a one-dimensional convolutional neural network (Conv1D) to extract protein sequence-derived features. In Conv1D, a 1D array-like kernel slides along one dimension and identifies patterns in the sequence information. Conv1D has been widely used in sequence-based protein function prediction techniques [8], [9], [10], [11]. FUTUSA was built to be more flexible at the feature extraction steps.

We added the 1 × 1 CNN after the one-hot encoding so that the model also learns how to represent each amino acid [12]. Using a 1D convolutional neural network with a max pooling layer, it extracted features without reducing their size. The model used variously sized convolution kernels: 2 and its powers (a geometric progression with common ratio 2) smaller than the segmentation size, plus the prime number immediately preceding each power. The sequence-derived feature vectors were batch-normalized and activated by the rectified linear unit (ReLU) function [13,14]. Through a concatenation layer and a flatten layer, the feature map was fed to fully connected layers. All intermediate hidden layers were batch-normalized and used dropout regularization (p = 0.2) [15]. We activated the intermediate hidden layers with the ReLU function and the classification layer with the sigmoid function. The characteristics of the models are shown in Table S1.
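One reading of the kernel-size rule above (our interpretation, not the authors' code) is: 2 and its powers below the segmentation size, plus the prime immediately preceding each power above 2. For a segmentation size of 64 this would give kernels 2, 3, 4, 7, 8, 13, 16, 31, 32:

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small kernel sizes."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def previous_prime(n):
    """Largest prime strictly below n."""
    return next(m for m in range(n - 1, 1, -1) if is_prime(m))

def kernel_sizes(segment_size):
    """Powers of 2 below the segment size, plus the prime just before each
    power above 2 (our reading of the rule described in the text)."""
    sizes = set()
    k = 2
    while k < segment_size:
        sizes.add(k)
        if k > 2:
            sizes.add(previous_prime(k))
        k *= 2
    return sorted(sizes)
```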

Datasets and model training

For evaluation purposes, we randomly extracted 5% of the total protein data before training. We then validated the model during training using five-fold cross validation: 20% of the remaining 95% of the protein data was randomly allocated as a validation dataset and the other 80% as a training dataset. For a case study, we picked a specific target protein and set it aside as evaluation data. In our models, the deep learning system was trained with the Adam optimizer at a learning rate of 0.001 [16]. We used weighted binary cross entropy as the loss function to address the imbalanced-dataset issue in rare protein function prediction [17]. The weighted binary cross entropy loss L_wBCE can be written as

L_{wBCE}(y, \hat{y}) = -\left[ (1-w)\, y \log(\hat{y}) + w\, (1-y) \log(1-\hat{y}) \right] \qquad (1)

where w is the ratio of the number of labeled proteins to the total number of trained proteins, y is the true label, and ŷ is the predicted value. This loss function is designed to give more weight to rarely labeled functions. The mini-batch size was set to 128. To avoid overfitting, we trained the deep learning models for 10 to 100 epochs (but no longer than 12 h) and adopted an early stopping strategy. The characteristics of the datasets used are shown in Table S2.
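Eq. (1) can be sketched in plain numpy as follows; the averaging over the mini-batch and the clipping constant are our illustrative assumptions:

```python
import numpy as np

def weighted_bce(y_true, y_pred, w, eps=1e-7):
    """Weighted binary cross entropy of Eq. (1): w is the fraction of
    positively labeled proteins, so the rare positive class receives the
    larger weight (1 - w)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean((1 - w) * y_true * np.log(y_pred)
                          + w * (1 - y_true) * np.log(1 - y_pred)))
```

With a small w, an uncertain prediction on a rare positive example costs more than the same uncertainty on a negative example, which is the intended rebalancing.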

Comparison methods

To compare the predictive performance of our models with others, we adopted the CAFA baseline method, BLAST, which predicts protein function based on sequence identity. We made minor changes to this model to predict a specific protein function, because the CAFA challenge originally aims to explore the general functions of proteins. We built a database from the training dataset and found sequences similar to the queried target proteins using the BLASTP software [18]. The BLAST results contained the target training proteins after filtering at an E-value of 1e-5.

Performance evaluation

We evaluated the predictive performance using average precision (AP), F1-score (F1), area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPR), and the Matthews correlation coefficient (MCC) [[19], [20], [21]]. The MCC, a widely used summary of the confusion matrix, at threshold t (MCC_t) can be computed directly from the confusion matrix using the following formula:

MCC_t = \frac{(TP_t \cdot TN_t) - (FP_t \cdot FN_t)}{\sqrt{(TP_t+FP_t)(TP_t+FN_t)(TN_t+FP_t)(TN_t+FN_t)}} \qquad (2)

where TP_t, TN_t, FP_t, and FN_t are the numbers of true positives, true negatives, false positives, and false negatives at threshold t, respectively.

The F1-score is the harmonic mean of precision and recall. The precision Pr_t and recall Rc_t at threshold t can be written as

Pr_t = \frac{TP_t}{TP_t + FP_t} \qquad (3)
Rc_t = \frac{TP_t}{TP_t + FN_t} \qquad (4)

Hence, the F1-score F_{1,t} can be calculated as follows:

F_{1,t} = \frac{2}{Pr_t^{-1} + Rc_t^{-1}} = \frac{2 \cdot Pr_t \cdot Rc_t}{Pr_t + Rc_t} = \frac{2 \cdot TP_t}{2 \cdot TP_t + FP_t + FN_t} \qquad (5)

In this study, we report the maximum evaluation value over all thresholds, computed with a step size of 0.01, unless otherwise noted. The F1-score and MCC are visualized with the MCC-F1 curve to cover the full range of possible thresholds [22]. AP, AUROC, and AUPR are independent of the threshold. The receiver operating characteristic (ROC) curve plots the true positive rate (recall) against the false positive rate, and the precision-recall curve plots precision against recall. AP is another way of summarizing the precision-recall curve. The true positive rate TPR_t and false positive rate FPR_t at threshold t can be written as

TPR_t = \frac{TP_t}{TP_t + FN_t} \qquad (6)
FPR_t = \frac{FP_t}{TN_t + FP_t} \qquad (7)

Hence, AUROC, AUPR, and AP can be calculated by the following formulas:

AUROC = \int_0^1 TPR_t \, d(FPR_t) \qquad (8)
AUPR = \int_0^1 Pr_t \, d(Rc_t) \qquad (9)
AP = \sum_n (Rc_n - Rc_{n-1}) \cdot Pr_n \qquad (10)

where Pr_n and Rc_n are the precision and recall at the nth threshold.
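Eqs. (2)-(5) and the threshold sweep can be sketched in plain numpy. This is a simplified illustration; the authors' evaluation code may differ:

```python
import numpy as np

def confusion_at(y_true, scores, t):
    """TP, FP, FN, TN at threshold t (predict positive when score >= t)."""
    pred = scores >= t
    pos = y_true == 1
    return (int(np.sum(pred & pos)), int(np.sum(pred & ~pos)),
            int(np.sum(~pred & pos)), int(np.sum(~pred & ~pos)))

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient, Eq. (2)."""
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def f1(tp, fp, fn):
    """F1-score, Eq. (5)."""
    return 0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn)

def max_metrics(y_true, scores, step=0.01):
    """Maximum MCC and F1 over all thresholds, stepped by 0.01 as in the text."""
    best_mcc, best_f1 = -1.0, 0.0
    for t in np.arange(0.0, 1.0 + step, step):
        tp, fp, fn, tn = confusion_at(y_true, scores, t)
        best_mcc = max(best_mcc, mcc(tp, fp, fn, tn))
        best_f1 = max(best_f1, f1(tp, fp, fn))
    return best_mcc, best_f1
```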

Validation and discussion

Sequence segments-based training

First, we tested whether the sequence segmentation method improves the predictive performance of protein function prediction. We compared the segmentation model with a full-length model that uses a zero-padding layer to equalize the lengths of the protein sequences; the fully connected layers were unaltered. We trained FUTUSA to predict oxidoreductase activity (GO:0016491) and its descendants, comprising 2371 terms. We used two different methods to determine the score of each protein: one averaged the predictive scores of all segments of the protein, and the other averaged only the highest-scored segments in the top 10%, to reduce errors arising from protein length and non-functional residues. Because the performance of each method depends on context, we selected the better scoring method after each model training was completed. In all five metrics (AP, F1, AUROC, AUPR, and MCC), sequence segmentation improved classification performance regardless of segmentation size and CNN architecture. Fig. 1 displays the MCC-F1, ROC, and P-R curves for the various segmentation sizes. All sequence segmentations performed better than the full-length process, and the 64 amino acid segmentation was the best setting. The overall results are summarized in Table 1. These results show that machine learning with sequence segments significantly improves protein feature recognition.
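The two scoring rules described above can be sketched as follows; the handling of the top-10% cutoff (ceiling, at least one segment) is an illustrative assumption:

```python
import numpy as np

def protein_score(segment_scores, top_fraction=None):
    """Aggregate per-segment predictive scores into one protein score.

    top_fraction=None : average over all segments (first scoring method).
    top_fraction=0.1  : average over the highest-scored 10% of segments
                        (second method), reducing the influence of protein
                        length and non-functional residues.
    """
    s = np.asarray(segment_scores, dtype=float)
    if top_fraction is None:
        return float(s.mean())
    k = max(1, int(np.ceil(top_fraction * s.size)))  # at least one segment
    return float(np.sort(s)[-k:].mean())
```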

Fig. 1. The MCC-F1 (a), ROC (b), and P-R (c) curves for all tested sequence segmentation sizes (FL: full-length model, blue; 16: segmentation size 16 model, green; 32: segmentation size 32 model, orange; 64: segmentation size 64 model, red; 128: segmentation size 128 model, black).

Table 1.

The overall evaluation results for all tested sequence segmentation sizes.

Model AP MCC F1 AUPR AUROC
FUTUSA_FL 0.3089 0.3863 0.3421 0.3058 0.7525
FUTUSA_16 0.4604 0.4764 0.4533 0.4576 0.8754
FUTUSA_32 0.4661 0.4413 0.4494 0.4631 0.8872
FUTUSA_64 0.5158 0.5186 0.5319 0.5129 0.8835
FUTUSA_128 0.4612 0.5406 0.5135 0.4592 0.7671

Predictive performance comparison

To verify our approach, we compared our model with the baseline method BLAST. We selected oxidoreductase activity (GO:0016491, 721 annotated proteins) as a representative large dataset (>100 proteins), and acetyltransferase activity (GO:0016407, 81 annotated proteins) and demethylase activity (GO:0032451, 28 annotated proteins) as small datasets. We trained FUTUSA with 64 amino acid segmentation. Table 2 shows the predictive performance of the competing models on the three datasets. For oxidoreductase, FUTUSA obtained AP = 0.4319, MCC = 0.4508, and F1 = 0.4528, all higher than BLAST. For the small datasets, acetyltransferase and demethylase, FUTUSA showed higher AUPR values (0.3166 and 0.3297, respectively) than the BLAST model. Consequently, we propose FUTUSA as a powerful protein function prediction program applicable to both large and small datasets.

Table 2.

The performance comparison of the competing models on the oxidoreductase activity (GO:0016491), acetyltransferase activity (GO:0016407) and demethylase activity (GO:0032451) datasets.

Programs AP MCC F1 AUPR AUROC
Oxidoreductase BLAST 0.1509 0.3386 0.3014 - -
FUTUSA 0.4319 0.4508 0.4528 0.4272 0.8136
Acetyltransferase BLAST 0.0649 0.1818 0.2374 - -
FUTUSA 0.3212 0.4444 0.5331 0.3166 0.7587
Demethylase BLAST 0.1521 0.3529 0.3826 - -
FUTUSA 0.3486 0.5000 0.5145 0.3297 0.6906

The areas under the ROC and P-R curves are not computed for the BLAST model because it is a binary predictor.

Prediction of the functional impacts of mutations in phenylalanine hydroxylase

To explore a new application of FUTUSA, we investigated whether it could predict the functional consequences of single amino acid variations. To evaluate the contribution of each amino acid to functional properties, we computed the average segment scores across phenylalanine hydroxylase (PAH), the enzyme responsible for the inherited metabolic disease phenylketonuria (PKU). PAH, which belongs to the monooxygenase family, is a homotetrameric enzyme composed of an N-terminal regulatory domain, a central catalytic domain, and a C-terminal tetramerization domain [23]. Clinically, it is very important to predict the functional consequences of PAH mutations in patient specimens; such information may help guide the care of PKU patients. To predict PAH activity, we trained FUTUSA with a new dataset comprising 114 monooxygenases. The trained model successfully identified the catalytic domain as the crucial region and assigned low scores to the regulatory and tetramerization domains (Fig. 2a). The N-terminus is classified as an ACT domain, which is widely found in enzymes of amino acid metabolism. Notably, this region obtained a low score even though it shares several identical residues with other aromatic amino acid hydroxylases, suggesting that this domain is not essential for monooxygenase activity. To visualize the prediction results, we highlighted the high-score region on the 3D structure of the human PAH protein using CAVER Analyst 2.0 [[24], [25], [26]]. Previous studies reported that phenylalanine forms hydrophobic interactions with R270, T278, P279, F331, G346, G349, and S350, and that the thiophene ring of the substrate analogue stacks against the imidazole ring of H285 [25,27]. The high-score region of the model was close to the active site of human PAH: Fig. 2b shows that it covered most of the substrate-binding and iron-binding residues, whereas the low-score regions do not contact the substrate and are located far from the active site. In particular, the FUTUSA prediction covered the active-site lid, Y138 [28].

Fig. 2. The heatmap visualization of the predicted functional contribution score of individual amino acids. The predicted FUTUSA scores are overlaid onto the crystal structure of full-length human PAH (residues 21–446; PDB:6N1K) and the catalytic domain of human PAH (residues 117–428; PDB:1MMK). (a) The heatmap shows low predictive scores in green and high predictive scores in red. (b) The iron ion (cyan) is shown as a ball. The substrate analogue beta-2-thienylalanine (THA; yellow) and the binding-pocket residues (gray and brown) are shown as sticks. The prediction was performed with phenylalanine-4-hydroxylase for monooxygenase activity.

Next, we computed the change in predictive score after single amino acid deletions and substitutions. More than 100 PAH mutations (missense and frameshift) that can lead to phenylketonuria (PKU) are reported in the ClinVar database [29]. These mutations present a broad range of phenotypic variation depending on the residual enzymatic activity. Hence, we verified the predictive performance of the full-length and segmentation models on the mutated sequences. FUTUSA_FL (full-length) failed to predict the outcomes of single amino acid changes (Fig. 3a). However, the segmentation models with 16 (Fig. 3b) and 64 (Fig. 3c) amino acids computed marked score changes after mutation. FUTUSA_16 showed score drops only in a few crucial regions, whereas FUTUSA_64 marked crucial regions more broadly. The regulatory and oligomerization domains showed trivial score changes in both segmentation models. FUTUSA_64 showed that many amino acid substitutions in the catalytic domain decreased the predictive score, but those in exons 4 and 11 did not do so substantially (Fig. 3c). Numerous missense variants and exon deletions associated with loss of function and phenylketonuria have been detected in the catalytic domain [[30], [31]]. These results reveal that the functional sites predicted by FUTUSA match well with the known functional domains.
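The single-residue scan described above can be sketched as follows; `score_fn` stands in for a trained model's protein-level predictor and is a placeholder assumption:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def mutation_scan(seq, score_fn):
    """Score change for every single-residue deletion and substitution.

    Returns {(position, new_residue_or_'del'): score_change} relative to
    the wild-type sequence, the quantity visualized in the Fig. 3 heatmaps.
    """
    wild_type = score_fn(seq)
    deltas = {}
    for pos in range(len(seq)):
        # Deletion of the residue at this position.
        deltas[(pos, "del")] = score_fn(seq[:pos] + seq[pos + 1:]) - wild_type
        # Substitution by each of the other 19 residues.
        for aa in AMINO_ACIDS:
            if aa != seq[pos]:
                mutant = seq[:pos] + aa + seq[pos + 1:]
                deltas[(pos, aa)] = score_fn(mutant) - wild_type
    return deltas
```

A negative delta marks a mutation the model predicts to impair function; plotting position against substituted residue reproduces the heatmap layout of Fig. 3.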

Fig. 3. The heatmap of score changes caused by single amino acid mutations. Protein function changes caused by point mutations were predicted using FUTUSA_FL (a), FUTUSA_16 (b), and FUTUSA_64 (c). The color indicates the score change after mutation: blue for a decreased score and red for an increased score. Each column represents the position of the amino acid and each row the substituted amino acid. The first row, del, indicates deletion of the amino acid. The prediction was performed with phenylalanine-4-hydroxylase for monooxygenase activity.

Discussion

We here propose a deep learning-based protein function predictor named FUTUSA. Since it requires only sequence information, FUTUSA is suitable for preliminary characterization of newly found proteins or uncharacterized variants. We also established a sequence segmentation preprocessing method, which effectively extracts functional features from protein sequence data and boosts the performance of function prediction. FUTUSA outperformed the baseline method, BLAST. Therefore, we propose FUTUSA as a new deep learning program for predicting the functions of uncharacterized proteins. Additionally, FUTUSA distinguished functionally essential amino acids from nonessential ones, suggesting that it could be used to predict the clinical impact of point mutations or single amino acid polymorphisms.

Generally, protein function prediction has a fundamental problem arising from training on imbalanced datasets. For instance, the number of oxidoreductase proteins is 719, only 3.4% of the total proteins trained in FUTUSA. This means that the machine was trained with far more irrelevant proteins (as negative controls) than target proteins. Especially for small datasets such as acetyltransferase and demethylase, this class imbalance raises the false-positive rate considerably. Comparing the equations for AUPR and AUROC, AUROC is substantially affected by false positives, whereas AUPR is not [32]. For small datasets, therefore, AUPR may evaluate the efficiency of function prediction more accurately than AUROC. For this reason, we emphasize the AUPR values (Table 2).

It should be noted that FUTUSA has some limitations in the data process. The sequence segmentation process increases the input data size, thereby increasing the training time. Thus, the segmentation should be optimized to achieve a good balance between the predictive performance and training time. Also, since FUTUSA is not a ready-to-use predictor, users should modify training dataset, optimize preprocessing parameters, and train it according to their purposes.

One reason for the delay in developing deep learning-based protein function predictors is that the performance of deep learning models depends on the quality and quantity of the training data. Therefore, some researchers have used diverse protein features, such as protein-protein interactions or protein motifs [8,9,33]. However, these features are available only for extensively studied proteins, not for uncharacterized ones. To solve this problem, many researchers have tried to develop protein function predictors requiring only amino acid sequences [10,34,35]. Despite many efforts, sequence-alone approaches have not fully satisfied users. In this context, we propose FUTUSA as a new method to meet those needs.

In future work, we plan to refine the preprocessing step. In the present study, we assigned the same weight to all segmented sequences, even those contributing insignificantly to protein function. Assigning different weights to each segment is expected to minimize artificial biases; for that, we should add a step to re-evaluate how much each segment contributes to the whole function. In addition, the model should be extended to multi-class classification. The present version was built as a single-class classification model to focus its predictive ability on classifying one protein function at a time. However, the hierarchical structure of GO terms and the corresponding annotation patterns clearly contain important information. Therefore, we intend to find a GO-term grouping method that optimally predicts a single GO term.

All the source code and datasets are available at https://github.com/snuhl-crain/FUTUSA.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The authors acknowledge the contribution of all the investigators at the participating study sites. This work was supported by the National Research Foundation of Korea (2019R1A2B5B03069677).

Footnotes

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.mex.2022.101622.

Appendix. Supplementary materials

mmc1.docx (16.4KB, docx)

References

  • 1.Bateman A. UniProt: a worldwide hub of protein knowledge. Nucleic Acids Res. 2019;47(D1):D506–D515. doi: 10.1093/nar/gky1049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ashburner M., Ball C.A., Blake J.A., Botstein D., Butler H., Cherry J.M., et al. Gene ontology: tool for the unification of biology. Nat. Genet. 2000;25(1):25–29. doi: 10.1038/75556. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Carbon S., Douglass E., Dunn N., Good B., Harris N.L., Lewis S.E., et al. The gene ontology resource: 20 years and still GOing strong. Nucleic Acids Res. 2019;47(D1):D330–D338. doi: 10.1093/nar/gky1055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Zhou N., Jiang Y., Bergquist T.R., Lee A.J., Kacsoh B.Z., Crocker A.W., et al. The CAFA challenge reports improved protein function prediction and new functional annotations for hundreds of genes through experimental screens. Genome Biol. 2019;20(1):1–23. doi: 10.1186/s13059-019-1835-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Abadi M., Barham P., Chen J., Chen Z., Davis A., Dean J., et al. Proceedings of the 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16) [Internet] {USENIX} Association; Savannah, GA: 2016. TensorFlow: a system for large-scale machine learning; pp. 265–283.https://www.usenix.org/conference/osdi16/technical-sessions/presentation/abadi Available from. [Google Scholar]
  • 6.Carneiro T., Da Nobrega R.V.M., Nepomuceno T., Bin B.G., De Albuquerque V.H.C., Filho P.P.R. Performance analysis of google colaboratory as a tool for accelerating deep learning applications. IEEE Access. 2018;6:61677–61685. [Google Scholar]
  • 7.Jing X., Dong Q., Hong D., Lu R. Amino acid encoding methods for protein sequences: a comprehensive review and assessment. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020;17(6):1918–1931. doi: 10.1109/TCBB.2019.2911677. [DOI] [PubMed] [Google Scholar]
  • 8.Cai Y., Wang J., Deng L. SDN2GO : an integrated deep learning model for protein function prediction. Front. Bioeng. Biotechnol. 2020;8(April):1–11. doi: 10.3389/fbioe.2020.00391. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Kulmanov M., Khan M.A., Hoehndorf R. DeepGO : predicting protein functions from sequence and interactions using a deep ontology-aware classifier. Bioinformatics. 2018;34:660–668. doi: 10.1093/bioinformatics/btx624. October 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Hoehndorf R. DeepGOPlus : improved protein function prediction from sequence. Bioinformatics. 2020;36:422–429. doi: 10.1093/bioinformatics/btz595. July 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Zhou G., Wang J., Zhang X., Yu G. Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2020. DeepGOA: predicting gene ontology annotations of proteins via graph convolutional network; pp. 1836–1841. [Google Scholar]
  • 12.Szegedy C., Liu W., Jia Y., Sermanet P., Reed S.E., Anguelov D., et al. Going deeper with convolutions. CoRR. 2014 [Internet]abs/1409.4. Available from http://arxiv.org/abs/1409.4842. [Google Scholar]
  • 13.Ioffe S., Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. Proceedings of the International Conference on Machine Learning, PMLR. 2015. [Google Scholar]
  • 14.Glorot X., Bordes A. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. Vol. 15. 2011. Deep sparse rectifier neural networks; pp. 315–323. [Google Scholar]
  • 15.Hinton G. Dropout : a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014;15:1929–1958. [Google Scholar]
  • 16.Kingma D.P., Ba J.L. Proceedings of the International Conference on Learning Representations (ICLR) 2014. Adam: a method for stochastic optimization; pp. 1–15. [Google Scholar]
  • 17.Aurelio Y.S., de Almeida G.M., de Castro C.L., Braga A.P. Learning from imbalanced data sets with weighted cross-entropy function. Neural Process. Lett. 2019;50(2):1937–1949. doi: 10.1007/s11063-018-09977-1. [Internet]Available from. [DOI] [Google Scholar]
  • 18.Camacho C., Coulouris G., Avagyan V., Ma N., Papadopoulos J., Bealer K., et al. BLAST+: architecture and applications. BMC Bioinform. 2009 Dec;10:421. doi: 10.1186/1471-2105-10-421. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Hanley J.A., McNeil B.J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143(1):29–36. doi: 10.1148/radiology.143.1.7063747. [DOI] [PubMed] [Google Scholar]
  • 20.Davis J., Goadrich M. Proceedings of the 23rd International Conference on Machine Learning. 2006. The relationship between precision-recall and ROC curves; pp. 233–240. [Google Scholar]
  • 21.Matthews B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta Protein Struct. 1975;405:442–451. doi: 10.1016/0005-2795(75)90109-9. [DOI] [PubMed] [Google Scholar]
  • 22.Cao C., Chicco D., Hoffman M.M. The MCC-F1 curve: a performance evaluation technique for binary classification. arXiv preprint arXiv:2006.11278. 2020:1–17. [Google Scholar]
  • 23.Flydal M.I., Martinez A. Phenylalanine hydroxylase: Function, structure, and regulation. IUBMB Life. 2013;65(4):341–349. doi: 10.1002/iub.1150. [Internet]Apr 1Available from. [DOI] [PubMed] [Google Scholar]
  • 24.Arturo E.C., Gupta K., Hansen M.R., Borne E., Jaffe E.K. Biophysical characterization of full-length human phenylalanine hydroxylase provides a deeper understanding of its quaternary structure equilibrium. J. Biol. Chem. 2019;294(26):10131–10145. doi: 10.1074/jbc.RA119.008294. [Internet]Jun 28Available from http://www.jbc.org/content/294/26/10131.abstract. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Andersen O.A., Stokka A.J., Flatmark T., Hough E. 2.0Å resolution crystal structures of the ternary complexes of human phenylalanine hydroxylase catalytic domain with tetrahydrobiopterin and 3-(2-Thienyl)-l-alanine or l-norleucine: substrate specificity and molecular motions related to substrate binding. J. Mol. Biol. 2003;333(4):747–757. doi: 10.1016/j.jmb.2003.09.004. [Internet]Available from http://www.sciencedirect.com/science/article/pii/S002228360301115X. [DOI] [PubMed] [Google Scholar]
  • 26.Jurcik A., Bednar D., Byska J., Marques S.M., Furmanova K., Daniel L., et al. CAVER analyst 2.0: analysis and visualization of channels and tunnels in protein structures and molecular dynamics trajectories. Bioinformatics. 2018;34(20):3586–3588. doi: 10.1093/bioinformatics/bty386. [Internet]Oct 15Available from. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Andreas Andersen O., Flatmark T., Hough E. Crystal structure of the ternary complex of the catalytic domain of human phenylalanine hydroxylase with tetrahydrobiopterin and 3-(2-Thienyl)-l-alanine, and its Implications for the mechanism of catalysis and substrate activation. J. Mol. Biol. 2002;320(5):1095–1108. doi: 10.1016/s0022-2836(02)00560-0. [Internet]Available from http://www.sciencedirect.com/science/article/pii/S0022283602005600. [DOI] [PubMed] [Google Scholar]
  • 28.Flydal M.I., Alcorlo-Pagés M., Johannessen F.G., Martínez-Caballero S., Skjærven L., Fernandez-Leiro R., et al. Structure of full-length human phenylalanine hydroxylase in complex with tetrahydrobiopterin. Proc. Natl. Acad. Sci. 2019;116(23):11229–11234. doi: 10.1073/pnas.1902639116. [Internet]Available from https://www.pnas.org/content/116/23/11229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Landrum M.J., Lee J.M., Benson M., Brown G.R., Chao C., Chitipiralla S., et al. ClinVar: Improving access to variant interpretations and supporting evidence. Nucleic Acids Res. 2018;46(D1):D1062–D1067. doi: 10.1093/nar/gkx1153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Chen C., Zhao Z., Ren Y., Kong X. Characteristics of PAH gene variants among 113 phenylketonuria patients from Henan Province. Zhonghua Yi Xue Yi Chuan Xue Za Zhi. 2018;35(6):791–795. doi: 10.3760/cma.j.issn.1003-9406.2018.06.003. Dec. [DOI] [PubMed] [Google Scholar]
  • 31.Lee Y., Lee D.H., Kim N., Lee S., Ahn J.Y., Choi T., et al. Mutation analysis of PAH gene and characterization of a recurrent deletion mutation in Korean patients with phenylketonuria. Exp. Mol. Med. 2008;40(5):533–540. doi: 10.3858/emm.2008.40.5.533. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Saito T., Rehmsmeier M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS One. 2015;10(3) doi: 10.1371/journal.pone.0118432. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Zhang F., Song H., Zeng M., Li Y., Kurgan L., Li M. DeepFunc: A deep learning framework for accurate prediction of protein functions from protein sequences and interactions. Proteomics. 2019;19(12):1–7. doi: 10.1002/pmic.201900019. [DOI] [PubMed] [Google Scholar]
  • 34.Strodthoff N., Wagner P., Wenzel M., Samek W. UDSMProt: universal deep sequence models for protein classification. Bioinformatics. 2020;36(8):2401–2409. doi: 10.1093/bioinformatics/btaa003. [Internet]Available from. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Ranjan A., Fahad M.S., Fernández-Baca D., Deepak A., Tripathi S. Deep robust framework for protein function prediction using variable-length protein sequences. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020;17(5):1648–1659. doi: 10.1109/TCBB.2019.2911609. [DOI] [PubMed] [Google Scholar]
