Brief Bioinform. 2024 Mar 13;25(2):bbae082. doi: 10.1093/bib/bbae082

Table 1.

Summary of DL models and formulas

| Model | Explanation | Formula | Denotation |
| --- | --- | --- | --- |
| DNNs | Feed-forward neural networks with multiple hidden layers and activation functions; approximate nonlinear transformations for specific goals. | $y_j = f\left(\sum_i w_{ij} x_i + b_j\right)$ | $y_j$: output of neuron $j$; $x_i$: output of neuron $i$; $w_{ij}$: weight connecting neuron $i$ to neuron $j$; $b_j$: bias; $f$: activation function |
| AEs | Deep generative models for dimensionality reduction; encode input data into latent variables and reconstruct the input from them. | $z = g(x),\ \hat{x} = h(z)$ | $z$: latent variable; $g$: encoder network; $\hat{x}$: reconstructed input; $h$: decoder network |
| VAEs | Encode inputs as distributions over the latent space; learn latent features through multi-layer neural networks. | $\hat{x} \sim p(x \mid z)$ | $z$: latent variable; $p(x \mid z)$: conditional distribution; $\hat{x}$: reconstructed input |
| CNNs | Supervised models for image processing; extract features from multidimensional input data using convolutional and pooling layers. | $F = X * K$ | $F$: feature map; $X$: input matrix; $K$: kernel; $*$: convolution operator |
| GNNs | Generalized models for processing graph-structured data; aggregate and transform node information through architectures such as the graph convolutional network (GCN) and the graph attention network (GAT). | $H^{(l+1)} = f\big(H^{(l)}, A\big)$ | $H^{(l)}$: hidden representation at layer $l$; $A$: adjacency matrix; $f$: function that aggregates and transforms the hidden states of nodes |

Minimal code sketches illustrating each of these formulas are given below.
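
To make the formulas in the table concrete, the sketches that follow implement each row in plain NumPy; all variable names, dimensions, and hyperparameter choices are illustrative assumptions, not details taken from the reviewed models. First, the DNN row: a single feed-forward layer computing $y_j = f\left(\sum_i w_{ij} x_i + b_j\right)$, with $\tanh$ as an assumed activation.

```python
import numpy as np

def dense_layer(x, W, b, activation=np.tanh):
    """One feed-forward layer: y_j = f(sum_i w_ij * x_i + b_j)."""
    return activation(x @ W + b)

# Toy example (dimensions are illustrative, not from the paper)
rng = np.random.default_rng(0)
x = rng.normal(size=4)        # x_i: outputs of the previous layer's neurons
W = rng.normal(size=(4, 3))   # w_ij: weight from neuron i to neuron j
b = np.zeros(3)               # b_j: bias of neuron j
y = dense_layer(x, W, b)      # y_j: output of neuron j
print(y.shape)                # (3,)
```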
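
For the AE row, a sketch of the encode/decode round trip $z = g(x)$, $\hat{x} = h(z)$, assuming single linear layers (with a $\tanh$ bottleneck) for the encoder and decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2                        # input and latent dimensions (illustrative)
We, Wd = rng.normal(size=(d, k)), rng.normal(size=(k, d))

def g(x):                          # encoder: z = g(x)
    return np.tanh(x @ We)

def h(z):                          # decoder: x_hat = h(z)
    return z @ Wd

x = rng.normal(size=d)
z = g(x)                           # latent variable
x_hat = h(z)                       # reconstructed input
loss = np.mean((x - x_hat) ** 2)   # reconstruction error to be minimized
```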
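
For the VAE row, a sketch that encodes the input as a Gaussian distribution over the latent space and samples $z$ via the standard reparameterization trick before reconstructing $\hat{x} \sim p(x \mid z)$; the Gaussian encoder and single linear decoder are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2                               # input and latent dims (illustrative)
W_mu, W_lv, W_dec = (rng.normal(size=s) for s in [(d, k), (d, k), (k, d)])

x = rng.normal(size=d)
mu, log_var = x @ W_mu, x @ W_lv          # parameters of q(z | x)
eps = rng.normal(size=k)
z = mu + np.exp(0.5 * log_var) * eps      # reparameterized sample z ~ q(z | x)
x_hat = z @ W_dec                         # mean of p(x | z): reconstructed input
# KL term that regularizes q(z | x) toward a standard normal prior
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
```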
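
For the CNN row, a direct valid-padding implementation of the feature-map computation $F = X * K$; as in most DL frameworks, the operation shown is technically cross-correlation.

```python
import numpy as np

def conv2d(X, K):
    """Valid 2-D convolution (cross-correlation): F = X * K."""
    h, w = X.shape[0] - K.shape[0] + 1, X.shape[1] - K.shape[1] + 1
    F = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # F(i, j) = sum of the kernel applied to the window at (i, j)
            F[i, j] = np.sum(X[i:i + K.shape[0], j:j + K.shape[1]] * K)
    return F

X = np.arange(16.0).reshape(4, 4)        # input matrix (illustrative values)
K = np.array([[1.0, 0.0], [0.0, -1.0]])  # kernel
F = conv2d(X, K)                         # feature map, shape (3, 3)
```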
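
For the GNN row, one propagation step $H^{(l+1)} = f\big(H^{(l)}, A\big)$, instantiating $f$ as a GCN-style layer with symmetric normalization; this choice of $f$ is one common example, not the only aggregation scheme covered by the formula.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN-style layer: f(H, A) = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],                   # adjacency matrix of a 3-node path
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))                # node features at layer l
W = rng.normal(size=(4, 2))                # learnable weights (illustrative)
H_next = gcn_layer(H, A, W)                # hidden states at layer l + 1
```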