Table 1. Summary of DL models and formulas
| Model | Explanation | Formula | Denotation |
|---|---|---|---|
| DNNs | Feed-forward neural networks with multiple hidden layers and activation functions that approximate nonlinear transformations of the input for a given task. | $\mathbf{h}^{(l)} = \sigma\big(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)} + \mathbf{b}^{(l)}\big)$ | $\mathbf{h}^{(l)}$: output of layer $l$, with $\mathbf{h}^{(0)} = \mathbf{x}$ the input; $\mathbf{W}^{(l)}$: weight matrix of layer $l$; $\mathbf{b}^{(l)}$: bias vector of layer $l$; $\sigma$: activation function |
| AEs | Deep generative models for dimensionality reduction that encode input data into latent variables and reconstruct the input from them. | $\mathbf{z} = f_{\theta}(\mathbf{x}), \quad \hat{\mathbf{x}} = g_{\phi}(\mathbf{z})$ | $\mathbf{x}$: input; $\mathbf{z}$: latent variables; $f_{\theta}$: encoder; $g_{\phi}$: decoder; $\hat{\mathbf{x}}$: reconstructed input |
| VAEs | Encode inputs as distributions over the latent space rather than as single points, and learn latent features through multi-layer neural networks. | $\mathcal{L} = \mathbb{E}_{q_{\phi}(\mathbf{z}\mid\mathbf{x})}\big[\log p_{\theta}(\mathbf{x}\mid\mathbf{z})\big] - D_{\mathrm{KL}}\big(q_{\phi}(\mathbf{z}\mid\mathbf{x})\,\Vert\,p(\mathbf{z})\big)$ | $q_{\phi}(\mathbf{z}\mid\mathbf{x})$: approximate posterior (encoder); $p_{\theta}(\mathbf{x}\mid\mathbf{z})$: likelihood (decoder); $p(\mathbf{z})$: prior over the latent space; $D_{\mathrm{KL}}$: Kullback-Leibler divergence |
| CNNs | Supervised models for image processing that extract features from multidimensional input data using convolutional and pooling layers. | $h_{i,j}^{(k)} = \sigma\big(\sum_{m}\sum_{n} w_{m,n}^{(k)}\, x_{i+m,\,j+n} + b^{(k)}\big)$ | $x$: input feature map; $w^{(k)}$, $b^{(k)}$: weights and bias of the $k$-th convolutional kernel; $h^{(k)}$: output feature map of kernel $k$; $\sigma$: activation function |
| GNNs | Generalized models for graph data processing that aggregate and transform node information through architectures such as the graph convolutional network (GCN) and the graph attention network (GAT). | $\mathbf{H}^{(l+1)} = \sigma\big(\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\mathbf{H}^{(l)}\mathbf{W}^{(l)}\big)$ | $\mathbf{H}^{(l)}$: node feature matrix at layer $l$; $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$: adjacency matrix with self-loops; $\tilde{\mathbf{D}}$: degree matrix of $\tilde{\mathbf{A}}$; $\mathbf{W}^{(l)}$: trainable weight matrix; $\sigma$: activation function |
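To make the table concrete, each row is illustrated below with a minimal, self-contained code sketch. The sketches assume PyTorch with toy dimensions; the framework, layer sizes, and class names (e.g. `DNN`) are illustrative choices, not implementations from any cited work. The first sketch corresponds to the DNNs row: stacked linear layers with activation functions, matching $\mathbf{h}^{(l)} = \sigma(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)} + \mathbf{b}^{(l)})$.

```python
import torch
import torch.nn as nn

# Minimal feed-forward DNN: stacked linear layers with nonlinear activations,
# i.e. h^(l) = sigma(W^(l) h^(l-1) + b^(l)) applied layer by layer.
class DNN(nn.Module):
    def __init__(self, in_dim=16, hidden_dim=32, out_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(8, 16)      # batch of 8 input vectors
print(DNN()(x).shape)       # torch.Size([8, 4])
```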
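The AEs row can be sketched as an encoder-decoder pair trained to minimize reconstruction error; again a hypothetical sketch with assumed dimensions rather than a reference implementation.

```python
import torch
import torch.nn as nn

# Minimal autoencoder: encoder maps x to a low-dimensional latent z,
# decoder reconstructs x_hat; training minimizes ||x - x_hat||^2.
class AE(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.encoder(x)        # latent variables
        return self.decoder(z), z  # reconstruction and latent code

x = torch.randn(8, 64)
x_hat, z = AE()(x)
loss = nn.functional.mse_loss(x_hat, x)   # reconstruction error
print(z.shape, loss.item())
```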
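For the VAEs row, a minimal sketch in which the encoder outputs a mean and log-variance defining $q_{\phi}(\mathbf{z}\mid\mathbf{x})$, the latent code is sampled via the reparameterization trick, and training minimizes the negative ELBO (reconstruction term plus KL divergence).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal VAE: encoder predicts mu and log-variance of q(z|x),
# z is sampled with the reparameterization trick, then decoded to x_hat.
class VAE(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 32)
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

x = torch.randn(8, 64)
x_hat, mu, logvar = VAE()(x)
recon = F.mse_loss(x_hat, x, reduction="sum")                     # reconstruction term
kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())     # KL(q(z|x) || N(0, I))
loss = recon + kld                                                # negative ELBO
print(loss.item())
```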
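For the CNNs row, a minimal sketch of convolutional feature extraction, pooling, and a linear classifier; the 28x28 single-channel input size and the number of filters are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Minimal CNN: convolutional layers extract local features, max-pooling
# downsamples, and a final linear layer maps the features to class scores.
class CNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)  # assumes 28x28 inputs

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

x = torch.randn(8, 1, 28, 28)   # batch of 28x28 single-channel images
print(CNN()(x).shape)           # torch.Size([8, 10])
```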
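For the GNNs row, a minimal hand-rolled GCN layer implementing the normalized aggregation in the table; a real application would typically rely on a dedicated graph library, which is omitted here to keep the sketch self-contained.

```python
import torch
import torch.nn as nn

# Minimal GCN layer: H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W), i.e. each node
# aggregates degree-normalized neighbor features and applies a shared linear map.
class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(norm @ h))          # aggregate, transform, activate

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path graph
h = torch.randn(3, 4)                                           # node feature matrix
print(GCNLayer(4, 2)(h, adj).shape)                             # torch.Size([3, 2])
```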