
FIG. 3.

Examples of deep learning architectures. Models depicted in (a)–(d) are examples of supervised learning, and networks shown in (e) and (f) are unsupervised. (a) An example of an FFNN architecture with gene expression counts as its input. (b) An example of a CNN architecture, where the model passes the inputs through the three stages of a CNN (with non-linear activation not depicted) to extract features. The outputs are then flattened and fed into a fully connected layer (or layers). (c) The general training flow of an RNN, with the unrolled version showing the time-step-dependent inputs, hidden states, and outputs. The inputs to RNNs must have a sequential structure (e.g., time-series data). (d) An illustration of a ResNet. In traditional ResNets, identity mappings (or skip connections) pass the input of a residual block to its output (often through addition). (e) The general architecture of a trained denoising AE in the inference stage, with a noisy histology slide as its input, yielding a denoised version of the input image. (f) A depiction of a traditional VAE in the inference stage. The VAE aims to generate synthetic data that closely resemble the original input; this is achieved by regularizing the latent space of an AE with a probabilistic encoder and decoder.
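To make the skip-connection mechanism in panel (d) concrete, the following is a minimal sketch of a residual block, assuming PyTorch; the channel count, layer choices, and input size are illustrative assumptions, not taken from the figure.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A basic residual block: output = activation(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        # Two convolutional layers form the residual function F(x).
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x  # the skip connection keeps a reference to the input
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity  # identity mapping added to the block's output
        return self.relu(out)


# Example usage (shapes are arbitrary for illustration):
x = torch.randn(1, 64, 32, 32)
block = ResidualBlock(64)
y = block(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Because the block's output is F(x) + x, the convolutional layers only need to learn a residual correction to the identity, which is what allows very deep ResNets to be trained effectively.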