Fully connected neural networks
FCNNs are the most conventional deep neural networks (DNNs). In a layer, each neuron is connected to all neurons in the subsequent layer [12].
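The dense connectivity described above can be sketched as a single layer in numpy; all names and dimensions here are illustrative, not taken from the source:

```python
import numpy as np

def fc_layer(x, W, b):
    """One fully connected layer: every input neuron feeds every output neuron."""
    return np.maximum(0.0, W @ x + b)  # affine map followed by ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # 4 input neurons
W = rng.standard_normal((3, 4))   # each of 3 output neurons sees all 4 inputs
b = np.zeros(3)
h = fc_layer(x, W, b)
print(h.shape)  # (3,)
```

The weight matrix has one entry per input-output pair, which is exactly what "each neuron is connected to all neurons in the subsequent layer" means.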
Convolutional neural networks
CNNs are able to model spatial structures such as images or DNA sequences. In contrast to FCNNs, each neuron is connected only to a local neighbourhood of neurons in the subsequent layer. In convolution layers, kernels are slid over the input data to model local information [12].
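The sliding-kernel operation can be illustrated in one dimension (as for a DNA sequence); this is a minimal sketch, and the input and kernel values are made up for the example:

```python
import numpy as np

def conv1d(x, kernel):
    """Slide a kernel over a 1D input; each output sees only a local window."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])   # a simple difference (edge-like) kernel
print(conv1d(x, kernel))  # [-2. -2. -2.]
```

Note that, as in most deep-learning frameworks, this computes cross-correlation rather than a flipped-kernel convolution; the distinction does not matter when the kernel is learned.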
Recurrent neural networks
RNNs model sequential data well by maintaining a state vector that encodes the information of previous time steps. This state is represented by the hidden units of the network and is updated at each time step [12].
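The per-time-step state update can be sketched as a vanilla RNN cell; the weight shapes and the tanh nonlinearity are conventional choices for illustration, not details from the source:

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    """Update the hidden state from the previous state and the current input."""
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(0)
W_h = rng.standard_normal((3, 3)) * 0.1   # state-to-state weights
W_x = rng.standard_normal((3, 2)) * 0.1   # input-to-state weights
b = np.zeros(3)

h = np.zeros(3)                            # initial hidden state
for x in rng.standard_normal((5, 2)):      # 5 time steps of 2-dim inputs
    h = rnn_step(h, x, W_h, W_x, b)
print(h.shape)  # (3,)
```

The same weights are reused at every step, so the final `h` summarises the whole sequence seen so far.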
Graph neural networks
GNNs model graphs consisting of entities and their connections, representing e.g. molecules or nuclei of a tissue. Layers of GNNs can take on different forms such as convolutions and recurrence [14].
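A graph-convolutional layer of the kind mentioned above can be sketched as mean aggregation over each node's neighbours followed by a shared linear map; the three-node graph and all dimensions are invented for the example:

```python
import numpy as np

# Adjacency matrix with self-loops for a fully connected 3-node graph
A = np.ones((3, 3))
deg = A.sum(axis=1)
A_norm = A / deg[:, None]           # mean aggregation over neighbours

H = np.eye(3)                       # one feature vector per node
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))     # shared weights applied at every node

H_next = np.maximum(0.0, A_norm @ H @ W)   # one graph-convolution layer
print(H_next.shape)  # (3, 2)
```

Each node's new features depend only on its neighbours' features, which is what lets the same layer operate on graphs of any shape.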
Autoencoders
AEs learn a lower-dimensional encoding of the input data by first compressing it and then reconstructing the original input data. Layers can be of different types such as fully connected or convolutional [15].
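The compress-then-reconstruct structure can be sketched with two linear maps around a bottleneck; the dimensions and random weights below are illustrative (in practice both maps are trained to minimise the reconstruction error):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                  # 8-dimensional input

W_enc = rng.standard_normal((3, 8)) * 0.3   # encoder: compress to 3 dimensions
W_dec = rng.standard_normal((8, 3)) * 0.3   # decoder: expand back to 8

z = W_enc @ x                               # lower-dimensional encoding
x_hat = W_dec @ z                           # reconstruction of the input
loss = np.mean((x - x_hat) ** 2)            # reconstruction error to minimise
print(z.shape, x_hat.shape)  # (3,) (8,)
```

Because the bottleneck `z` has fewer dimensions than the input, the network can only reconstruct well if the encoding captures the input's main structure.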