Author manuscript; available in PMC: 2021 Sep 27.
Published in final edited form as: Data Min Knowl Discov. 2020 Nov 17;35(1):46–87. doi: 10.1007/s10618-020-00722-8

Fig. 7.

An illustration of an autoencoder (AE). The first part of the network, called the encoder, compresses the input into a latent space by learning the function h = f(x). The second part, called the decoder, reconstructs the input from the latent-space representation by learning the function ŷ = g(h).
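The encoder/decoder pair described in the caption can be sketched as a minimal untrained forward pass. The dimensions, the tanh nonlinearity, and the random weight initialization below are illustrative assumptions, not the architecture used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration: 8-dim input, 3-dim latent space.
d_in, d_latent = 8, 3

# Encoder parameters for h = f(x): a single linear layer (one of many choices).
W_enc = rng.normal(scale=0.1, size=(d_latent, d_in))
b_enc = np.zeros(d_latent)

# Decoder parameters for y_hat = g(h): maps the latent code back to input space.
W_dec = rng.normal(scale=0.1, size=(d_in, d_latent))
b_dec = np.zeros(d_in)

def f(x):
    """Encoder: compress input x into a latent code h."""
    return np.tanh(W_enc @ x + b_enc)

def g(h):
    """Decoder: reconstruct the input from the latent code h."""
    return W_dec @ h + b_dec

x = rng.normal(size=d_in)
h = f(x)                                      # latent representation
y_hat = g(h)                                  # reconstruction of x
reconstruction_error = np.mean((x - y_hat) ** 2)
```

Training would then minimize `reconstruction_error` over a dataset by adjusting the encoder and decoder weights; the latent code `h` is the compressed representation the figure depicts between the two halves of the network.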