On the left, we depict an autoencoder consisting of an input layer, a low-dimensional hidden layer (the latent space), and an output layer. In the training phase, a low-dimensional model of the data is learned; in the synthesis phase, a sample is drawn from this model and used to generate a new image. This architecture is applied to auto-fluorescence images of 1,700 different brains (25 micron resolution) to synthesize new images. On the right: a synthetically generated image (top), an example of a real image used to train the network (bottom), and a denoised (reconstructed) version of the bottom image.
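The two phases described above can be sketched with a minimal linear autoencoder in NumPy. This is an illustrative toy only: the actual network, layer sizes, latent dimension, and training procedure used for the brain images are not specified here, so all dimensions and hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: n samples, d input features, k latent dims.
n, d, k = 200, 16, 2
# Synthetic data lying near a k-dimensional subspace (stand-in for images).
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))   # decoder weights
lr = 1e-3

mse_init = np.mean(((X @ W_enc) @ W_dec - X) ** 2)

# --- Training phase: learn a low-dimensional model of the data ---
for _ in range(500):
    Z = X @ W_enc                  # encode into the latent space
    X_hat = Z @ W_dec              # decode (reconstruct)
    err = X_hat - X
    # gradients of the mean-squared reconstruction error
    g_dec = Z.T @ err / n
    g_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse_final = np.mean(((X @ W_enc) @ W_dec - X) ** 2)

# --- Synthesis phase: sample a latent code, decode a new "image" ---
z_new = rng.normal(size=(1, k))
x_new = z_new @ W_dec

# Denoised (reconstructed) version of a real sample:
x_recon = (X[:1] @ W_enc) @ W_dec
```

A real application would use deep nonlinear encoder/decoder networks and a generative latent model (e.g. a fitted latent density) for synthesis; the structure of the two phases is the same.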