Autoencoder
Autoencoder
Architecture
•Encoder: compresses the input into a latent space, usually of smaller dimension than the input: h = f(x)
•Decoder: reconstructs the input from the latent space: r = g(f(x)), with r as close to x as possible (a minimal code sketch follows below)
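A minimal sketch of this encoder/decoder composition, written with Keras (the framework, layer sizes, and activations are illustrative assumptions; the slides do not specify them):

```python
# Minimal autoencoder sketch: encoder f maps x to a smaller code h,
# decoder g maps h back to a reconstruction r = g(f(x)).
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784   # e.g. a flattened 28x28 image (illustrative value)
code_dim = 32     # latent ("bottleneck") dimension, smaller than the input

# Encoder: h = f(x)
encoder = keras.Sequential([
    keras.Input(shape=(input_dim,)),
    layers.Dense(code_dim, activation="relu"),
])

# Decoder: r = g(h)
decoder = keras.Sequential([
    keras.Input(shape=(code_dim,)),
    layers.Dense(input_dim, activation="sigmoid"),
])

# Full autoencoder: r = g(f(x)), trained so that r stays close to x
inputs = keras.Input(shape=(input_dim,))
reconstruction = decoder(encoder(inputs))
autoencoder = keras.Model(inputs, reconstruction)
autoencoder.compile(optimizer="adam", loss="mse")
```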
Autoencoder
•It has three layers: an input layer, a hidden (bottleneck) layer, and an output layer
[Figure: network diagram mapping input X to reconstruction X’]
Autoencoder
Traditionally, autoencoders were used for dimensionality reduction or feature learning.
Hyperparameters of Autoencoders:
•Code size: It represents the number of nodes in the middle layer. Smaller size results in more compression.
•Number of nodes per layer: the number of nodes per layer decreases with each successive layer of the encoder and increases again in the decoder; the decoder mirrors the encoder's layer structure.
•Loss function: we use either mean squared error or binary cross-entropy. If the input values are in the range [0, 1], binary cross-entropy is typically used; otherwise, mean squared error (see the sketch after this list).
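A sketch of how these hyperparameters map onto a concrete model (the specific widths, depth, and activations below are illustrative assumptions, as is the Keras framework):

```python
# Hyperparameters of the autoencoder made explicit:
#   - code_size: nodes in the bottleneck layer
#   - hidden_dims: nodes per encoder layer (decoder mirrors them)
#   - loss: binary cross-entropy for inputs in [0, 1], else MSE
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784          # input dimensionality
hidden_dims = [128, 64]  # encoder widths, decreasing toward the code
code_size = 32           # number of nodes in the middle (bottleneck) layer

model = keras.Sequential(
    [keras.Input(shape=(input_dim,))]
    + [layers.Dense(d, activation="relu") for d in hidden_dims]            # encoder
    + [layers.Dense(code_size, activation="relu")]                         # bottleneck
    + [layers.Dense(d, activation="relu") for d in reversed(hidden_dims)]  # decoder
    + [layers.Dense(input_dim, activation="sigmoid")]                      # output in [0, 1]
)

# Inputs assumed scaled to [0, 1], so binary cross-entropy is used here.
model.compile(optimizer="adam", loss="binary_crossentropy")
```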
Autoencoder types
Undercomplete autoencoders
Regularized autoencoders
Sparse autoencoders
Denoising autoencoders
Contractive autoencoders
Undercomplete Autoencoders
• Denoising: feed the clean image plus added noise as input and train the network to reproduce the clean image (see the sketch below).
• Image colorization: feed a black-and-white image as input and train the network to produce a color image.
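A sketch of the denoising setup (the noise level, data names, and the `autoencoder` model are illustrative assumptions): the network receives a corrupted image as input but is trained against the original clean image as the target.

```python
# Denoising training setup: corrupt the inputs, keep the targets clean.
import numpy as np

def make_noisy(x_clean, noise_factor=0.3, seed=0):
    """Add Gaussian noise to clean images and clip back to the valid [0, 1] range."""
    rng = np.random.default_rng(seed)
    x_noisy = x_clean + noise_factor * rng.standard_normal(x_clean.shape)
    return np.clip(x_noisy, 0.0, 1.0)

# x_train: clean images scaled to [0, 1]; `autoencoder` is a model such as
# the one sketched earlier (both assumed to exist here).
# x_train_noisy = make_noisy(x_train)
# autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=256)
```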