An autoencoder is a type of neural network architecture that compresses input data down to its most essential features and then reconstructs the original input from that compressed form.
During training, the network learns by comparing its reconstructed output against the original input. The gap between the two is called the reconstruction error, which the model minimizes through backpropagation and gradient descent. No labeled data is required, because the original input itself serves as the reference point, making autoencoders an unsupervised learning method.
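This training loop can be sketched in a few lines. The example below is a minimal, illustrative linear autoencoder in NumPy: the data, layer sizes, and learning rate are all assumed for demonstration, and the gradients of the mean-squared reconstruction error are written out by hand in place of a framework's automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # assumed toy dataset: 200 samples, 8 features

# Encoder and decoder as single linear layers (sizes chosen for illustration)
W_enc = rng.normal(scale=0.1, size=(8, 3))  # compress 8 features down to 3
W_dec = rng.normal(scale=0.1, size=(3, 8))  # expand 3 back to 8
lr = 0.5                                    # assumed learning rate

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return np.mean((recon - X) ** 2)  # reconstruction error (MSE)

initial = loss(X, W_enc, W_dec)
for _ in range(1000):
    Z = X @ W_enc       # compressed (latent) representation
    recon = Z @ W_dec   # reconstruction of the input
    err = recon - X     # per-element reconstruction error
    # Gradients of the MSE loss with respect to each weight matrix
    grad_dec = Z.T @ err * (2 / X.size)
    grad_enc = X.T @ (err @ W_dec.T) * (2 / X.size)
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec
final = loss(X, W_enc, W_dec)  # reconstruction error after training
```

Because the input itself is the training target, no labels appear anywhere in the loop; the only signal is how far the reconstruction is from the original.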
Every autoencoder has three core components: the encoder, which compresses the data; the bottleneck, which holds the most compressed version of that data as a latent space representation; and the decoder, which reconstructs the data back toward its original form.
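The three components map directly onto code. The sketch below makes each one an explicit step; the class name, layer sizes, and `tanh` activation are illustrative assumptions, not a prescribed design.

```python
import numpy as np

rng = np.random.default_rng(0)

class Autoencoder:
    """Illustrative autoencoder: encoder -> bottleneck -> decoder."""

    def __init__(self, n_in=8, n_bottleneck=3):
        # Assumed sizes: 8 input features squeezed through a 3-unit bottleneck
        self.W_enc = rng.normal(scale=0.1, size=(n_in, n_bottleneck))
        self.W_dec = rng.normal(scale=0.1, size=(n_bottleneck, n_in))

    def encode(self, x):
        # Encoder: compress the input into the bottleneck's latent space
        return np.tanh(x @ self.W_enc)

    def decode(self, z):
        # Decoder: reconstruct the data from the latent representation
        return z @ self.W_dec

    def forward(self, x):
        # Full pass: compress, then reconstruct
        return self.decode(self.encode(x))

ae = Autoencoder()
x = rng.normal(size=(5, 8))
z = ae.encode(x)       # latent space representation, shape (5, 3)
recon = ae.forward(x)  # reconstruction, same shape as the input: (5, 8)
```

The bottleneck is simply the narrowest point in the network: whatever survives the trip from 8 dimensions down to 3 is the latent representation the decoder must work from.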
Once trained, the autoencoder can be used in two ways.
- The full network (encoder plus decoder) takes an input, compresses it, and reconstructs it back to its original form.
- The encoder alone takes an input and compresses it down to a compact representation.
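The two usage modes amount to running different slices of the same network. In the sketch below, randomly initialized weight matrices stand in for a trained encoder and decoder, so only the shapes are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 3))  # stand-in for trained encoder weights
W_dec = rng.normal(size=(3, 8))  # stand-in for trained decoder weights

x = rng.normal(size=(1, 8))  # a single 8-feature input

# Mode 1: full network -- compress, then reconstruct the original form
reconstruction = (x @ W_enc) @ W_dec  # shape (1, 8), same as the input

# Mode 2: encoder alone -- keep only the compact representation
embedding = x @ W_enc                 # shape (1, 3)
```

Mode 1 is what tasks like denoising rely on; mode 2 turns the encoder into a feature extractor whose compact output can feed downstream models.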
This architecture is what enables autoencoders to power applications like anomaly detection, image denoising, data compression, and image generation.