Auto-Encoders are a popular type of unsupervised artificial neural network that takes unlabeled data and learns an efficient coding of the structure of the data that can be reused in other contexts. An auto-encoder approximates a function that maps the data from the full input space to lower-dimensional coordinates, and then maps those coordinates back to the original input space with minimum loss.
For classification or regression tasks, auto-encoders can be used to extract features from raw data to improve the robustness of a model. Auto-encoder networks have various other applications as well. We will discuss 7 applications of auto-encoders in this article.
Before diving into the applications of autoencoders, let’s briefly discuss what exactly an autoencoder network is.
Autoencoders are unsupervised neural networks that try to make the output layer as similar as possible to the input layer. An autoencoder architecture has two parts: an encoder, which compresses the input into a lower-dimensional code, and a decoder, which reconstructs the input from that code.
The autoencoder first compresses the input vector into a lower-dimensional space, then tries to reconstruct the output from it by minimizing the reconstruction error, making the output vector as similar as possible to the input vector.
There are various types of autoencoders, including regularized, concrete, and variational autoencoders. Refer to the Wikipedia page on autoencoders to learn more about these variations in detail.
Autoencoders learn to capture the natural structure in the data as an efficient lower-dimensional representation. They do this by using an encoding and decoding strategy that minimizes the reconstruction error.
Suppose the input and output layers have 3000 dimensions, and the desired reduced dimension is 200. We can build a 5-layer network where the encoder has layers of 3000 and 1500 neurons, mirrored by the decoder, with a 200-neuron bottleneck in between.
The activations of the compressed bottleneck layer can be treated as a reduced-dimensional embedding of the input.
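As a concrete illustration, here is a minimal sketch of this 3000 → 1500 → 200 → 1500 → 3000 network in tensorflow.keras. The activation functions, optimizer, and placeholder data are illustrative assumptions, not prescriptions from the article:

```python
# A minimal 5-layer autoencoder: 3000 -> 1500 -> 200 -> 1500 -> 3000.
import numpy as np
from tensorflow.keras import layers, models

input_dim, bottleneck_dim = 3000, 200

inputs = layers.Input(shape=(input_dim,))
encoded = layers.Dense(1500, activation="relu")(inputs)
encoded = layers.Dense(bottleneck_dim, activation="relu")(encoded)  # compressed code
decoded = layers.Dense(1500, activation="relu")(encoded)
decoded = layers.Dense(input_dim, activation="sigmoid")(decoded)    # reconstruction

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")  # minimize reconstruction error

# Train on unlabeled data X (random placeholder data here):
X = np.random.rand(256, input_dim).astype("float32")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# The encoder alone maps inputs to the 200-dimensional embedding.
encoder = models.Model(inputs, encoded)
embeddings = encoder.predict(X)  # shape: (256, 200)
```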
Autoencoders can be used as feature extractors for classification or regression tasks. They take unlabeled data and learn an efficient coding of the structure of the data that can then be used for supervised learning tasks.
After training an autoencoder on a sample of training data, we can discard the decoder part and use only the encoder to convert raw high-dimensional input data into a lower-dimensional encoded space. These lower-dimensional encodings can then be used as features for supervised tasks.
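A short sketch of this workflow, reusing the trained `encoder` from the previous snippet. The logistic-regression classifier and the placeholder labels are illustrative assumptions:

```python
# Use the trained encoder as a feature extractor for a downstream classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_raw = np.random.rand(256, 3000).astype("float32")  # raw high-dimensional inputs
y = np.random.randint(0, 2, size=256)                 # placeholder labels

X_features = encoder.predict(X_raw)  # 200-dim encoded features; decoder is ignored
clf = LogisticRegression(max_iter=1000).fit(X_features, y)
```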
Real-world raw input data is often noisy, and training a robust supervised model requires clean, noiseless data. Autoencoders can be used to denoise the data.
Image denoising is one of the popular applications, where the autoencoder tries to reconstruct a noiseless image from a noisy input image.
The noisy image is fed into the autoencoder as input, and a noiseless output is reconstructed by minimizing the reconstruction loss against the original (noiseless) target image. Once the autoencoder weights are trained, they can be used to denoise new raw images.
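Here is a minimal denoising sketch on MNIST; the Gaussian noise level and the small architecture are illustrative choices, not from the article:

```python
# Denoising autoencoder: noisy images as input, clean images as target.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

(x_train, _), _ = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)

inputs = layers.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(inputs)
outputs = layers.Dense(784, activation="sigmoid")(h)

denoiser = models.Model(inputs, outputs)
denoiser.compile(optimizer="adam", loss="binary_crossentropy")
denoiser.fit(x_noisy, x_train, epochs=3, batch_size=128, verbose=0)

# After training, denoise new noisy images:
x_denoised = denoiser.predict(x_noisy[:10])
```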
Image compression is another application of an autoencoder network. The raw input image can be passed to the encoder network to obtain a compressed encoding. The autoencoder weights are learned by reconstructing the image from this compressed encoding using the decoder network.
Usually, however, autoencoders are not very good at general-purpose data compression; basic compression algorithms work better.
Autoencoders can still be used to compress a database of images for image search: the compressed embeddings can be compared against, or searched with, the encoded version of a query image, as sketched below.
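A sketch of nearest-neighbour image search over encoder embeddings, assuming the trained `encoder` from the first snippet. Using scikit-learn's NearestNeighbors for the lookup is an illustrative choice:

```python
# Search a database of images via their compressed embeddings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

database = np.random.rand(1000, 3000).astype("float32")  # placeholder image vectors
db_codes = encoder.predict(database)                     # compressed embeddings

index = NearestNeighbors(n_neighbors=5).fit(db_codes)

query = np.random.rand(1, 3000).astype("float32")
query_code = encoder.predict(query)
distances, indices = index.kneighbors(query_code)  # indices of the 5 closest images
```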
Anomaly detection is another useful application of an autoencoder network. An anomaly detection model can be used to detect fraudulent transactions or to handle other highly imbalanced supervised tasks.
The idea is to train the autoencoder only on sample data from one class (the majority class). This way the network learns to reconstruct inputs from that class with a small reconstruction loss. If a sample from another target class is then passed through the network, it results in a comparatively larger reconstruction loss.
A threshold on the reconstruction loss (the anomaly score) can then be chosen; samples with a loss larger than the threshold are considered anomalies.
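A sketch of the thresholding step, assuming the `autoencoder` from the first snippet was trained on majority-class data only. The 95th-percentile cutoff is an illustrative assumption:

```python
# Score samples by reconstruction error; flag those above a threshold.
import numpy as np

X_train = np.random.rand(256, 3000).astype("float32")  # majority-class data
X_test = np.random.rand(64, 3000).astype("float32")

recon_train = autoencoder.predict(X_train)
train_errors = np.mean((X_train - recon_train) ** 2, axis=1)
threshold = np.percentile(train_errors, 95)            # anomaly-score cutoff

recon_test = autoencoder.predict(X_test)
test_errors = np.mean((X_test - recon_test) ** 2, axis=1)
is_anomaly = test_errors > threshold                   # True = flagged as anomaly
```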
Denoising autoencoders can be used to impute missing values in a dataset. The idea is to train the autoencoder by randomly masking out values in the input data and reconstructing the original raw data by minimizing the reconstruction loss.
Once the autoencoder weights are trained, records with missing values can be passed through the network to reconstruct the input data, with the missing features imputed.
(Image by Author), Imputing missing values with a denoising autoencoder
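A minimal imputation sketch in the same denoising style; the masking rate, zero-fill convention, and small architecture are illustrative assumptions:

```python
# Train on randomly masked records, then fill only the missing positions.
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(512, 30).astype("float32")  # complete training data
mask = np.random.rand(*X.shape) < 0.2          # 20% simulated missingness
X_masked = np.where(mask, 0.0, X)              # missing entries set to 0

inputs = layers.Input(shape=(30,))
h = layers.Dense(16, activation="relu")(inputs)
outputs = layers.Dense(30, activation="sigmoid")(h)
imputer = models.Model(inputs, outputs)
imputer.compile(optimizer="adam", loss="mse")

# Corrupted records as input, original records as target.
imputer.fit(X_masked, X, epochs=10, batch_size=64, verbose=0)

# At inference, replace only the missing positions with reconstructed values.
X_recon = imputer.predict(X_masked)
X_imputed = np.where(mask, X_recon, X_masked)
```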
In this article, we have given a brief overview of various applications of autoencoders. For image reconstruction, we can use a variation called the convolutional autoencoder, which minimizes the reconstruction error by learning optimal convolutional filters.
In my upcoming articles, I will implement each of the above-discussed applications.