It’s a machine learning model where the data flows through a “pinch point” but comes out (hopefully) unchanged. In effect it is two models in one: a data compressor and a data expander.
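To make the “two models in one” idea concrete, here is a minimal sketch in PyTorch. The layer sizes are just illustrative assumptions: 784 inputs (say, a flattened 28×28 image) squeezed through a 32-wide pinch point.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # The "compressor" half: squeezes the input down to the pinch point.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),
        )
        # The "expander" half: rebuilds the original from the pinch point.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training pushes the output to match the input, which forces the
# pinch point to learn a compact representation of the data.
model = AutoEncoder()
x = torch.rand(16, 784)  # dummy batch standing in for real data
loss = nn.functional.mse_loss(model(x), x)
```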
It can be used to compress data so that other models (e.g. classifiers) have less data to process and may therefore be faster to train.
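Continuing the sketch above, the compressed features come from running the compressor half alone:

```python
# 784 raw values shrink to 32 features, which a downstream
# classifier can train on instead of the full input.
features = model.encoder(x)  # shape: (16, 32)
```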
They can also be used to synthesize new fake data by feeding random or modified values into the “expander” half of the model.
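Again using the sketch above, synthesis just means running the expander on a random pinch-point vector. (On an untrained model this only produces noise; after training on real data, the output resembles the training set.)

```python
# Decode a random point at the pinch point to synthesize a fake sample.
fake = model.decoder(torch.randn(1, 32))  # shape: (1, 784)
```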
They can also work as a type of anomaly detector: a trained autoencoder will only work reliably with data very similar to the training set. Anomalous data will tend to produce extreme values at the pinch point and will not decompress correctly, ending up badly corrupted. So if you compare the output against the input and the difference is large, there is likely some sort of anomaly, as in the sketch below.
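A rough sketch of that check, assuming the `model` from above has already been trained on normal data. The threshold value here is hypothetical; in practice you would pick it by looking at reconstruction errors on known-good data.

```python
THRESHOLD = 0.05  # hypothetical cutoff, tune on known-good data

def is_anomalous(model, x, threshold=THRESHOLD):
    """Flag inputs the autoencoder fails to rebuild accurately."""
    with torch.no_grad():
        reconstruction = model(x)
    # Per-sample mean squared error between input and output.
    error = ((x - reconstruction) ** 2).mean(dim=1)
    return error > threshold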