Autoencoders
Last updated
An autoencoder is composed of two concatenated neural networks, generally trained as a whole:

Encoder: extracts a selected set of features (the latent space) by compressing the input into a lower-dimensional space.

Decoder: tries to reconstruct the original input as closely as possible.

Some applications: denoising images, anomaly detection, recommendation engines, generative models, etc.

After a long training phase, we can keep just the 'decoder' part and feed it new latent inputs in order to create generative models. This is the 'magic' component behind the generative capabilities of neural networks.

Plain autoencoders can be difficult to use as generative models, so Variational Autoencoders were introduced to give the latent space a better structure (a mean and a standard deviation per latent dimension). Forcing the network to learn a latent space described by means and standard deviations gives us a much more natural way to create new elements (generative models).

Example: randomly generated new faces. New faces, but why not increase their resolution? Why only faces? Why not microscopic images of something, at much higher resolution? Why not generate synthetic new data from a small dataset of images? etc.

Further reading: 7 Applications of Auto-Encoders every Data Scientist should know (Medium).
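The encoder/decoder split and the VAE's mean-and-standard-deviation latent space can be sketched in a few lines. This is a minimal illustration with untrained toy weights, not a real trained model: all dimensions, weight matrices, and function names here are assumptions chosen for the example. It shows the two paths mentioned above: the reconstruction path (encode, sample, decode) and the generative path (decoder alone, fed with random latent vectors).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration): an 8-value "image", a 2-D latent space.
input_dim, latent_dim = 8, 2

# Untrained toy weights standing in for a learned encoder and decoder.
W_mu = rng.normal(size=(latent_dim, input_dim)) * 0.1
W_logvar = rng.normal(size=(latent_dim, input_dim)) * 0.1
W_dec = rng.normal(size=(input_dim, latent_dim)) * 0.1

def encode(x):
    """Encoder: compress x into the parameters (mean, log-variance) of a latent Gaussian."""
    return W_mu @ x, W_logvar @ x

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Decoder: map a latent vector back to input space."""
    return np.tanh(W_dec @ z)

x = rng.normal(size=input_dim)               # a fake input "image"
mu, log_var = encode(x)
x_rec = decode(sample_latent(mu, log_var))   # reconstruction path (used in training)
x_new = decode(rng.normal(size=latent_dim))  # generative path: decoder alone
```

Because the latent space is described by a mean and a standard deviation, sampling any nearby latent vector still decodes to something plausible, which is exactly what makes the decoder usable as a generator.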