
Autoencoders



An autoencoder is composed of two concatenated neural networks, generally trained as a whole:

- Encoder: extracts a selected set of features (the latent space) by compressing the input into a lower-dimensional space.
- Decoder: tries to reconstruct the original input as closely as possible.

Some applications: denoising images, anomaly detection, recommendation engines, generative models, etc.

After a large training phase, we can play with just the decoder part and feed it new latent inputs in order to create generative models. This is the 'magic' component behind the generative features of neural networks.

Plain autoencoders can be difficult to use as generative models, so Variational Autoencoders (VAEs) were introduced to give the latent space a better structure: instead of a single point, the encoder learns a mean and a standard deviation for each latent dimension. Forcing the network to learn the latent space as means and standard deviations gives us a much more natural way to sample new elements (generative models), for example randomly generated new faces.

New faces, but why stop there? Why not increase their resolution? Why only faces? Why not upscale microscopic images of some specimen, or generate synthetic data to augment a small image dataset? There are many more possibilities.
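The encoder/decoder pipeline and the VAE sampling step above can be sketched in a few lines. This is a minimal, untrained illustration with random weights and hypothetical dimensions (8-dimensional inputs, 2-dimensional latent space), chosen only to show the data flow and the reparameterization trick z = mu + sigma * eps, not a real training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 8-dim inputs, 2-dim latent space.
INPUT_DIM, LATENT_DIM = 8, 2

# Encoder: compresses the input into the lower-dimensional latent space.
W_enc = rng.normal(size=(INPUT_DIM, LATENT_DIM))
def encode(x):
    return np.tanh(x @ W_enc)          # latent code

# Decoder: tries to reconstruct the original input from the latent code.
W_dec = rng.normal(size=(LATENT_DIM, INPUT_DIM))
def decode(z):
    return z @ W_dec                   # reconstruction

x = rng.normal(size=(1, INPUT_DIM))
z = encode(x)                          # shape (1, 2): the compressed code
x_hat = decode(z)                      # shape (1, 8): the reconstruction

# VAE-style sampling: the encoder outputs a mean and a (log-)variance
# instead of a single point, and we sample the latent code with the
# reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1).
mu = encode(x)                         # stand-in for a learned mean head
log_var = np.zeros_like(mu)            # stand-in for a learned variance head
eps = rng.normal(size=mu.shape)
z_sampled = mu + np.exp(0.5 * log_var) * eps
new_sample = decode(z_sampled)         # a "generated" output near x's region
```

After training, generation amounts to the last two lines alone: draw a latent vector and run only the decoder, which is exactly the "play with just the decoder" idea described above.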

Further reading: 7 Applications of Auto-Encoders every Data Scientist should know (Medium)