GAN

Generative Adversarial Networks


GANs
• Generative Adversarial Networks
• Generative Models
– We try to learn the underlying distribution from which
our dataset comes.
– Eg: Variational AutoEncoders (VAE)
• Adversarial Training
– GANs are made up of two competing networks
(adversaries) that are trying to beat each other
• Networks
– Neural Networks
Introduction
• Generative Adversarial Networks (GANs) are a powerful class
of neural networks that are used for unsupervised learning.
• They were introduced by Ian J. Goodfellow and his colleagues in 2014.
• GANs are basically a system of two neural network models
that compete with each other and, through this contest, learn
to analyze, capture and copy the variations within a
dataset.
• GANs can generate new examples of whatever data we feed
them, following a learn-generate-improve cycle.
Introduction
• GANs are generative models that generate new samples by
learning the regularities or patterns in the input data.
– Note: generative modeling is an unsupervised learning task in machine
learning
• GANs use a clever way of training a generative model:
the problem is framed as a supervised learning problem with
two sub-models or neural networks:
– generator model – is trained to generate new samples
– discriminator model – tries to classify examples as either real (from the
domain) or fake (generated).
– These two networks compete against each other
• Applications of GANs:
– Image Super-Resolution
– Creating Art.
– Image-to-Image Translation
– Data Augmentation
Working
• Generator:
– The generator takes random noise as input and generates data
samples.
– These generated samples start off as random noise but
gradually become more like the real data from the training set
as the GAN is trained.
– It learns to map the random noise to data samples in a way
that, ideally, it becomes indistinguishable from real data.
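The mapping from noise to samples described above can be sketched as a tiny multilayer perceptron. This is a minimal illustrative sketch, not a real GAN implementation: the layer sizes and the randomly initialised (untrained) weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 8-dim noise -> 2-dim "data" samples.
NOISE_DIM, HIDDEN, DATA_DIM = 8, 16, 2

# Randomly initialised generator weights (untrained, purely illustrative).
W1 = rng.normal(0.0, 0.1, (NOISE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, DATA_DIM))

def generator(z):
    """Map noise vectors z to fake data samples via a tiny MLP."""
    h = np.tanh(z @ W1)   # hidden layer with tanh non-linearity
    return h @ W2         # linear output layer

z = rng.normal(size=(4, NOISE_DIM))   # a batch of 4 noise vectors
fake = generator(z)
print(fake.shape)                     # (4, 2): four generated samples
```

During training, the weights `W1` and `W2` would be updated so that these outputs drift toward the real data distribution.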
• Discriminator:
– The discriminator acts as a classifier.
– Its purpose is to distinguish between real data samples from
the training set and the fake data generated by the generator.
– The discriminator is trained on real data and the generated
data and learns to assign high probabilities to real data and low
probabilities to generated data.
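The discriminator's role as a probability-emitting classifier can likewise be sketched with a single logistic unit; the weights below are hypothetical and untrained, chosen only to show the input/output shape of such a model.

```python
import numpy as np

rng = np.random.default_rng(1)
DATA_DIM = 2

# Illustrative discriminator: one logistic unit over the data features.
w = rng.normal(0.0, 0.1, DATA_DIM)
b = 0.0

def discriminator(x):
    """Return the estimated probability that each sample in x is real."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

real = rng.normal(size=(4, DATA_DIM))   # a batch of 4 samples
p = discriminator(real)
print(p.shape)                          # (4,): one probability per sample
```

Training would push `p` toward 1 for real samples and toward 0 for generated ones.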
Working
• The training process of GANs can be described as a two-player
minimax game
• The generator's objective is to generate data that is convincing
enough to fool the discriminator.
– Its loss function is minimized when the discriminator classifies the
generated data as real.
• The discriminator's objective is to become better at
distinguishing real data from fake data.
– Its loss function is minimized when it correctly classifies real data as real
and generated data as fake.
• During training, the generator and discriminator play this game
in a competitive manner.
• The generator tries to improve its ability to generate realistic
data, while the discriminator aims to improve its ability to
differentiate between real and fake data.
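The two competing objectives above can be sketched with binary cross-entropy. The discriminator probabilities below are made-up numbers (a discriminator that is currently doing well), used only to show how each loss is computed; the generator loss uses the common non-saturating form.

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy between predicted probabilities p and a 0/1 label."""
    eps = 1e-12  # avoid log(0)
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

# Made-up discriminator outputs (probability of "real"):
p_real = np.array([0.9, 0.8])   # on real samples
p_fake = np.array([0.2, 0.1])   # on generated samples

# Discriminator loss: real samples labelled 1, fake samples labelled 0.
d_loss = bce(p_real, 1.0) + bce(p_fake, 0.0)

# Generator loss (non-saturating form): wants fakes to be labelled 1.
g_loss = bce(p_fake, 1.0)

print(round(d_loss, 3), round(g_loss, 3))   # → 0.329 1.956
```

Here the discriminator is confident and accurate, so its loss is low while the generator's loss is high; the generator's updates would then push `p_fake` upward, raising `d_loss` in turn.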
Steps in training
• Define GAN architecture based on application
• Train the discriminator to distinguish real from fake data given
the current ability of the generator.
• Train the generator to produce fake data that can fool the
discriminator.
• Continue alternating discriminator and generator training for
multiple epochs, until generated images are classified incorrectly
(as real) by the discriminator.
• Save the generator model to create new, realistic fake data.
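The training steps above can be sketched end-to-end on a toy problem. This is an illustrative 1-D sketch, not a realistic GAN: the real data, the linear generator x = a·z + b, the logistic discriminator, and all hyperparameters are assumptions chosen so the alternating gradient updates fit in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy "real" data: a 1-D Gaussian with mean 4 (purely illustrative).
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator x = a*z + b, starting far from the data
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    x_real = sample_real(64)
    z = rng.normal(size=64)
    x_fake = a * z + b

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake)
    grad_c = np.mean(p_real - 1) + np.mean(p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step (non-saturating loss): push D(fake) -> 1.
    p_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean((p_fake - 1) * w * z)
    b -= lr * np.mean((p_fake - 1) * w)

# The generator's offset b should drift toward the real mean of 4.
print("generated mean ≈", round(b, 2))
```

After training, sampling `a * z + b` for fresh noise `z` plays the role of "save the generator and create new fake data" in the steps above.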
Why we need GAN
• Most mainstream neural nets can be easily fooled into
misclassifying things when only a small amount of noise is added
to the original data.
• Sometimes, after the noise is added, the model has higher
confidence in the wrong prediction than it had when it predicted
correctly.
• The reason for this adversarial vulnerability is that most machine
learning models learn from a limited amount of data, which is a huge
drawback, as it makes them prone to overfitting.
• Also, the mapping between the input and the output is almost
linear, so even a small change at a point in the feature space
can lead to misclassification of the data.
