Problems with training GANs

As with any technology, there are some problems associated with GANs. These problems generally relate to the training process and include mode collapse, internal covariate shift, and vanishing gradients. Let's look at these in more detail.

Mode collapse

Mode collapse refers to a situation in which the generator network produces samples with little variety, or starts generating the same images over and over. A real-world probability distribution is often multimodal and very complex in nature: it contains data from different observations and has multiple peaks, each corresponding to a different sub-group of samples. Sometimes, GANs fail to model such a multimodal probability distribution of data and suffer from mode collapse. A situation in which all the generated samples are virtually identical is known as complete collapse.
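
To make the idea of a multimodal distribution concrete, here is a small, hypothetical NumPy sketch (not from the book): a 50/50 mixture of two Gaussians has two separate peaks, and a generator suffering from mode collapse would typically cover only one of them.

```python
import numpy as np

# A toy multimodal (bimodal) distribution: a 50/50 mixture of two Gaussians.
# A collapsed generator would cover only one of the two peaks.
rng = np.random.default_rng(0)
modes = rng.integers(0, 2, size=10_000)            # which peak each sample belongs to
samples = np.where(modes == 0,
                   rng.normal(-3.0, 0.5, 10_000),  # peak centred around -3
                   rng.normal(+3.0, 0.5, 10_000))  # peak centred around +3
print("fraction near -3:", np.mean(np.abs(samples + 3) < 1))
print("fraction near +3:", np.mean(np.abs(samples - 3) < 1))
```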

There are many methods that we can use to overcome the mode collapse problem. These include the following:

  • Training multiple models (GANs) for different modes

  • Training GANs on a more diverse set of samples
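
One simple, informal way to spot mode collapse during training is to measure how similar a batch of generated samples is to itself. The following sketch is an illustrative assumption on our part (the `generator` variable and `latent_dim` are placeholders for your own GAN setup): it computes the mean pairwise distance of a generated batch, which drops toward zero when the generator produces near-identical outputs.

```python
import numpy as np

def mean_pairwise_distance(samples):
    """Mean Euclidean distance between all pairs of samples in a batch.

    A value close to zero suggests that the generator is producing
    near-identical outputs, which is a sign of mode collapse.
    """
    flat = samples.reshape(len(samples), -1)
    distances = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            distances.append(np.linalg.norm(flat[i] - flat[j]))
    return float(np.mean(distances))

# Hypothetical usage with your own trained generator and latent dimension:
# noise = np.random.normal(0, 1, (64, latent_dim))
# fake_images = generator.predict(noise)
# print(mean_pairwise_distance(fake_images))
```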

Vanishing gradients

During backpropagation, the gradient flows backward, from the final layer to the first layer. As it flows backward, it gets increasingly smaller. Sometimes, the gradient becomes so small that the initial layers learn very slowly or stop learning completely. In this case, the gradient barely changes the weight values of the initial layers, so their training is effectively stopped. This is known as the vanishing gradients problem.

This problem gets worse if we train a bigger network with gradient-based optimization methods. Gradient-based optimization methods update a parameter by measuring how much the network's output changes when we change the parameter's value by a small amount. If a change in the parameter's value causes only a small change in the network's output, the weight update will be very small, so the network effectively stops learning.

This is also a problem when we use activation functions such as Sigmoid and Tanh. The Sigmoid activation function squashes its input to a range between 0 and 1, converting large positive values of x to approximately 1 and large negative values of x to approximately 0. The Tanh activation function squashes its input to a range between -1 and 1, converting large positive input values to approximately 1 and large negative values to approximately -1. When we apply backpropagation, we use the chain rule of differentiation, which has a multiplying effect: because the derivatives of these functions are small (at most 0.25 for Sigmoid and 1 for Tanh), the gradient (the error) decreases exponentially as we reach the initial layers of the network, causing the vanishing gradients problem.
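
The following minimal sketch (not from the book) illustrates this multiplying effect numerically: the derivative of the Sigmoid function is at most 0.25, so even in the best case the gradient factor shrinks exponentially with depth.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # maximum value is 0.25, reached at x == 0

# Best case: every layer contributes the maximum derivative of 0.25.
# The chain rule multiplies these factors, so the gradient collapses quickly.
gradient = 1.0
for layer in range(1, 11):
    gradient *= sigmoid_derivative(0.0)
    print(f"after layer {layer}: gradient factor = {gradient:.10f}")
```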

To overcome this problem, we can use activation functions such as ReLU, LeakyReLU, and PReLU. The gradients of these activation functions don't saturate during backpropagation, which allows neural networks to train efficiently. Another solution is to use batch normalization, which normalizes the inputs to the hidden layers of the network.
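
For example, a discriminator's hidden layers can use LeakyReLU instead of Sigmoid or Tanh. This is only a minimal Keras sketch under assumed layer sizes (a 784-unit flattened input and two hidden layers), not the exact architecture used in this book:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU

# Hidden layers use LeakyReLU, whose gradient does not saturate for
# large inputs the way Sigmoid and Tanh do; only the output layer uses
# a sigmoid to produce a real/fake probability.
discriminator = Sequential([
    Dense(512, input_dim=784),
    LeakyReLU(alpha=0.2),
    Dense(256),
    LeakyReLU(alpha=0.2),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')
discriminator.summary()
```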

Internal covariate shift

An internal covariate shift occurs when there is a change in the distribution of the inputs to our network. When the input distribution changes, the hidden layers have to adapt to the new distribution, which slows down training and makes it take longer to converge to a global minimum. The problem is most severe when the statistical distribution of the inputs a layer receives is drastically different from what it has seen before. Batch normalization and other normalization techniques can solve this problem. We will explore these in the following sections.
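
As a minimal sketch (the layer sizes and the 100-dimensional noise input are illustrative assumptions, not the book's architecture), batch normalization layers can be inserted after the hidden layers of, say, a generator, so that each subsequent layer receives inputs with a roughly stable distribution:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization, Dense, LeakyReLU

# BatchNormalization re-normalizes the activations of each hidden layer
# per mini-batch, which reduces the internal covariate shift seen by
# the layers that follow it.
generator = Sequential([
    Dense(256, input_dim=100),
    LeakyReLU(alpha=0.2),
    BatchNormalization(momentum=0.8),
    Dense(512),
    LeakyReLU(alpha=0.2),
    BatchNormalization(momentum=0.8),
    Dense(784, activation='tanh'),
])
generator.summary()
```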
