Basic building block – loss functions


Every neural network needs certain structural components in order to train. Training is the process of tuning the network's weights to optimize the loss function for the given problem. The loss function selected for the neural network is therefore essential to ensuring that the network converges and produces good results.
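As a quick, generic illustration (a toy Keras sketch, not a recipe from this book), the loss function is what you specify when compiling a model; training then minimizes exactly that quantity:

```python
from tensorflow.keras import layers, models

# A toy classifier; the loss passed to compile() is the quantity
# that training will tune the weights to minimize.
model = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(8,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```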

How to do it...

The generator is a neural network, and it requires a loss function. So, what kind of loss function should we employ in this architecture? That's almost as fundamental a question as which car you should drive. The loss function must be chosen so that the generator can converge, with the caveat that the right choice depends on what you want the generator to accomplish.

How it works...

Each of the diverse architectures we'll cover in this book will use different tools to get different results. Take, for instance, the generator loss function from the initial GAN paper by Goodfellow and his associates: 

$$\min_G \; \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

Loss function used with the Generator in adversarial training

This equation simply states that the generator is minimizing the log probability of the discriminator being correct; it is one half of the adversarial mode of training. The loss function of the generator matters here: gradient saturation, an issue that occurs when the learning gradients are near zero and make learning nearly impossible, can arise from a poorly designed loss function. Selecting the correct loss function is imperative even for the generator.
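A small numerical sketch, assuming a sigmoid discriminator output, makes the saturation visible: the saturating loss log(1 - D(G(z))) has gradient -D(G(z)) with respect to the discriminator's logit on a fake sample, while the commonly used non-saturating alternative -log D(G(z)) has gradient D(G(z)) - 1:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Discriminator logits for fake samples; very negative logits mean the
# discriminator confidently rejects the fakes (typical early in training).
logits = np.linspace(-6.0, 6.0, 7)
d = sigmoid(logits)  # D(G(z))

# Saturating (minimax) loss log(1 - D(G(z))): its gradient w.r.t. the
# logit is -D(G(z)), which vanishes as D(G(z)) -> 0.
grad_saturating = -d

# Non-saturating loss -log(D(G(z))): its gradient w.r.t. the logit is
# D(G(z)) - 1, which stays near -1 as D(G(z)) -> 0.
grad_non_saturating = d - 1.0

for a, gs, gn in zip(logits, grad_saturating, grad_non_saturating):
    print(f"logit {a:+5.1f}: saturating {gs:+.4f}, non-saturating {gn:+.4f}")
```

When the discriminator easily rejects early fakes, the saturating gradient is nearly zero and the generator stops learning, which is why the non-saturating form is usually preferred in practice.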

Now, let's check out the loss function of the discriminator from the Goodfellow paper:

$$J^{(D)} = -\tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] - \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

Standard cross-entropy implementation applied to GANs

This is a standard cross-entropy implementation. What is unique is not the equation itself but how it is trained: the discriminator sees multiple mini-batches, mixing real data labeled as real with generated data labeled as fake. We'll talk about that in a later section of this chapter.
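To make that concrete, here is a minimal sketch of one discriminator update over paired real and fake mini-batches (toy Keras models and random placeholder data stand in for a real recipe; this is illustrative, not the book's code):

```python
import numpy as np
from tensorflow.keras import layers, models

latent_dim, batch_size = 100, 32

# Toy stand-ins for the real models (illustrative shapes only)
generator = models.Sequential(
    [layers.Dense(784, activation="tanh", input_shape=(latent_dim,))])
discriminator = models.Sequential(
    [layers.Dense(1, activation="sigmoid", input_shape=(784,))])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# One discriminator update: real mini-batch labeled 1, fake mini-batch labeled 0
real_images = np.random.rand(batch_size, 784)  # placeholder for real data
noise = np.random.normal(0.0, 1.0, (batch_size, latent_dim))
fake_images = generator.predict(noise, verbose=0)

d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
d_loss = 0.5 * (d_loss_real + d_loss_fake)  # average over both halves
```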

 

As mentioned before, the discriminator acts as a learned loss function for the overall architecture. When building each of the models, though, and especially in paired GAN architectures, multiple loss functions are necessary. In this case, let's define a template class to store these loss methods.
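A minimal sketch of such a template, assuming the Keras backend for the math (the method names are illustrative, not a fixed interface), could look like this:

```python
from tensorflow.keras import backend as K

class Loss:
    """Template collecting the loss methods a recipe may need.

    Each recipe implements only the methods it actually uses; the
    names here are illustrative placeholders, not a fixed interface.
    """

    @staticmethod
    def binary_cross_entropy(y_true, y_pred):
        # Standard discriminator loss over real (1) / fake (0) labels
        return K.mean(K.binary_crossentropy(y_true, y_pred))

    @staticmethod
    def non_saturating_generator(y_pred):
        # Non-saturating generator loss: -log D(G(z))
        return -K.mean(K.log(y_pred + K.epsilon()))
```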

The class template for loss functions; each method is implemented optionally, depending on which loss functions a given recipe uses

During the development of these recipes, we are going to come back to these templates over and over again. A bit of standardization in the code base will go a long way toward ensuring that your code remains readable and maintainable.
