You're reading from Applied Deep Learning with Keras

Product type: Book
Published in: Apr 2019
Reading level: Intermediate
ISBN-13: 9781838555078
Edition: 1st Edition
Authors (3):

Ritesh Bhagwat

Ritesh Bhagwat has a master's degree in applied mathematics with a specialization in computer science. He has over 14 years of experience in data-driven technologies and has led and been a part of complex projects ranging from data warehousing and business intelligence to machine learning and artificial intelligence. He has worked with top-tier global consulting firms as well as large multinational financial institutions. Currently, he works as a data scientist. Besides work, he enjoys playing and watching cricket and loves to travel. He is also deeply interested in Bayesian statistics.

Mahla Abdolahnejad

Mahla Abdolahnejad is a Ph.D. candidate in systems and computer engineering with Carleton University, Canada. She also holds a bachelor's degree and a master's degree in biomedical engineering, which first exposed her to the field of artificial intelligence and artificial neural networks, in particular. Her Ph.D. research is focused on deep unsupervised learning for computer vision applications. She is particularly interested in exploring the differences between a human's way of learning from the visual world and a machine's way of learning from the visual world, and how to push machine learning algorithms toward learning and thinking like humans.

Matthew Moocarme

Matthew Moocarme is an accomplished data scientist with more than eight years of experience in creating and utilizing machine learning models. He comes from a background in the physical sciences, in which he holds a Ph.D. in physics from the Graduate Center of CUNY. Currently, he leads a team of data scientists and engineers in the media and advertising space to build and integrate machine learning models for a variety of applications. In his spare time, Matthew enjoys sharing his knowledge with the data science community through published works, conference presentations, and workshops.


Chapter 5. Improving Model Accuracy

Note

Learning Objectives

By the end of this chapter, you will be able to:

  • Explain the concept of regularization

  • Explain the procedures of different regularization techniques

  • Apply L1 and L2 regularization to improve accuracy

  • Apply dropout regularization to improve accuracy

  • Describe grid search and random search hyperparameter optimizers in scikit-learn

  • Use hyperparameter tuning in scikit-learn to improve model accuracy

Note

In this chapter, we will learn about the concept of regularization and different regularization techniques. We will then use regularization to improve accuracy. We will also learn how to use hyperparameter tuning to improve model accuracy.

Introduction


Deep learning is not only about building neural networks, training them using an available dataset, and reporting the model accuracy. It involves trying to understand your model and the dataset, as well as moving beyond a basic model by improving it in many aspects. In this chapter, you will learn about two very important groups of techniques for improving machine learning models in general, and deep learning models in particular. These techniques are regularization methods and hyperparameter tuning.

Regarding regularization methods, we'll first answer the questions of why we need them and how they help. We'll then introduce two of the most important and most commonly used regularization techniques. You'll learn in detail about parameter regularization and its two variations, L1 and L2 norm regularization. You will then learn about a regularization technique designed specifically for neural networks, called dropout regularization. You will also practice implementing each...

Regularization


Since deep neural networks are highly flexible models, overfitting is an issue that often arises when training them. An essential part of becoming a deep learning expert is therefore knowing how to detect overfitting and, subsequently, how to address it in your model. Regularization techniques are a group of methods aimed specifically at reducing overfitting in machine learning models. Understanding these techniques thoroughly and being able to apply them to your deep neural networks is an essential step toward building models that solve real-life problems. In this section, you will learn about the underlying concepts of regularization, which will give you the foundation required for the following sections, where you will learn how to implement various types of regularization methods using Keras.

The Need for Regularization

The main goal of machine learning is to build models that perform well on not only...

L1 and L2 Regularization


The most common type of regularization for deep learning models is the one that keeps the weights of the network small. This type of regularization is called weight regularization and has two different variations: L2 regularization and L1 regularization. In this section, you will learn about these regularization methods in detail, along with how to implement them in Keras. Additionally, you will practice applying them to real-life problems and observe how they can improve the performance of a model.

L1 and L2 Regularization Formulation

In weight regularization, a penalizing term is added to the loss function. This term is either the L2 norm of the weights (the sum of their squared values) or the L1 norm of the weights (the sum of their absolute values). If the L1 norm is used, it is called L1 regularization; if the L2 norm is used, it is called L2 regularization. In either case, the sum is multiplied by a hyperparameter called the regularization parameter (lambda).

Therefore...
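The penalized loss described above can be sketched numerically. The snippet below is a minimal illustration using hypothetical weight, loss, and lambda values (not taken from the book's examples); it computes both the L1- and L2-regularized losses for a small weight vector.

```python
import numpy as np

# Hypothetical weight vector for a single layer (illustrative values only)
weights = np.array([0.5, -1.2, 0.8, -0.3])
base_loss = 2.0   # loss before regularization (assumed)
lam = 0.01        # regularization parameter (lambda)

l1_penalty = lam * np.sum(np.abs(weights))   # L1: sum of absolute values
l2_penalty = lam * np.sum(weights ** 2)      # L2: sum of squared values

loss_l1 = base_loss + l1_penalty
loss_l2 = base_loss + l2_penalty
print(round(loss_l1, 4), round(loss_l2, 4))  # → 2.028 2.0242
```

In Keras, the same idea is applied per layer rather than by hand, via the `kernel_regularizer` argument, for example `Dense(8, kernel_regularizer=keras.regularizers.l2(0.01))`.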

Dropout Regularization


In this section, you will learn about how dropout regularization works, how it helps with reducing overfitting, and how to implement it using Keras. Lastly, you will have the chance to practice what you have learned about dropout by completing an activity involving a real-life dataset.

Principles of Dropout Regularization

Dropout regularization works by randomly removing nodes from a neural network during training. More precisely, dropout assigns a probability to each node that determines the chance of that node being removed from the network at each iteration of the learning algorithm. Imagine we have a large neural network in which a dropout chance of 0.5 is assigned to each node. At each iteration, the learning algorithm then flips a coin for each node to decide whether that node will be removed from the network or not. An illustration of such a process is shown in the following figure. This process is repeated at each iteration; this means that at each iteration...
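The coin-flipping process described above can be sketched in plain NumPy. This is an illustrative simulation with assumed values, not Keras's internal implementation; it also shows the common "inverted dropout" scaling, in which the kept activations are divided by the keep probability so that their expected sum is unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed for reproducibility
rate = 0.5                        # dropout chance assigned to each node
activations = np.ones(10)         # hypothetical node activations

# "Flip a coin" for every node: True means the node is kept this iteration
keep_mask = rng.random(activations.shape) >= rate
dropped = activations * keep_mask

# Inverted dropout: scale the surviving nodes by 1 / (1 - rate)
scaled = dropped / (1.0 - rate)
print(int(keep_mask.sum()), "nodes kept out of", activations.size)
```

In Keras, this is handled by inserting a `Dropout(0.5)` layer into the model; the mask is applied automatically during training and disabled at prediction time.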

Other Regularization Methods


In this section, you will learn briefly about some other regularization techniques that are commonly used and have been shown to be effective in deep learning. It is important to keep in mind that regularization is a wide-ranging and active research field in machine learning. As a result, covering all available regularization methods in one chapter is not possible (and most likely not necessary, especially in a book on applied deep learning). Therefore, in this section, we will briefly cover three more regularization methods, called early stopping, data augmentation, and adding noise. You will learn briefly about their underlying ideas, and you'll gain a few tips and recommendations on how to use them.

Early Stopping

We discussed earlier in this chapter that the main assumption in machine learning is that there is a true function/process that produces training examples. However, this process is unknown and there is no explicit way to find it. Not only is there...
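The core logic of early stopping can be sketched as a simple "patience" loop over recorded validation losses. The loss values below are hypothetical; in practice, Keras provides this behavior through its `EarlyStopping` callback rather than a manual loop.

```python
# Hypothetical per-epoch validation losses (illustrative values only)
val_losses = [0.90, 0.72, 0.61, 0.58, 0.59, 0.60, 0.62, 0.65]
patience = 2   # stop after this many epochs without improvement

best_loss = float("inf")
best_epoch = 0
wait = 0
for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        # Validation loss improved: remember this epoch, reset the counter
        best_loss, best_epoch, wait = loss, epoch, 0
    else:
        wait += 1
        if wait >= patience:
            break   # no improvement for `patience` epochs: stop training

print(best_epoch, best_loss)  # → 3 0.58
```

The equivalent in Keras would be `keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)`, passed to `model.fit(...)` via the `callbacks` argument.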

Hyperparameter Tuning with scikit-learn


Hyperparameter tuning is a very important technique for improving the performance of deep learning models. In Chapter 4, Evaluating your Model with Cross Validation with Keras Wrappers, you learned about using a Keras wrapper with scikit-learn, which allows for Keras models to be used in a scikit-learn workflow. As a result, different general machine learning and data analysis tools and methods available in scikit-learn can be applied to Keras deep learning models. Among those methods are scikit-learn hyperparameter optimizers. In the previous chapter, you learned how to perform hyperparameter tuning by writing user-defined functions to loop over possible values for each hyperparameter. In this section, you will learn how to perform it in a much easier way by using various hyperparameter optimization methods available in scikit-learn. You will also get to practice applying those methods by completing an activity involving a real-life dataset.

Grid Search...
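As a minimal sketch of the scikit-learn grid search API, the snippet below tunes a single hyperparameter of a plain scikit-learn estimator on synthetic data, for brevity. With a Keras model, you would first wrap it with `KerasClassifier` and put model hyperparameters (such as the number of epochs or hidden units) in `param_grid`; the workflow is otherwise the same.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic classification data (stand-in for a real dataset)
X, y = make_classification(n_samples=200, random_state=0)

# Candidate values for the hyperparameter being tuned
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}

# GridSearchCV fits the model once per candidate value, with 3-fold CV
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

`RandomizedSearchCV` follows the same interface but samples a fixed number of candidates from the grid (or from distributions) instead of trying every combination.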

Summary


In this chapter, you learned about two very important groups of techniques for improving the accuracy of your deep learning models: regularization techniques and hyperparameter-tuning techniques. You learned about how regularization helps address the overfitting problem, and had an introduction to different regularization methods. Among those methods, L1 and L2 norm regularization and dropout regularization were covered in detail, since they are very important, commonly used regularization techniques. You also learned about the importance of hyperparameter tuning for machine learning models and saw how performing hyperparameter tuning is highly challenging for deep learning models in particular. You learned how to perform hyperparameter tuning on Keras models more easily using scikit-learn optimizers.

In the next chapter, you will learn about the limitations of accuracy metrics when evaluating model performance. You will also learn about other metrics, such as precision, sensitivity...
