Data Imbalance in Deep Learning

Class-imbalanced data is a common issue for deep learning models. When one or more classes have significantly fewer samples, model performance can suffer because the model tends to prioritize learning from the majority class, resulting in poor generalization for the minority class(es).

A lot of real-world data is imbalanced, which presents challenges to deep learning classification tasks. Figure 6.1 shows some common categories of imbalanced data problems in various deep learning applications:

Figure 6.1 – Some common categories of imbalanced data problems

We will cover the following topics in this chapter:

  • A brief introduction to deep learning
  • Data imbalance in deep learning
  • Overview of deep learning techniques to handle data imbalance
  • Multi-label classification

By the end of this chapter, we’ll have a foundational understanding of deep learning and neural networks...

Technical requirements

In this chapter, we will utilize common libraries such as numpy, scikit-learn, and PyTorch. PyTorch is an open source machine learning library that’s used for deep learning tasks and has grown in popularity recently because of its flexibility and ease of use.

You can install PyTorch using pip or conda. Visit the official PyTorch website (https://pytorch.org/get-started/locally/) to get the appropriate command for your system configuration.
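
For example, a typical installation with pip might look as follows (the exact command varies by operating system, accelerator, and CUDA version, so always confirm it on the PyTorch website):

pip install torch torchvision
pip install numpy scikit-learn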

The code and notebooks for this chapter are available on GitHub at https://github.com/PacktPublishing/Machine-Learning-for-Imbalanced-Data/tree/main/chapter06.

A brief introduction to deep learning

Deep learning is a subfield of machine learning that focuses on artificial neural networks with multiple layers (deep models typically have three or more layers, including input, output, and hidden layers). These models have demonstrated remarkable capabilities in various applications, including image and speech recognition, natural language processing, and autonomous driving.
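
To make the idea of multiple layers concrete, here is a minimal illustrative sketch of a small feed-forward network in PyTorch with a single hidden layer (the layer sizes and the random input batch are arbitrary placeholders):

import torch
import torch.nn as nn

# A tiny feed-forward network: input layer -> hidden layer -> output layer.
# The sizes (20 input features, 64 hidden units, 2 classes) are illustrative only.
model = nn.Sequential(
    nn.Linear(20, 64),   # input -> hidden
    nn.ReLU(),           # non-linear activation
    nn.Linear(64, 2),    # hidden -> output (class logits)
)

x = torch.randn(8, 20)   # a batch of 8 random examples
logits = model(x)        # forward pass
print(logits.shape)      # torch.Size([8, 2])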

Deep learning’s success on “big data” problems (large volumes of structured or unstructured data, often challenging to manage with traditional data processing software) was greatly aided by the development of Graphics Processing Units (GPUs), which were initially designed for graphics processing.

In this section, we will provide a concise introduction to the foundational elements of deep learning, discussing only what is necessary for the problems associated with data imbalance in deep learning. For a more in-depth introduction, we recommend referring to a more...

Data imbalance in deep learning

While many classical machine learning problems on tabular data involve only two classes, with the focus on predicting the minority class, this is not the norm in domains where deep learning is typically applied, such as computer vision and NLP.

Even benchmark datasets such as MNIST (a collection of grayscale images of handwritten digits from 0 to 9) and CIFAR-10 (color images spanning 10 different classes) have 10 classes to predict. So, we can say that multi-class classification is typical in problems that use deep learning models.
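
As a quick illustration, we can inspect MNIST’s class distribution with torchvision; the training set is roughly balanced, with around 6,000 images per digit:

from collections import Counter
from torchvision import datasets

# Download MNIST and count how many training images each digit class has.
train_set = datasets.MNIST(root="data", train=True, download=True)
counts = Counter(int(label) for label in train_set.targets)
for digit in sorted(counts):
    print(digit, counts[digit])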

Such data skew or imbalance can severely impact model performance. It is worth revisiting the typical kinds of dataset imbalance discussed in Chapter 1, Introduction to Data Imbalance in Machine Learning. To simulate real-world data imbalance scenarios, two types of imbalance are usually investigated in the literature (a small sketch simulating one of them follows the list):

  • Step imbalance: All the minority classes have the...
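
As an illustration, the following sketch builds a step-imbalanced subset of MNIST by keeping all samples of some classes and only a small, equal number of samples of the remaining (minority) classes; the choice of minority digits and the count of 500 are arbitrary placeholders:

import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Build a step-imbalanced MNIST training set: digits 0-4 are kept in full
# ("majority"), while digits 5-9 are reduced to 500 samples each ("minority").
full_train = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
minority_classes = {5, 6, 7, 8, 9}
samples_per_minority_class = 500

keep_indices = []
kept_so_far = {c: 0 for c in minority_classes}
for idx, label in enumerate(full_train.targets.tolist()):
    if label not in minority_classes:
        keep_indices.append(idx)          # keep every majority-class sample
    elif kept_so_far[label] < samples_per_minority_class:
        keep_indices.append(idx)          # keep only 500 samples per minority class
        kept_so_far[label] += 1

imbalanced_train = Subset(full_train, keep_indices)
print(len(full_train), len(imbalanced_train))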

Overview of deep learning techniques to handle data imbalance

Much like in the first half of this book, where we focused on classical machine learning techniques, the major categories include sampling techniques, cost-sensitive techniques, threshold-adjustment techniques, or a combination of these (a short PyTorch sketch after this list illustrates the first two):

  • The sampling techniques comprise either undersampling the majority class or oversampling the minority class data. Data augmentation is a fundamental technique in computer vision problems that’s used to increase the diversity of the training set. While not directly an oversampling method aimed at addressing class imbalance, data augmentation does have the effect of expanding the training data. We will discuss these techniques in more detail in Chapter 7, Data-Level Deep Learning Methods.
  • Cost-sensitive techniques usually involve changing the model loss function in some way to accommodate the higher cost of misclassifying the minority class examples. Some standard...
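
As a quick preview, the following sketch shows two of these ideas in PyTorch on a toy imbalanced dataset: a WeightedRandomSampler that oversamples the minority class during batching, and a class-weighted cross-entropy loss that penalizes minority-class errors more heavily (the class counts and weighting scheme here are illustrative):

import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

# Toy imbalanced dataset: 900 samples of class 0, 100 samples of class 1.
features = torch.randn(1000, 20)
labels = torch.cat([torch.zeros(900, dtype=torch.long),
                    torch.ones(100, dtype=torch.long)])
dataset = TensorDataset(features, labels)

# Sampling-based: draw each sample with probability inversely proportional
# to its class frequency, so minority examples are seen more often per epoch.
class_counts = torch.bincount(labels)                # tensor([900, 100])
sample_weights = 1.0 / class_counts[labels].float()  # one weight per sample
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

# Cost-sensitive: weight the loss so minority-class mistakes cost more.
class_weights = class_counts.sum() / (2.0 * class_counts.float())
criterion = nn.CrossEntropyLoss(weight=class_weights)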

Multi-label classification

Multi-label classification is a classification task where each instance can be assigned to multiple classes or labels simultaneously. In other words, an instance can belong to more than one category or have multiple attributes. For example, a movie can belong to multiple genres, such as action, comedy, and romance. Similarly, an image can have multiple objects in it (Figure 6.14):

Figure 6.14 – Multi-label image classification with prediction probabilities shown

But how is it different from multi-class classification? Multi-class classification is a classification task where each instance can be assigned to only one class or label. In this case, the classes or categories are mutually exclusive, meaning an instance can belong to just one category. For example, a handwritten digit recognition task would be multi-class since each digit can belong to only one class (0-9).
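
The difference shows up directly in how the model head and the loss function are set up. The following illustrative sketch contrasts a multi-class setup, which uses a softmax over mutually exclusive classes with cross-entropy loss, with a multi-label setup, which applies an independent sigmoid per label with binary cross-entropy (the logits and targets are random placeholders):

import torch
import torch.nn as nn

logits = torch.randn(4, 5)  # a batch of 4 examples, 5 classes/labels

# Multi-class: exactly one class per example; softmax + cross-entropy.
multiclass_targets = torch.tensor([0, 3, 1, 4])          # one class index per example
multiclass_loss = nn.CrossEntropyLoss()(logits, multiclass_targets)
predicted_class = logits.softmax(dim=1).argmax(dim=1)    # a single prediction each

# Multi-label: any subset of labels per example; sigmoid + binary cross-entropy.
multilabel_targets = torch.randint(0, 2, (4, 5)).float() # 0/1 indicator per label
multilabel_loss = nn.BCEWithLogitsLoss()(logits, multilabel_targets)
predicted_labels = (logits.sigmoid() > 0.5).int()        # possibly several labels each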

In summary, the main difference between multi-label...

Summary

Deep learning has become essential in many fields, from computer vision and natural language processing to healthcare and finance. This chapter has provided a brief introduction to the core concepts and techniques in deep learning. We talked about PyTorch, the fundamentals of deep learning, activation functions, and data imbalance challenges. We also got a bird’s-eye view of the various techniques we will discuss in the following few chapters.

Understanding these fundamentals will equip you with the knowledge necessary to explore more advanced topics and applications and ultimately contribute to the ever-evolving world of deep learning.

In the next chapter, we will look at data-level deep learning methods.

Questions

  1. What are some challenges in porting data imbalance handling methods from classical machine learning models to deep learning models?
  2. How could an imbalanced version of the MNIST dataset be created?
  3. Use the MNIST dataset to train a CNN model with varying degrees of imbalance in the data. Record the model’s overall accuracy on a fixed test set. Plot how the overall accuracy changes as the imbalance in the training data increases. Observe whether the overall accuracy declines as the training data becomes more imbalanced.
  4. What is the purpose of using random oversampling with deep learning models?
  5. What are some of the data augmentation techniques that can be applied when dealing with limited or imbalanced data?
  6. How does undersampling work in handling data imbalance, and what are its limitations?
  7. Why is it important to ensure that the data augmentation techniques preserve the original labels of the dataset?
