The Deep Learning Architect's Handbook

You're reading from The Deep Learning Architect's Handbook

Product type: Book
Published in: Dec 2023
Publisher: Packt
ISBN-13: 9781803243795
Pages: 516
Edition: 1st
Author: Ee Kin Chin

Table of Contents (25 chapters)

Preface

Part 1 – Foundational Methods
  • Chapter 1: Deep Learning Life Cycle
  • Chapter 2: Designing Deep Learning Architectures
  • Chapter 3: Understanding Convolutional Neural Networks
  • Chapter 4: Understanding Recurrent Neural Networks
  • Chapter 5: Understanding Autoencoders
  • Chapter 6: Understanding Neural Network Transformers
  • Chapter 7: Deep Neural Architecture Search
  • Chapter 8: Exploring Supervised Deep Learning
  • Chapter 9: Exploring Unsupervised Deep Learning

Part 2 – Multimodal Model Insights
  • Chapter 10: Exploring Model Evaluation Methods
  • Chapter 11: Explaining Neural Network Predictions
  • Chapter 12: Interpreting Neural Networks
  • Chapter 13: Exploring Bias and Fairness
  • Chapter 14: Analyzing Adversarial Performance

Part 3 – DLOps
  • Chapter 15: Deploying Deep Learning Models to Production
  • Chapter 16: Governing Deep Learning Models
  • Chapter 17: Managing Drift Effectively in a Dynamic Environment
  • Chapter 18: Exploring the DataRobot AI Platform
  • Chapter 19: Architecting LLM Solutions

Index
Other Books You May Enjoy

Analyzing Adversarial Performance

An adversary, in the context of machine learning models, refers to an entity or system that actively seeks to exploit or undermine the performance, integrity, or security of these models. They can be malicious actors, algorithms, or systems designed to target vulnerabilities within machine learning models. Adversaries perform adversarial attacks, where they intentionally input misleading or carefully crafted data to deceive the model and cause it to make incorrect or unintended predictions.

Adversarial attacks can range from subtle perturbations of input data to sophisticated methods that exploit the vulnerabilities of specific algorithms. The objectives of adversaries can vary depending on the context. They may attempt to bypass security measures, gain unauthorized access, steal sensitive information, or cause disruption in the model’s intended functionality. Adversaries can also target the fairness and ethics of machine learning models...

Technical requirements

This chapter includes practical implementations in the Python programming language. To follow along, you will need a computer with the following libraries installed:

  • matplotlib
  • scikit-learn
  • numpy
  • pytorch
  • accelerate==0.15.0
  • captum
  • catalyst
  • adversarial-robustness-toolbox
  • torchvision
  • pandas

The code files for this chapter are available on GitHub: https://github.com/PacktPublishing/The-Deep-Learning-Architect-Handbook/tree/main/CHAPTER_14.

Using data augmentations for adversarial analysis

The core of the adversarial performance analysis method focuses on utilizing data augmentations. Data augmentation refers to the process of introducing realistic variations to existing data programmatically. Data augmentations are commonly employed during the model training process to enhance the validation performance and generalizability of deep learning models. However, we can also leverage augmentations as an evaluation method to ensure the robustness of performance under various conditions. By applying augmentations during evaluation, practitioners can obtain a more detailed and comprehensive estimation of the model’s performance when deployed in production.

Adversarial performance analysis offers two main advantages. First, it assists in building a more generalizable model by enabling better model selection, both during validation while training and, after training, when comparing multiple trained models. This is achieved through the...
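To make the idea of augmentation-based evaluation concrete, here is a minimal, self-contained sketch. It uses scikit-learn's digits dataset and a logistic regression classifier purely as stand-ins for a real model and validation set (neither appears in this chapter), and additive Gaussian noise as a simple evaluation-time augmentation:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a simple classifier on clean data (a stand-in for any trained model)
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def accuracy_under_noise(model, X, y, sigma, seed=0):
    """Score the model on inputs perturbed by additive Gaussian noise,
    a minimal stand-in for an evaluation-time augmentation."""
    rng = np.random.default_rng(seed)
    return model.score(X + rng.normal(0.0, sigma, size=X.shape), y)

# Sweep augmentation severity to profile robustness, not just clean accuracy
robustness_profile = {s: accuracy_under_noise(clf, X_val, y_val, s)
                      for s in (0.0, 2.0, 4.0, 8.0)}
```

Plotting or tabulating such a severity-versus-accuracy profile for each candidate model is what allows the more robust model to be selected, rather than the one that merely scores best on clean validation data.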

Analyzing adversarial performance for audio-based models

Adversarial analysis for audio-based models requires audio augmentations. In this section, we will leverage the open source audiomentations library to apply audio augmentation methods, and we will practically analyze the adversarial accuracy-based performance of a speech recognition model. The accuracy metric we'll use is the Word Error Rate (WER), a metric commonly used in automatic speech recognition and machine translation systems. It measures the dissimilarity between a system's output and the reference transcription or translation by dividing the sum of word substitutions, insertions, and deletions by the total number of reference words, resulting in a percentage value. The formula for WER is as follows:

WER = (S + I + D) / N

Here, we have the following:

  • S represents the number of word substitutions
  • I represents the number of word insertions
  • D represents the number...
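The WER formula above can be computed directly with a word-level edit distance. The helper below is an illustrative implementation, not the chapter's code; in practice, dedicated libraries (for example, jiwer) provide this metric:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER = (S + I + D) / N via edit distance over words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, comparing "the cat sat on the mat" against "the cat sit on mat" yields one substitution and one deletion over six reference words, a WER of about 0.33.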

Analyzing adversarial performance for image-based models

Augmentations-based adversarial analysis can also be applied to image-based models. The key here is to discover possible degradations in accuracy-based performance under conditions that are not present in the original validation dataset. Here are some examples of components that could be evaluated through augmentations in the image domain:

  • Object of interest size: In use cases with CCTV camera image input, adversarial analysis can help us set up the camera at an appropriate distance so that optimal performance is achieved. The original image can be iteratively resized to various sizes and overlaid on top of a black base image to perform the analysis.
  • The roll orientation of the object of interest: Pitch and yaw orientations are not straightforward to augment. However, rotation augmentation can help stress test roll orientation performance. Optimal performance can be enforced by any pose orientation detection model or system...
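As a sketch of the first bullet's resize-and-overlay idea, the hypothetical helper below shrinks an image and pastes it centred on a black canvas of the original size; it uses plain NumPy nearest-neighbour sampling to avoid extra dependencies (a real pipeline would likely use torchvision or PIL resizing instead):

```python
import numpy as np

def shrink_onto_black(img: np.ndarray, scale: float) -> np.ndarray:
    """Simulate a smaller/more distant object of interest: downscale the
    image and paste it centred on a black canvas of the original size."""
    h, w = img.shape[:2]
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour resize via index sampling
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    small = img[rows][:, cols]
    canvas = np.zeros_like(img)
    top, left = (h - new_h) // 2, (w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = small
    return canvas
```

Sweeping `scale` from 1.0 downward and recording the model's accuracy at each step reveals the smallest apparent object size the model can still handle, which in turn informs camera placement.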

Exploring adversarial analysis for text-based models

Text-based models can sometimes have performance vulnerabilities related to the use of certain words, a specific inflection of a word stem, or a different form of the same word. Here's an example:

Supervised Use Case: Sentiment Analysis
Prediction Row: {"Text": "I love this product!", "Sentiment": "Positive"}
Adversarial Example: {"Text": "I l0ve this product!", "Sentiment": "Negative"}

So, adversarial analysis can be done by benchmarking performance with and without such important or perturbed words in a sentence. To mitigate such attacks, similar-word replacement augmentation can be applied during training.
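A perturbation like the "l0ve" example above can be generated programmatically for benchmarking. The following is a toy, hypothetical helper that swaps characters for look-alike digits; real text augmentation libraries offer far richer transforms:

```python
import random

# Common look-alike character substitutions (lowercase keys)
LEET = {"o": "0", "e": "3", "i": "1", "a": "4", "s": "5"}

def leet_perturb(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Randomly swap characters for look-alike digits, a simple
    adversarial-style perturbation for stress-testing text models."""
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    out = []
    for ch in text:
        sub = LEET.get(ch.lower())
        if sub is not None and rng.random() < rate:
            out.append(sub)
        else:
            out.append(ch)
    return "".join(out)
```

Scoring the model on both the original and the perturbed sentences, and comparing the two accuracies, quantifies how sensitive it is to this class of character-level attack.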

However, when it comes to text-based models in the modern day, most widely adopted models now rely on a pre-trained language modeling foundation. This allows them to be capable of understanding natural language even after domain fine-tuning...

Summary

In this chapter, the concept of adversarial performance analysis for machine learning models was introduced. Adversarial attacks aim to deceive models by intentionally inputting misleading or carefully crafted data to cause incorrect predictions. This chapter highlighted the importance of analyzing adversarial performance to identify potential vulnerabilities and weaknesses in machine learning models and to develop targeted mitigation methods. Adversarial attacks can target various aspects of machine learning models, including their bias and fairness behavior and their accuracy-based performance. For instance, facial recognition systems may be targeted by adversaries who exploit biases or discrimination present in the training data or model design.

We also explored practical examples and techniques for analyzing adversarial performance in image, text, and audio data-based models. For image-based models, various approaches such as object size, orientation, blurriness...
