Attacking Models with Adversarial Machine Learning

Recent advances in machine learning (ML) and artificial intelligence (AI) have increased our reliance on intelligent algorithms and systems. ML systems are used to make decisions on the fly in several critical applications. For example, whether a credit card transaction should be authorized, or whether a particular Twitter account is a bot, is decided by a model within seconds, and that decision drives actions taken in the real world (such as the transaction or account being flagged as fraudulent). Attackers use this reduced human involvement to their advantage and aim to attack models deployed in the real world. Adversarial ML (AML) is the field of ML that focuses on detecting and exploiting flaws in ML models.

Adversarial attacks can come in several forms. Attackers may try to manipulate the features of a data point so that it is misclassified by the model. Another threat vector is data poisoning, where attackers introduce...

Technical requirements

Introduction to AML

In this section, we will learn what exactly AML is. We will begin by understanding the role ML plays in today’s world, followed by the various kinds of adversarial attacks on models.

The importance of ML

In recent times, our reliance on ML has increased. Automated systems and models are present in every sphere of our lives. These systems often allow for fast decision-making without the need for manual human intervention. ML is a boon to security tasks; a model can learn from historical behavior, recognize patterns, extract features, and render a decision much faster and more efficiently than a human can. Examples of ML systems handling security-critical decisions are given here:

  • Real-time fraud detection in credit card usage often uses ML. Whenever a transaction is made, the model looks at your location, the amount, the billing code, your past transactions, historical patterns, and other behavioral features. These are fed...

Attacking image models

In this section, we will look at two popular attacks on image classification systems: the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). We will first look at the theoretical concepts underlying each attack, followed by an actual implementation in Python.

FGSM

FGSM is one of the earliest methods used for crafting adversarial examples for image classification models. Proposed by Goodfellow et al. in 2014, it is a simple and powerful attack against neural network (NN)-based image classifiers.

How FGSM works

Recall that NNs consist of layers of neurons placed one after the other, with connections from the neurons in one layer to those in the next. Each connection has an associated weight, and these weights represent the model parameters. The final layer produces an output that can be compared with the available ground truth to calculate the loss, a measure of how far off the prediction is from the ground truth. The loss is backpropagated...
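To make the gradient-sign idea concrete, here is a minimal sketch of a single FGSM perturbation step, assuming a PyTorch image classifier; the model, image, label, and epsilon value are placeholders for this example rather than code from the chapter.

```python
# Minimal FGSM sketch (assumes a PyTorch image classifier; `model`,
# `image`, `label`, and `epsilon` are placeholders for illustration).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` using the Fast Gradient Sign Method."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label
    output = model(image)
    loss = F.cross_entropy(output, label)

    # Backpropagate to obtain the gradient of the loss w.r.t. the input pixels
    model.zero_grad()
    loss.backward()

    # Step in the direction that increases the loss, then clip to a valid pixel range
    perturbed = image + epsilon * image.grad.sign()
    return torch.clamp(perturbed, 0, 1).detach()
```

A larger epsilon makes misclassification more likely, but it also makes the perturbation more noticeable to a human observer.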

Attacking text models

Please note that this section contains examples of hate speech and racist content online.

Just as with images, text models are also susceptible to adversarial attacks. Attackers can modify the text so as to trigger a misclassification by ML models. Doing so can allow an adversary to escape detection.

A good example of this can be seen on social media platforms. Most platforms have rules against abusive language and hate speech. Automated systems such as keyword-based filters and ML models are used to detect such content, flag it, and remove it. If something outrageous is posted, the platform will block it at the source (that is, not allow it to be posted at all) or remove it in the span of a few minutes.

A malicious adversary can purposely manipulate the content in order to fool a model into treating the words as out of vocabulary or as something other than known abusive terms. For example, according to a study (Poster | Proceedings of the 2019 ACM SIGSAC Conference...
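As a toy illustration (not the technique from the study cited above), the following sketch shows how a simple character-level substitution can push a flagged word outside the vocabulary of a naive keyword filter; the blocklist and substitution table are invented for this example.

```python
# Toy illustration of character-level evasion against a keyword filter.
# The blocklist and substitution table are illustrative only.
BLOCKLIST = {"idiot", "moron"}

def is_abusive(text: str) -> bool:
    """Flag text if any token matches the blocklist exactly."""
    return any(token.lower().strip(".,!?") in BLOCKLIST for token in text.split())

def perturb(text: str) -> str:
    """Swap selected characters for visually similar ones to dodge exact matching."""
    substitutions = {"i": "1", "o": "0"}
    return "".join(substitutions.get(ch, ch) for ch in text)

original = "you are an idiot"
evasive = perturb(original)          # "y0u are an 1d10t"
print(is_abusive(original))          # True  - caught by the filter
print(is_abusive(evasive))           # False - same meaning to a human, missed by the filter
```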

Developing robustness against adversarial attacks

Adversarial attacks can be a serious threat to the security and reliability of ML systems. Several techniques can be used to improve the robustness of ML models against adversarial attacks. Some of these are described next.

Adversarial training

Adversarial training is a technique where the model is trained on adversarial examples in addition to the original training data. Adversarial examples are generated by perturbing the original input data in such a way that the perturbed input is misclassified by the model. By training the model on both the original and adversarial examples, the model learns to be more robust to adversarial attacks. The idea behind adversarial training is to simulate the types of attacks that the model is likely to face in the real world and make the model more resistant to them.
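A minimal sketch of one adversarial training pass is shown below; it assumes a PyTorch classifier and reuses the illustrative fgsm_attack helper sketched earlier, and the equal weighting between clean and adversarial loss is an arbitrary choice for the example.

```python
# Sketch of one epoch of adversarial training (assumes a PyTorch classifier
# and the illustrative fgsm_attack helper from the FGSM section).
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in loader:
        # Craft adversarial counterparts of the clean batch
        adv_images = fgsm_attack(model, images, labels, epsilon)

        # Train on the clean and adversarial examples together
        optimizer.zero_grad()
        loss_clean = F.cross_entropy(model(images), labels)
        loss_adv = F.cross_entropy(model(adv_images), labels)
        loss = 0.5 * (loss_clean + loss_adv)
        loss.backward()
        optimizer.step()
```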

Defensive distillation

Defensive distillation is a technique that involves training a model on soft targets rather than hard...
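As a rough sketch of the soft-target idea, the snippet below computes a distillation loss from a teacher model's temperature-scaled probabilities; the logits and the temperature value T are placeholders for illustration, not values from this chapter.

```python
# Sketch of the soft-target step in defensive distillation (teacher and
# student logits are placeholders; T is the distillation temperature).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Cross-entropy between the student's and teacher's softened distributions."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)    # teacher's soft labels
    log_probs = F.log_softmax(student_logits / T, dim=1)   # student at the same temperature
    return -(soft_targets * log_probs).sum(dim=1).mean()
```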

Summary

In recent times, human reliance on ML has grown exponentially. ML models are involved in several security-critical applications, such as detecting fraud, abuse, and other kinds of cybercrime. However, many models are susceptible to adversarial attacks, in which attackers manipulate the input so as to fool the model. This chapter covered the basics of AML and the goals and strategies that attackers employ. We then discussed two popular adversarial attack methods, FGSM and PGD, along with their implementation in Python. Next, we learned about methods for manipulating text and their implementation.

Because of the importance and prevalence of ML in our lives, it is necessary for security data scientists to understand adversarial attacks and learn to defend against them. This chapter provides a solid foundation for AML and the kinds of attacks involved.

So far, we have discussed multiple aspects of ML for security problems. In the next chapter, we will pivot to a closely related topic...
