10 Machine Learning Blueprints You Should Know for Cybersecurity

Attacking text models

Please note that this section contains examples of hate speech and racist content found online.

Just as with image models, text models are susceptible to adversarial attacks. An attacker can modify text so that an ML model misclassifies it, allowing the attacker to escape detection.

A good example of this can be seen on social media platforms. Most platforms have rules against abusive language and hate speech, and automated systems such as keyword-based filters and ML models are used to detect such content, flag it, and remove it. If something outrageous is posted, the platform will block it at the source (that is, not allow it to be posted at all) or remove it within minutes.
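As a rough illustration, the following sketch implements a toy keyword-based filter and a simple character-substitution attack against it. The blocklist (which uses mild stand-ins rather than real slurs), the substitution map, and the helper functions are hypothetical, written only for demonstration:

# Toy keyword filter and a character-substitution ("leetspeak") attack.
# The blocklist uses mild stand-ins for genuinely abusive terms.

BLOCKLIST = {"idiot", "moron"}
SUBSTITUTIONS = {"i": "1", "o": "0", "e": "3"}  # common character swaps

def is_flagged(text):
    # Flag the post if any whitespace-separated token matches the blocklist.
    return any(token in BLOCKLIST for token in text.lower().split())

def perturb(text):
    # Swap selected characters so blocklisted words no longer match exactly.
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())

post = "You are an idiot"
print(is_flagged(post))           # True  -- the filter catches the raw text
print(perturb(post))              # "y0u ar3 an 1d10t"
print(is_flagged(perturb(post)))  # False -- the obfuscated text slips through

The meaning is still obvious to a human reader, but the exact string match the filter relies on is broken.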

A malicious adversary can deliberately manipulate the content so that, to the model, the words appear out of vocabulary or no longer match known abusive terms. For example, according to a study (Poster | Proceedings of the 2019 ACM SIGSAC Conference...
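The same trick degrades learned models, not just keyword filters: a classifier with a fixed vocabulary maps any token it has never seen to a generic unknown-token ID, so the signal the model learned from the abusive word simply disappears. A minimal sketch of that effect, again with a hypothetical vocabulary:

# Toy illustration of the out-of-vocabulary effect.
# A fixed vocabulary maps unseen tokens to a generic <UNK> id.

VOCAB = {"<UNK>": 0, "you": 1, "are": 2, "an": 3, "idiot": 4}

def encode(text):
    # Map each token to its vocabulary id; unseen tokens become <UNK>.
    return [VOCAB.get(token, VOCAB["<UNK>"]) for token in text.lower().split()]

print(encode("You are an idiot"))  # [1, 2, 3, 4] -- abusive token is visible
print(encode("You are an 1d10t"))  # [1, 2, 3, 0] -- token falls out of vocabulary

To the model, the perturbed post is indistinguishable from one containing a typo or an unknown word, so the learned association with abuse is lost.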
