You're reading from A CISO Guide to Cyber Resilience

Product type: Book
Published in: Apr 2024
Publisher: Packt
ISBN-13: 9781835466926
Edition: 1st
Author (1)

Debra Baker

Debra Baker has 30 years of experience in information security. As CEO of TrustedCISO, she provides strategic cybersecurity CISO advisory services, and she runs an AI-first startup aimed at easing the pain of third-party vendor assessment and compliance. Previously, Debra was CISO at RedSeal, where she led the security program to a successful SOC 2 Type 2 attestation, and before that she served as Regulatory Compliance Manager at Cisco, where she founded the cryptographic knowledge base CryptoDoneRight in collaboration with Johns Hopkins University. Debra was named one of the top 100 women in cybersecurity in "Women Know Cyber: 100 Fascinating Females Fighting Cybercrime."

Cyber Resilience in the Age of Artificial Intelligence (AI)

This chapter is about cyber resilience in the age of AI. ChatGPT made AI mainstream seemingly overnight, and its impact has been enormous. I was talking to someone recently who said their grandmother was using ChatGPT. Right then, I knew we were on the cusp of the greatest technology shift since the internet. In the 1990s, I remember much discussion about the internet and whether it could be used for business. Today, that debate seems almost inconceivable, given how much shopping and how many services are delivered online; traditional stores struggle to compete with Amazon and other online retailers. With this rush to use and deploy AI, there are new cybersecurity concerns, such as the following:

  • Data leakage
  • The use of AI by hackers
  • Bias in AI

In this chapter, we’re going to address these concerns while covering the following main topics:

  • ChatGPT
  • ...

ChatGPT

ChatGPT has taken the world by storm. The generative AI platform allows anyone to open a free account and ask ChatGPT any question, have it write a letter, or get help with coding. I decided to try it in early 2023, and once you start using ChatGPT, you quickly think of new ways to use it. One important rule is to never enter PII or other sensitive data into ChatGPT. It now has guardrails and will refuse to answer questions about certain topics or information.
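
As a concrete precaution, inputs can be screened before they ever leave your environment. The following Python sketch is my own illustration, not anything from OpenAI or the book; the regex patterns and placeholder format are assumptions, and a real deployment would need far more exhaustive detection:

```python
import re

# Illustrative pre-filter: redact a few obvious PII formats before a
# prompt is sent to an external AI service. Patterns are examples only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace any matched PII with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309 about SSN 123-45-6789"))
```

A filter like this sits well in a proxy or gateway in front of the AI service, so the rule is enforced for every user rather than relying on individual discipline.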

Securing ChatGPT

For privacy, there is a way to opt out of sharing your data inputs with OpenAI. You can opt out of ChatGPT data collection in a few simple steps:

  1. Access the OpenAI Privacy Request Portal and click on Make a Privacy Request: https://privacy.openai.com/policies
  2. Type in the email address associated with your ChatGPT account
  3. Enter the Organization ID (This is going to be tricky!)
  4. Type in your Organization Name, which is found in your ChatGPT settings
  5. Solve the Captcha and the data opt-out form will...

What is responsible AI?

Responsible AI, also known as ethical AI or AI ethics, refers to the practice of developing, deploying, and using artificial intelligence systems in a way that prioritizes fairness, transparency, accountability, and ethical considerations. It encompasses a set of principles and guidelines to ensure that AI technologies are used responsibly and do not lead to harmful or biased outcomes. Here are some key aspects of responsible AI:

  1. Fairness: Responsible AI aims to minimize bias and discrimination in AI systems. It involves ensuring that AI algorithms and models do not unfairly favor or disadvantage particular groups of people based on factors such as race, gender, age, or socioeconomic status.
  2. Transparency: Transparency in AI means making the AI decision-making process understandable and interpretable. Users and stakeholders should have insights into how AI systems work, how they make decisions, and why they produce certain results.
  3. Accountability...
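
The fairness principle above can be made measurable. The sketch below is an illustrative example with made-up data, not a standard library or a method from the book: it computes a disparate impact ratio between two groups' model outcomes, and applies the common "four-fifths rule" heuristic, which flags a ratio below 0.8 as potential disparate impact:

```python
# Demographic parity check: compare positive-outcome rates across groups.
# Data and threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (0..1)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = model approved, 0 = model denied (made-up data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Potential bias: investigate before deployment")
```

Checks like this belong in the model evaluation pipeline, so a regression in fairness is caught before a model ships, not after.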

Secure AI framework (SAIF)

Google has created the Secure AI Framework (SAIF), which you can follow when securing your AI systems. It’s made up of six core elements:

  • Strengthen and extend robust cybersecurity foundations within the artificial intelligence ecosystem. Utilize established secure-by-default infrastructure safeguards to ensure the security of AI systems, their applications, and users. The same safeguards you use for DevOps infrastructure-as-code (IaC) with SAST, DAST, and OWASP testing should be extended to AI coding.
  • Ensure your AI models and code are vulnerability scanned and monitored once in production in the same way as any other software or cloud assets. This includes monitoring inputs into your AI system and having this included as part of your penetration tests.
  • Ensure your AI is included in your incident response plans and red teaming. For your annual penetration testing, ensure the AI assets and environment are included.
  • Establish platform-level controls...
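
As a sketch of the input monitoring these elements call for, the following hypothetical Python filter validates and logs prompts before they reach an internal AI endpoint; the length limit and the marker strings are assumptions for illustration, not SAIF-prescribed values:

```python
import logging

# Hypothetical SAIF-style input monitor: validate and log every prompt
# reaching an internal AI endpoint, so the logs can feed detection and
# incident response like any other production input stream.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-input-monitor")

MAX_PROMPT_CHARS = 4000
SUSPECT_MARKERS = (
    "ignore previous instructions",   # common prompt-injection phrasing
    "system prompt",                  # probing for hidden instructions
    "BEGIN RSA PRIVATE KEY",          # secrets pasted into a prompt
)

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed; log anything suspicious."""
    if len(prompt) > MAX_PROMPT_CHARS:
        log.warning("Prompt rejected: length %d exceeds limit", len(prompt))
        return False
    lowered = prompt.lower()
    for marker in SUSPECT_MARKERS:
        if marker.lower() in lowered:
            log.warning("Prompt flagged for review: matched %r", marker)
            return False
    log.info("Prompt accepted (%d chars)", len(prompt))
    return True
```

The flagged-prompt log is what ties this back to SAIF: it gives the red team and incident responders the same visibility into AI inputs that they already have into other application traffic.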

AI and cybersecurity – The good, the bad, and the ugly

The rapid proliferation of AI demands a comprehensive examination of its impact on cybersecurity, encompassing both its potential benefits and drawbacks. Let’s begin with the positive aspects, highlighting how ML and AI have and will continue to contribute to the evolution of cybersecurity tools and products.

The good

ML has played a pivotal role in enhancing cybersecurity. AI presents an opportunity to propel cybersecurity tools forward by introducing sophisticated capabilities such as predictive analytics. This forward-looking approach enables the anticipation and pre-emptive mitigation of potential threats before they materialize. AI algorithms, adept at discerning patterns within extensive datasets, excel at identifying anomalies that may signify security breaches. Furthermore, AI empowers the automation of responses to these threats, rapidly deploying countermeasures without the need for human intervention...
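
As a toy illustration of this kind of pattern-based detection (a deliberately simplified sketch, not a production detector; the data and threshold are made up), the following Python snippet flags hourly login counts that deviate sharply from the historical mean using a simple z-score:

```python
import statistics

# Simplified anomaly detection: flag values whose z-score against the
# series mean exceeds a threshold. Real tools use far richer models.

def find_anomalies(history, threshold=2.5):
    """Return (index, value) pairs that deviate sharply from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [
        (i, count) for i, count in enumerate(history)
        if stdev and abs(count - mean) / stdev > threshold
    ]

hourly_logins = [52, 48, 50, 55, 47, 51, 49, 300, 53, 50]  # 300 is the outlier
print(find_anomalies(hourly_logins))
```

The same shape of logic, scaled up with learned baselines instead of a fixed threshold, is what lets AI-driven tools surface a suspicious spike without a human writing a rule for it.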

AI bias

When developing AI models, developers must be aware of bias that is not only human but also statistical and systemic. NIST has met this challenge head-on in NIST SP 1270, titled Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

Figure 14.4 – AI bias

There are three types of bias:

  • Systemic
  • Statistical
  • Human

Systemic bias

Systemic biases are pervasive in historical, societal, and institutional norms. They are ingrained imbalances that arise from the established protocols and customary practices of institutions, leading to preferential treatment of certain societal groups while others are disadvantaged or undervalued. These biases can exist without any deliberate intent to discriminate; they often persist simply because the majority adheres to longstanding rules and norms. Examples of such systemic biases...

NIST AI RMF

NIST has created an AI Risk Management Framework (RMF). It focuses on the development of an AI model from conceptualization through deployment. A diagram of the NIST AI RMF can be seen in Figure 14.5:

Figure 14.5 – NIST AI RMF

The NIST AI Risk Management Framework covers the development cycle of creating and maintaining an AI system. Most importantly, it shows the importance of human oversight, transparency, and ongoing governance. As I explained earlier, the other compliance frameworks your company meets provide a firm foundation. AI compliance needs to be incorporated into your regular risk management plans and processes, and if you are planning to deploy an AI system at your company, you need an AI policy. From the conception of an AI system through to its deployment and maintenance...

Summary

This chapter discussed cyber resilience in the age of artificial intelligence (AI) and addressed various concerns related to AI in cybersecurity, highlighting both the positive and negative aspects of AI’s impact.

The positive aspects include how machine learning (ML) and AI can enhance cybersecurity tools and products by introducing capabilities such as predictive analytics, pattern recognition, and automated threat response. AI can also help improve threat analysis and reduce false positives, enhancing the efficiency of cybersecurity efforts.

However, the negative aspects involve the risks associated with widespread AI use. These risks include the potential misuse of AI for hacking, data poisoning, and privacy concerns. It’s essential to implement guardrails for AI input ingestion, validate data, and maintain human oversight from development to training and ongoing monitoring to prevent model poisoning.

Responsible AI development,...
