
You're reading from Responsible AI in the Enterprise

Product type: Book
Published in: Jul 2023
Publisher: Packt
ISBN-13: 9781803230528
Edition: 1st
Authors (2):

Adnan Masood

Adnan Masood, PhD is an artificial intelligence and machine learning researcher, visiting scholar at Stanford AI Lab, software engineer, Microsoft MVP (Most Valuable Professional), and Microsoft's regional director for artificial intelligence. As chief architect of AI and machine learning at UST Global, he collaborates with Stanford AI Lab and MIT CSAIL, and leads a team of data scientists and engineers building artificial intelligence solutions to produce business value and insights that affect a range of businesses, products, and initiatives.

Heather Dawe

Heather Dawe, MSc. is a renowned data and AI thought leader with over 25 years of experience in the field. Heather has innovated with data and AI throughout her career, highlights include developing the first data science team in the UK public sector and leading on the development of early machine learning and AI assurance processes for the National Health Service (NHS) in England. Heather currently works with large UK Enterprises, innovating with data and technology to improve services in the health, local government, retail, manufacturing, and finance sectors. A STEM Ambassador and multidisciplinary data science pioneer, Heather also enjoys mountain running, rock climbing, painting, and writing. She served as a jury member for the 2021 Banff Mountain Book Competition and guest edited the 2022 edition of The Himalayan Journal. Heather is the author of several books inspired by mountains and has written for national and international print publications including The Guardian and Alpinist.


Preface

As practicing data scientists, we have seen first-hand how AI models play a significant role in various aspects of our lives. However, as the cliché goes, with this power comes the responsibility to ensure that these decision-making systems are fair, transparent, and trustworthy. That's why we decided to write this book.

We have observed that many companies face challenges when it comes to the governance and auditing of machine learning systems. One major issue is bias, which can lead to unfair outcomes. Another issue is the lack of interpretability, making it difficult to know whether the models are functioning correctly. Finally, there’s the challenge of explaining AI decisions to humans, which can lead to a lack of trust in these systems.

Controlling frameworks and standards (in the form of government regulation, ISO standards, and similar) for AI that ensure it is fair, ethical, and fit for the purpose of its application are still nascent and have only started to become available within the past few years. This can be viewed as surprising given AI's growing ubiquity in our lives. As these frameworks are published and adopted, AI assurance will itself mature and hopefully become as ubiquitous as AI. Until then, we hope this book fills the gaps that data professionals within the enterprise face as they seek to ensure the AI they develop and use is fair, ethical, and fit for purpose.

With these challenges and intentions in mind, we aimed to write a book that fits the following criteria:

  • Does not repeat information that is already widely available
  • Is accessible to business and subject-matter experts who are interested in learning about explainable and interpretable AI
  • Provides practical guidance, including checklists and resources, to help companies get started with explainable AI

We’ve kept the technical language to a minimum and made the book easy to understand so that it can be used as a resource for professionals at all levels of experience.

As AI continues to evolve, it’s important for companies to have a clear understanding of how these systems work and to be able to explain their algorithmic value propositions. This is not just a matter of complying with regulations but also about building trust with customers and stakeholders.

This book is for business stakeholders, technical leaders, regulators, and anyone interested in the responsible use of AI. We cover a range of topics, including explainable AI, algorithmic bias, trust in AI systems, and the use of various tools for fairness assessment and bias mitigation. We also discuss the role of model monitoring and governance in ensuring the reliability and transparency of AI systems.

Given the increasing importance of responsible AI practices, this book is particularly relevant in light of current AI standards and guidelines, such as the EU’s GDPR, the AI Now Institute’s Algorithmic Impact Assessment, and the Partnership on AI’s Principles for Responsible AI. Our hope is that by exploring these critical issues and sharing best practices, we can help you understand the importance of responsible AI and inspire you to take action to ensure that AI is used for the betterment of all.

  1. Exploring the Landscape of Explainable AI and Bias: Chapters 1 and 2 introduce Explainable AI (XAI), a crucial component in the development and deployment of AI models. This section provides a comprehensive overview of the XAI landscape, its importance, and the challenges it poses. It starts with a primer on XAI and ethical AI for model risk management, providing the definitions and concepts you will need for the rest of the book. Next, you will be presented with several harrowing tales of AI gone bad, highlighting the dangers of unexplainable and biased AI. These stories illustrate the importance of XAI and the need for different approaches to address similar problems. Chapter 2, Algorithms Gone Wild, takes a closer look at bias, exploring the different types of bias that can be introduced into models and the effect they have on the outcomes produced. By the end of this section, you will have a deeper understanding of XAI and its challenges, as well as a greater appreciation for the importance of ethical AI and the need to address bias in AI models.
  2. Exploring Explainability, Risk Observability, and Model Governance: Chapters 3 to 6 delve into explainability, risk observability, and model governance, particularly in the context of cloud computing platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud. This section covers several important areas, including model interpretability approaches, measuring and monitoring model drift, audit and compliance standards, an enterprise starter kit for fairness, accountability, and transparency, as well as bias removal, model robustness, and adversarial attacks. These topics are discussed in detail across several chapters to give you a comprehensive understanding of these important concepts.
  3. Applied Explainable AI: Real-world Scenarios and Case Studies: Chapters 7 to 10, the final section, delve into the practical application of explainable AI and the challenges of deploying trustworthy and interpretable models in the enterprise. Real-world case studies and usage scenarios illustrate the need for safe, ethical, and explainable machine learning, and provide solutions to problems encountered in various domains. The chapters in this section explore code examples, toolkits, and solutions offered by cloud platforms such as AWS, GCP, and Azure, Microsoft's Fairlearn framework, and Azure OpenAI Large Language Models (LLMs) such as GPT-3, GPT-4, and ChatGPT. Specific topics covered include interpretability toolkits, fairness measures, fairness in AI systems, and bias mitigation strategies. We will also review a real-world implementation of GPT-3, along with recommendations and guidelines for using LLMs in a safe and responsible manner.

Who this book is for

As we continue to work with enterprises, advising and guiding them as they seek to transform themselves to become data-driven – producing their own actionable insights, machine-learning models, and AI at scale – we are acutely aware of their concerns and questions regarding AI assurance.

This book is written for a wide range of professionals in the field of enterprise AI and machine learning. This includes data scientists, machine learning engineers, AI practitioners, IT professionals, business stakeholders, software engineers, AI ethicists, and, last but not least, enterprise change leaders. These are the people working within the enterprise both to effect the changes required to become data-driven and to successfully develop and deliver AI models at scale.

The book covers a comprehensive range of topics, from XAI and ethical considerations to model governance and compliance standards, and provides practical guidance on using tools such as hyperscalers, open source tools, and Microsoft Fairlearn. It is a valuable resource for those who are interested in understanding the latest developments in AI governance, including the role of internal AI boards, the importance of data governance, and the latest industry standards and regulations.

The book is also relevant for AI professionals in a variety of industries, including healthcare, customer service, and finance, using conversational AI and predictive analytics. Whether you are a business stakeholder responsible for making decisions about AI adoption, an AI ethicist concerned with the ethical implications of AI, or an AI practitioner responsible for building and deploying models, this book provides valuable insights and practical guidance on building responsible and transparent AI models.

Essential chapters tailored to distinct AI-related positions

For AI ethicists, auditors, and compliance personnel, the most relevant chapters are as follows:

  • Chapter 1, Explainable and Ethical AI Primer
  • Chapter 5, Model Governance, Audit, and Compliance
  • Chapter 6, Enterprise Starter Kit for Fairness, Accountability, and Transparency
  • Chapter 10, Foundational Models and Azure OpenAI

These chapters focus on explainable and ethical AI, model governance, compliance standards, responsible AI implementation, and the challenges associated with large language models.

Managers and business stakeholders will find the following chapters most relevant:

  • Chapter 2, Algorithms Gone Wild
  • Chapter 5, Model Governance, Audit, and Compliance
  • Chapter 6, Enterprise Starter Kit for Fairness, Accountability, and Transparency

These chapters cover the impact of bias in AI, the importance of transparency and accountability in AI-driven decision-making, and the practical aspects of implementing AI governance within an organization.

Data scientists and machine learning engineers will find the entire book useful, but the most relevant chapters for them are as follows:

  • Chapter 1, Explainable and Ethical AI Primer
  • Chapter 3, Opening the Algorithmic Black Box
  • Chapter 4, Robust ML - Monitoring and Management
  • Chapter 7, Interpretability Toolkits and Fairness Measures - AWS, GCP, Azure, and AIF 360
  • Chapter 8, Fairness in AI Systems with Microsoft Fairlearn
  • Chapter 9, Fairness Assessment and Bias Mitigation with Fairlearn and the Responsible AI Toolbox

These chapters provide valuable information on explainable and ethical AI, model interpretability, monitoring model performance, and practical applications of fairness and bias mitigation techniques.

While the book covers advanced-level concepts, it is written in an accessible style and assumes a basic understanding of AI and machine learning concepts. However, those with less experience may need to put in additional effort to fully understand the material.

What this book covers

This book is a comprehensive guide to responsible AI and machine learning model governance. It covers a broad range of topics, including XAI, ethical AI, bias in AI systems, model interpretability, model governance and compliance, fairness and accountability in AI, data governance, and ethical AI education and upskilling. This book provides practical insight into using tools such as Microsoft Fairlearn for fairness assessment and bias mitigation. It is a must-read for data scientists, machine learning engineers, AI practitioners, IT professionals, business stakeholders, and AI ethicists who are responsible for implementing AI models in their organizations. The content is presented in an easy-to-understand style, making it a valuable resource for professionals at all levels of expertise.

Chapter 1, Explainable and Ethical AI Primer, provides a comprehensive understanding of key concepts related to explainable and interpretable AI. You will become familiar with the terminology of safe, ethical, explainable, robust, transparent, auditable, and interpretable machine learning. This chapter serves as a solid foundation for novices as well as a reference for experienced machine learning practitioners. It starts with a discussion of the machine learning development life cycle and outlines the taxonomy of interpretable AI and model risk observability, providing a complete overview of the field.

Chapter 2, Algorithms Gone Wild, covers the current limitations and challenges of AI and how it can contribute to the amplification of existing biases. Despite these challenges, the chapter highlights the increasing use of AI and provides an overview of its various applications, including AI horror stories and cases of discrimination, bias, disinformation, fakes, social credit systems, surveillance, and scams. This chapter serves as a platform for discussion, bringing together the different uses of AI and offering a space for you to reflect on the potential consequences of its use. By the end of this chapter, you will have a deeper appreciation for the complex and nuanced nature of AI and the importance of considering its ethical and social implications.

Chapter 3, Opening the Algorithmic Black Box, teaches you about the field of XAI and its challenges, including a lack of formality and poorly defined definitions. The chapter provides an overview of four major categories of interpretability methods, which allow for a multi-perspective comparison of these methods. The purpose of this chapter is to explain black-box models and create white-box models, to ensure fairness and restrict discrimination, and to analyze the sensitivity of model predictions. The chapter will also show how to explain black-box models with white-box models and provide an understanding of the differential value proposition and approaches used in each of these libraries. By the end of this chapter, you will have a comprehensive understanding of the challenges and opportunities in the field of XAI, and the various interpretability methods available for creating more transparent and explainable machine learning models.
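The white-box-explains-black-box idea described above can be previewed with a surrogate model: a simple, interpretable model trained to mimic a complex model's predictions. The following sketch is illustrative only; the dataset, model choices, and depth limit are our assumptions, not code from the book:

```python
# Sketch: approximating a black-box model with a white-box surrogate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "black box": an ensemble whose internal logic is hard to inspect.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The "white box": a shallow tree trained on the black box's predictions
# rather than on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")
```

The surrogate's tree structure can then be read directly, giving an approximate, human-readable account of the black box's behavior.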

Chapter 4, Robust ML - Monitoring and Management, talks about the importance of ongoing validation and monitoring as an integral part of the model development life cycle. The chapter focuses on the process of model performance monitoring, beginning with quantifying the degradation of a model. You will learn about identifying the parameters to track the model’s performance and defining the thresholds that should raise an alert. The chapter focuses on the essential components of model performance monitoring, including maintaining the business purpose of a model and detecting drifts in its direction during and after deployment. You will learn how to leverage various techniques as part of model monitoring and build a process for detecting, alerting, and addressing drifts. The chapter aims to demonstrate the importance of automated monitoring of a model running in production, providing comprehensive measures for data drift monitoring, model concept drift monitoring, statistical performance monitoring, ethical fairness monitoring, business scenario simulation, what-if analysis, and comparing production parameters such as parallel model execution and custom metrics. By the end of this chapter, you will have a comprehensive understanding of the importance of ongoing validation and monitoring in the model development life cycle and the techniques for detecting and addressing drifts in a model’s performance.
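One simple form of the data drift detection discussed above compares a feature's training-time distribution against its production distribution with a two-sample statistical test. The synthetic data and alert threshold below are illustrative assumptions, not the book's own monitoring pipeline:

```python
# Sketch: flagging data drift with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)   # training-time feature values
production = rng.normal(loc=0.5, scale=1.0, size=1000)  # live values, shifted

statistic, p_value = ks_2samp(reference, production)

ALERT_THRESHOLD = 0.05  # illustrative significance level
drift_detected = p_value < ALERT_THRESHOLD
if drift_detected:
    print(f"Drift alert: KS statistic {statistic:.3f}, p-value {p_value:.3g}")
```

In practice, a check like this would run per feature on a schedule, feeding the detecting-and-alerting process the chapter describes.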

Chapter 5, Model Governance, Audit, and Compliance, explores the predictive power of machine learning algorithms and their ability to take in vast amounts of data from a variety of sources. The chapter focuses on the governance aspect of these models, as there is growing concern about the lack of transparency in AI-driven decision-making processes. You will review various regulatory initiatives, including those by the United States Financial Services Commission and the U.S. Federal Trade Commission, concerning AI and machine learning. The chapter will cover different audit and compliance standards and the rapidly evolving regulation of AI, given its potential impact on people’s lives, livelihoods, healthcare, and financial systems. You will understand the importance of auditability in AI models with production traceability, including the availability of immutable snapshots of models for long-term auditability, along with their source code, metadata, and other associated artifacts. By the end of this chapter, you will have a comprehensive understanding of the governance aspect of machine learning models and the importance of ensuring transparency and accountability in AI-driven decision-making processes.

Chapter 6, Enterprise Starter Kit for Fairness, Accountability, and Transparency, demonstrates the importance of putting ethical AI principles into action as organizations adopt AI. The chapter provides a practical approach to using AI and appropriate tools to ensure AI fairness, bias mitigation, explainability, privacy compliance, and privacy in an enterprise setting. You will gain an understanding of how trust, fairness, and comprehensibility are the keys to responsible and accountable AI and how AI governance can be achieved in an enterprise setting with supporting tools. The chapter provides a walk-through of the implementation of bias mitigation and fairness, explainability, trust and transparency, and privacy and regulatory compliance within an organization. You will also review the variety of tools available for XAI, including the TensorBoard Projector, What-If Tool, Aequitas, AI Fairness 360, AI Explainability 360, ELI5, explainerdashboard, Fairlearn, interpret, Scikit-Fairness, InterpretML, tf-explain, XAI, AWS Clarify, and Vertex Explainable AI. By the end of this chapter, you will have a comprehensive understanding of how to use AI governance tools to ensure the responsible and accountable use of AI in an enterprise setting.

Chapter 7, Interpretability Toolkits and Fairness Measures - AWS, GCP, Azure, and AIF 360, showcases the use of interpretability toolkits and cloud AI providers’ offerings to identify and limit bias and explain predictions in machine learning models. The chapter will provide an overview of the open source and cloud-based interpretability toolkits available, including IBM’s AIF360, Amazon SageMaker’s Clarify, Google’s Vertex Explainable AI, and Model Interpretability in Azure Machine Learning. You will gain a deeper understanding of the variety of tools available for explainable AI and the benefits they provide in terms of greater visibility into training data and models. By the end of this chapter, you will have a comprehensive understanding of the role of interpretability toolkits in ensuring the fairness and transparency of machine learning models.

Chapter 8, Fairness in AI Systems with Microsoft Fairlearn, talks about Microsoft Fairlearn, an open source fairness toolkit for AI. The chapter provides an overview of the toolkit and its capabilities, including its use as a guide for data scientists to better understand fairness issues in AI. You will learn about the two components of the Fairlearn Python package: metrics for assessing which groups are negatively impacted by a model and metrics for comparing multiple models. The chapter covers the assessment of fairness using allocation harm and quality-of-service harm, as well as the mitigation of unfairness and approaches for improving an unfair model. By the end of this chapter, you will have a comprehensive understanding of Fairlearn and its role in ensuring the fair and ethical use of machine learning models.

Chapter 9, Fairness Assessment and Bias Mitigation with Fairlearn and the Responsible AI Toolbox, explores the practical application of Fairlearn in real-world scenarios. The chapter covers the evaluation of fairness-related metrics and techniques for mitigating bias and disparity using Fairlearn. You will also learn about the Responsible AI Toolbox, which provides a collection of model and data exploration and assessment user interfaces and libraries for a better understanding of AI systems.

The chapter will introduce the Responsible AI Dashboard, Error Analysis Dashboard, Interpretability Dashboard, and Fairness Dashboard and how they can be used to identify model errors, diagnose why those errors are happening, understand model predictions, and assess the fairness of the model. By the end of this chapter, you will have a comprehensive understanding of how to use the Responsible AI Toolbox and Fairlearn to ensure the fair and ethical use of machine learning models in your own work.

Chapter 10, Foundational Models and Azure OpenAI, demonstrates the practical use cases of governance when it comes to LLMs – in this case, the API offerings of OpenAI and Azure OpenAI. The chapter covers the implementation of LLMs, such as GPT-3, which can be used for a variety of business use cases, and delves into the challenges associated with governing LLMs, such as data privacy and security. While these models can enhance the functionality of enterprise applications, they also pose significant challenges in terms of governance. The chapter highlights the importance of AI governance for the ethical and responsible use of LLMs and the need for bias remediation techniques to ensure that AI solutions are fair and unbiased. Additionally, we will discuss the data privacy and security measures provided by Azure OpenAI and the significance of establishing an AI governance framework for enterprise use of these tools.

To get the most out of this book

To get the most out of this book, it is important to understand the context and target audience. This book is focused on responsible AI and machine learning model governance, providing in-depth coverage of key concepts such as explainable and ethical AI, bias in AI systems, model interpretability, model governance and compliance, fairness and accountability in AI, data governance, upskilling, and education for ethical AI. The target audience includes data scientists, machine learning engineers, AI practitioners, IT professionals, business stakeholders, and AI ethicists who are responsible for building and deploying AI models in their organizations.

To maximize the benefits of this book, you should have a basic understanding of machine learning and AI. It is recommended to read the chapters in order to build a comprehensive understanding of the topics covered. Additionally, the hands-on examples and practical guidance provided in the book can be applied to real-world situations and can be used as a reference for future projects.

We sincerely hope you enjoy reading this book as much as we enjoyed writing it.

Software/hardware covered in the book: Jupyter Notebook (Python 3.x)

Operating system requirements: Windows, macOS, or Linux

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

This book is filled with references to the classic science fiction novel, The Hitchhiker’s Guide to the Galaxy, one of my favorite books of all time. So, excuse the puns and whimsical language as I pay homage to the humor and creativity of Douglas Adams. May this book guide you on your own journey through the world of AI and machine learning, just as the Guide guided Arthur Dent on his interstellar adventures.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Responsible-AI-in-the-Enterprise. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “All the created cohorts are stored in the cohort_list list, which is passed as an argument to the ResponsibleAIDashboard function.”

A block of code is set as follows:

const set = function(...items) {
    this._arr = [...items];
    this.add = function(item) {
        if (this._arr.includes(item)) {
            return false;
        }
    };
};

Any command-line input or output is written as follows:

pip install data-drift-detector

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “For reference, we used a Standard DS12_v2 compute resource for this exercise, and it worked fine.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at customercare@packtpub.com and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at copyright@packt.com with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Responsible AI in the Enterprise, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there, you can get exclusive access to discounts, newsletters, and great free content in your inbox daily

Follow these simple steps to get the benefits:

  1. Scan the QR code or visit the link below:

https://packt.link/free-ebook/978-1-80323-052-8

  2. Submit your proof of purchase
  3. That's it! We'll send your free PDF and other benefits to your email directly