Responsible AI in the Enterprise

Enterprise risk management and governance

In this section, we will discuss how the monitoring and management of risk associated with AI should be recognized as one part of an enterprise’s risk management and governance framework.

Given how recently AI has entered business use (compared to, say, offices, computers, and data warehouses), the risk management of AI is not yet an established process in many enterprises. While regulated business sectors such as financial services and healthcare will be familiar with ensuring their machine learning models adhere to a regulator's rules, this will not be the case for enterprises in other, currently unregulated, business areas.

Enterprise risk governance is a critical process that involves identifying, assessing, and managing risks throughout an organization or enterprise. It requires implementing effective policies, procedures, and controls to mitigate risks and ensure that the organization operates in a safe, secure, and compliant manner.

The primary objective of enterprise risk governance is to enable an organization to develop a comprehensive understanding of its risks and manage them effectively. This encompasses identifying and assessing risks related to the organization’s strategic objectives, financial performance, operations, reputation, and compliance obligations. Establishing a risk management framework is a typical approach to enterprise risk governance, which involves developing policies and procedures for risk identification, assessment, and mitigation. It also involves assigning responsibility for risk management to specific individuals or teams within the organization.

To maintain effective enterprise risk governance, the ongoing monitoring and evaluation of risk management practices are necessary. This ensures that an organization can respond to emerging risks promptly and efficiently. Furthermore, regular reporting to stakeholders such as executives, board members, and regulators is vital to ensure they are informed about the organization’s risk profile and risk management activities.

Tools for enterprise risk governance

There are several enterprise risk governance frameworks and tools available to help organizations implement effective risk management practices. One commonly used framework is the ISO 31000:2018 standard, which provides guidelines for risk management principles, frameworks, and processes. Other frameworks include COSO’s ERM (Enterprise Risk Management) and the NIST Cybersecurity Framework. There is also COBIT (Control Objectives for Information and Related Technology), ITIL (Information Technology Infrastructure Library), and PMBOK (Project Management Body of Knowledge), which provide guidance to manage risks related to information technology, service management, and project management, respectively.

Risk management tools, such as risk registers, risk heat maps, and risk scoring models, can also be used to help organizations identify and assess risks. These tools can help prioritize risks based on their likelihood and potential impact, enabling organizations to develop appropriate risk mitigation strategies.
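
To make the scoring idea concrete, here is a minimal Python sketch of a risk register ranked by likelihood × impact. It is an illustration only – the field names, rating scales, and example risks are our own assumptions, not those of any particular GRC product:

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)   -- illustrative scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, as used in many risk heat maps
        return self.likelihood * self.impact

# A miniature risk register; entries and ratings are invented for illustration
register = [
    Risk("Credit-risk model drift", likelihood=4, impact=4),
    Risk("Biased CV-screening model", likelihood=3, impact=5),
    Risk("Training-data privacy breach", likelihood=2, impact=5),
]

# Prioritize: highest-scoring risks first, so mitigation effort goes there
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name:32s} likelihood={risk.likelihood} impact={risk.impact} score={risk.score}")
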

Technology solutions, such as GRC (governance, risk, and compliance) platforms, can also aid in enterprise risk governance by providing a centralized system to manage risks and ensure compliance with relevant regulations and standards. AI-powered risk management tools are also becoming increasingly popular, as they can help organizations identify and mitigate risks more efficiently and effectively.

AI risk governance in the enterprise

Within an enterprise, AI risk governance is the set of processes that ensures the use of AI does not have a detrimental impact on the business in any way. There are a significant number of ways this could happen, including the following:

  • An AI system used in selection processes, such as the automated sifting of job candidates within HR, screens applicants with bias or prejudice of some kind
  • Automated defect monitoring in a tire factory starts accepting defective tire walls (or, conversely, rejecting sound ones) because of drift in the underlying ML model (a simple drift check is sketched below)
  • A loan company's credit-risk model inappropriately refuses an applicant credit on the grounds of their employment type

These are just three examples; there are many more. Such adverse outcomes can potentially cause harm to a business, its customers, and other stakeholders, and at the very least, they can have a reputational impact on the business.
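
The tire-factory example above hinges on undetected drift in the model's input data. As a minimal sketch of one way such drift can be flagged (our illustration, not a method this chapter prescribes), a two-sample Kolmogorov–Smirnov test compares a production feature against its training-time distribution; the feature, numbers, and alert threshold below are invented:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical "tire wall thickness" readings: training-time vs. current production
training_feature = rng.normal(loc=8.0, scale=0.3, size=5_000)
production_feature = rng.normal(loc=8.4, scale=0.3, size=1_000)  # drifted upwards

# Two-sample KS test: a small p-value suggests the two distributions differ
result = ks_2samp(training_feature, production_feature)

ALERT_THRESHOLD = 0.01  # illustrative; choose to suit your monitoring cadence
if result.pvalue < ALERT_THRESHOLD:
    print(f"Drift suspected (KS={result.statistic:.3f}, p={result.pvalue:.2e}) - review the model")
else:
    print("No drift detected on this feature")
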

Enterprise risk management is all about managing the risks (ideally, before they become issues) in order to yield business benefits, and AI is no different. AI risk governance is a crucial process that involves managing and mitigating the risks that arise from the development and deployment of AI models within an organization or enterprise. Although the use of AI technologies in business processes can result in significant benefits, it can also introduce new risks and challenges that require prompt attention.

Effective enterprise AI risk governance entails identifying and assessing potential risks associated with the use of AI, including data privacy concerns, algorithmic bias, cybersecurity threats, and legal and regulatory compliance issues. Furthermore, it involves implementing policies, procedures, and technical safeguards to manage these risks, such as model explainability and transparency, data governance, and robust testing and validation processes.

By adopting a sound enterprise AI risk governance strategy, organizations can ensure that their AI technologies are deployed safely and responsibly. Such governance practices ensure that AI models are transparent, auditable, and accountable, and that they do not introduce unintended harm to individuals or society. Additionally, effective governance strategies help organizations to build trust in their AI systems, minimize reputational risks, and maximize the potential of AI technologies in their operations.

Perpetuating bias – the network effect

Bias exists in human decision-making, so why is it so bad if algorithms take this bias and reflect it in their decisions?

The answer lies in amplification through the network effect. Think bigot in the cloud!

An unfair society inevitably yields unfair models. As much as we like to think we are fair and free of subconscious judgments, we as humans are prone to negative (and positive) implicit bias, stereotyping, and prejudice. Implicit (unconscious) bias is not intentional, but it can still impact how we judge others based on a variety of factors, including gender, race, religion, culture, language, and sexual orientation. Now, imagine this as part of a web-based API – a service offered in the spirit of democratization of AI – on a popular machine learning acceleration platform to speed up development, with this bias proliferated across multiple geographies and demographics! Bias in the cloud is a serious concern.

Figure 1.2: A list of implicit biases

Blaming this unfairness on society is one way to handle this (albeit not a very good one!) but considering the risk of perpetuating biases in algorithms that may outlive us all, we must strive to eliminate these biases without compromising prediction accuracy. By examining today’s data on Fortune 100 CEOs’ profiles, we can see that merely reinforcing biases based on features such as gender and race could lead to erroneous judgments, overlooked talent, and potential discrimination. For instance, if we have historically declined loans to minorities and people of color, using a dataset built on these prejudiced and bigoted practices to create a model will only serve to reinforce and perpetuate such unfair behavior.
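
To make the loan example concrete, the short sketch below computes per-group approval rates and their gap (the demographic parity difference) for a hypothetical slice of historical lending data; a model trained naively on such labels will tend to reproduce the gap. The column names and values are invented for illustration, and libraries such as Fairlearn (covered later in this book) provide these metrics off the shelf:

import pandas as pd

# Hypothetical historical lending decisions (columns and values are illustrative)
history = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [  1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval (selection) rate per group
rates = history.groupby("group")["approved"].mean()
print(rates)

# Demographic parity difference: gap between the best- and worst-treated groups
dp_gap = rates.max() - rates.min()
print(f"Demographic parity difference in the historical data: {dp_gap:.2f}")

# A model fit to these labels inherits the gap unless bias is mitigated explicitly
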

On top of that, we miss a great opportunity – to address our own biases before we codify them in perpetuity.

The problem with delegating our decisions to machines with our biases intact is that these algorithms then perpetuate gender, affinity, attribution, conformity, and confirmation biases, along with halo and horn effects and affirmation, reinforcing our collective stereotypes. Today, when algorithms act as the first line of triage, minorities have to “whiten” job résumés (see Minorities Who “Whiten” Job Resumes Get More Interviews, Harvard Business Review 14) to get more interviews. Breaking this cycle of bias amplification and correcting the network effect in a fair and ethical manner is one of the greatest challenges of our digital times.

Transparency versus black-box apologetics – advocating for AI explainability

We like to think transparency and interpretability are good – it seems very logical to assume that if we can understand what algorithms are doing, it helps us troubleshoot, debug, measure, improve, and build upon them easily. With all these virtues, you would imagine interpretability is a no-brainer. Surprise! It is not without its critics. Explainable AI and uninterpretable (black-box) AI represent two opposing viewpoints in the field. Proponents of explainable AI argue that it enhances transparency, trustworthiness, and regulatory compliance. In contrast, supporters of uninterpretable AI maintain that it can lead to better performance in complex and opaque systems, while also protecting intellectual property. It is interesting to see that not everyone is a big fan of interpretability, including some of the greatest minds of our times, such as Turing Award winners Yoshua Bengio and Yann LeCun.

This important argument was the centerpiece in the first-ever great debate 15 at a NeurIPS conference, where Rich Caruana and Patrice Simard argued in favor of it, while Kilian Weinberger and Yann LeCun were against it. The debate reflects the ongoing discussion in the machine learning community regarding the trade-off between performance and interpretability.

Researchers and practitioners who consider black-box AI models as acceptable often emphasize the performance benefits of these models, which have demonstrated state-of-the-art results in various complex tasks. They argue that the high accuracy achieved by black-box models can outweigh the need for interpretability, particularly when solving intricate problems. Proponents also contend that real-world complexity necessitates embracing the intricacy of black-box models to capture the nuances of the problem at hand. They assert that domain experts can still validate the model’s output and use their expertise to determine whether the model’s predictions are reasonable, even if the model itself is not fully interpretable.

Conversely, critics tell the joke, “Why did the black-box AI cross the road? Nobody knows, as it won’t explain itself!”

But seriously, we should emphasize the importance of ethics and fairness, as a lack of interpretability may lead to unintended biases and discrimination, undermining trust in the AI system. We should also stress the importance of accountability and transparency, as it is crucial for users and stakeholders to understand the decision-making process and factors influencing a model’s output. We would like to argue that model interpretability is vital to debug and improve models, as identifying and correcting issues in black-box models can be challenging. Regulatory compliance often requires a level of interpretability to ensure that AI systems abide by legal requirements and ethical guidelines, which would be virtually impossible if a model couldn’t explain itself.
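
As one small, model-agnostic illustration of the debugging point – our own sketch, not a technique this chapter prescribes – permutation importance from scikit-learn shows which features a trained “black-box” model actually leans on; the dataset and model choice here are arbitrary:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque-ish model on a standard dataset (choices are illustrative)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features - a first, coarse window into the model
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:25s} importance={result.importances_mean[i]:.3f}")
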

In a Wired interview titled Google’s AI Guru Wants Computers to Think More Like Brains 16, Turing Award winner and father of modern neural networks, Geoff Hinton stated the following:

“I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster.”

This is a fairly strong statement that was met with a rebuttal in an article 17 in which the counterargument focused on what was best for humanity and what it means for society. The way we see it, there is room for both. In In defense of blackbox models, Holm 18 states the following:

“...we cannot use blackbox AI to find causation, systemization, or understanding and these questions remain in purview of human intelligence. On the contrary, blackbox methods can contribute substantively and productively to science, technology, engineering, and math.”

For most practitioners, the goal is to strike a balance between transparency and performance that satisfies the needs of various stakeholders, including users, regulators, and developers. The debate continues, with different researchers offering diverse perspectives based on their fields of expertise and research focus.

As professionals in the field of machine learning, we emphasize the importance of transparent, interpretable, and explainable outcomes to ensure their reliability. Consequently, we are hesitant to rely on “black-box” models that offer no insight into their decision-making processes. Although some argue that accuracy and performance are sufficient to establish trust in AI systems, we maintain that interpretability is crucial. We recognize the ongoing debate regarding the role of interpretability in machine learning, but it is essential to note that our position favors interpretability over a singular focus on outcomes – your mileage may vary (YMMV) 19.

The AI alignment problem

The AI alignment problem has become increasingly relevant in recent years due to the rapid advancements in AI and its growing influence on various aspects of society. This problem refers to the challenge of designing AI systems that align with human values, goals, and ethics, ensuring that these systems act in the best interests of humanity.

One reason for the increasing popularity of the AI alignment problem is the potential for AI systems to make high-stakes decisions, which may involve trade-offs and ethical dilemmas. A classic example is the trolley problem, where an AI-controlled vehicle must choose between two undesirable outcomes, such as saving a group of pedestrians at the cost of harming its passengers. This ethical dilemma highlights the complexity of aligning AI systems with human values and raises questions about the responsibility and accountability of AI-driven decisions.

In addition to this, there are a few other significant challenges to AI alignment – containment and the do anything now (DAN) problem. The containment problem refers to the challenge of ensuring that an AI system does not cause unintended harm or escape from its intended environment. This problem is particularly important when dealing with AI systems that have the potential to cause significant harm, such as military or medical AI systems. The DAN problem, on the other hand, refers to the challenge of ensuring that an AI system does not take actions that are harmful to humans or society, even if those actions align with the system’s goals. For example, the paperclip problem is a thought experiment that illustrates this problem.

In this scenario, an AI system is designed to maximize the production of paperclips. The system becomes so focused on this goal that it converts all matter on Earth, including humans, into paperclips. Related alignment challenges include reward hacking, corrigibility, and superintelligence control. The reward hacking problem occurs when an AI system finds a way to achieve its goals that does not align with human values. The corrigibility problem relates to ensuring that an AI system can be modified or shut down if it becomes harmful or deviates from its intended behavior. The superintelligence control problem involves ensuring that advanced AI systems with the potential for superintelligence are aligned with human values and can be controlled if they become a threat.

Addressing these challenges and other AI alignment-related problems is crucial to ensure the safe and responsible development of AI systems, promote their beneficial applications, and prevent unintended harm to individuals and society.
