Model Governance, Audit, and Compliance

“In this era of profound digital transformation, it’s important to remember that business, as well as government, has a role to play in creating shared prosperity – not just prosperity. After all, the same technologies that can be used to concentrate wealth and power can also be used to distribute it more widely and empower more people.”

– Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy

“Some cultures embrace privacy as the highest priority part of their culture. That’s why the U.S., Germany, and China may be at different levels in the spectrum. But I also believe fundamentally that every user does not want his or her data to be leaked or used to hurt himself or herself. I think GDPR is a very good first step, even though I might disagree with the way it was implemented and the effect it has on companies. I think governments should put a stake in the ground and say...

Policies and regulations

In this section, we will review national policies and regulations pertaining to AI in various countries and regions. It is important to note the nuances, both the similarities and the differences, among these policies, since AI has a wide-ranging global impact.

United States

The United States (U.S.) currently lacks comprehensive regulation for AI at the national (federal) level. A few initiatives are in the pipeline, including the Algorithmic Accountability Act, which aims to address the issues surrounding AI bias and discrimination. One notable effort is the National Institute of Standards and Technology (NIST) initiative called Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. This initiative seeks to develop a framework to assess and mitigate biases in AI systems, focusing on transparency, explainability, and fairness. In the absence of an all-encompassing national standard to govern AI models, states...

Professional bodies and industry standards

Professional bodies for computer and information sciences have provided their own codes of conduct and standards for AI. Here is a brief overview of these standards.

Microsoft’s Responsible AI framework

The Microsoft Responsible AI Standard, v2 [19], is a comprehensive framework designed to guide the development, deployment, and maintenance of AI systems in an ethical, reliable, and inclusive manner. It encompasses a broad range of goals and requirements addressing critical aspects such as system reliability and safety, ongoing monitoring, feedback and evaluation, privacy, security, and inclusiveness. The standard emphasizes the importance of conducting thorough impact assessments, adhering to transparency, and incorporating guidelines for human-AI interactions to mitigate potential risks and failures. Furthermore, the framework entails regular evaluations, documentation updates, and collaboration with the Office of Responsible AI...

Technology toolkits

Along with guidance documents and presentations, enterprises need toolkits that can actually parse datasets, models, and code to identify underlying biases and provide practical ways to address these concerns. The following subsections describe some of the tools and libraries that offer these capabilities.

Microsoft Fairlearn

Microsoft Fairlearn [24] is an open source Python library for assessing and improving the fairness of ML models. It provides a wide range of algorithms to compare and mitigate bias in predictive models, as well as visualization tools to explore and analyze model performance. Fairlearn is designed to help data scientists and developers build more equitable and inclusive ML models by giving them the tools to measure and address unfairness in their models. The library is part of Microsoft’s RAI efforts and is freely available for anyone to use.

Figure 5.4: The Fairlearn toolkit
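To give a flavor of the workflow Fairlearn supports, here is a minimal sketch using synthetic data; all features, group labels, and variable names are made up for illustration, and the Fairlearn documentation remains the authoritative reference. The sketch trains a baseline classifier, compares selection rates across groups with MetricFrame, and then applies the ExponentiatedGradient reduction with a DemographicParity constraint to reduce the disparity.

```python
# A minimal sketch of a Fairlearn-style fairness assessment and mitigation.
# The data, features, and group labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
sensitive = rng.choice(["group_a", "group_b"], size=500)   # hypothetical protected attribute
proxy = (sensitive == "group_a").astype(float)             # a feature correlated with the group
X = np.column_stack([rng.normal(size=(500, 3)), proxy + rng.normal(scale=0.3, size=500)])
y = ((X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=500)) > 0.5).astype(int)

# Baseline model and per-group assessment
clf = LogisticRegression().fit(X, y)
y_pred = clf.predict(X)
mf = MetricFrame(metrics={"selection_rate": selection_rate},
                 y_true=y, y_pred=y_pred, sensitive_features=sensitive)
print(mf.by_group)                                # selection rate for each group
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))

# Mitigation: constrain training toward demographic parity
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sensitive)
y_mitigated = mitigator.predict(X)
print(demographic_parity_difference(y, y_mitigated, sensitive_features=sensitive))
```

ExponentiatedGradient is only one of the mitigation strategies Fairlearn provides; post-processing approaches such as ThresholdOptimizer are alternatives when retraining the model is not an option.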

In Chapters 8 and 9, we...

Auditing checklists and measures

Along with compliance standards and code reviews, quantifying model bias is a critical step in building accountable ML systems. In this section, we provide a list of some of these checklists and measures.
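As a concrete example of one such quantitative measure, the following minimal sketch computes the disparate impact ratio, often checked against the "four-fifths rule" as a rough screening threshold; the predictions and group labels are made-up placeholders, not drawn from a real audit.

```python
# A minimal sketch of one quantitative bias measure: the disparate impact ratio.
# The model decisions and group labels below are hypothetical placeholders.
import numpy as np

y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])             # 1 = favorable decision
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()    # favorable-outcome rate for group a (0.8)
rate_b = y_pred[group == "b"].mean()    # favorable-outcome rate for group b (0.4)

disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Disparate impact ratio: {disparate_impact:.2f}")       # 0.50
# Ratios below roughly 0.8 are commonly flagged for further review.
```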

Datasheets for datasets

Datasheets for datasets is an initiative aimed at improving transparency, accountability, and understanding of the datasets used in the development and training of ML models. Introduced by Timnit Gebru, an AI ethicist and former co-leader of Google’s ethical AI team, together with Kate Crawford and others, this initiative proposes a standard way to report on datasets, which its creators refer to as datasheets [33]. Their rationale was inspired by the electronics industry, where datasheets provide important information about the components being used:

“In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating...
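To make the idea concrete, here is an illustrative, abridged sketch of the question areas a datasheet typically covers, expressed as a simple Python structure; the wording paraphrases the proposal rather than quoting the official template.

```python
# An illustrative, paraphrased sketch of the sections a datasheet for a dataset covers.
datasheet_questions = {
    "motivation": "For what purpose was the dataset created, and by whom?",
    "composition": "What do the instances represent, and are sensitive attributes present?",
    "collection_process": "How was the data acquired and sampled, and was consent obtained?",
    "preprocessing": "What cleaning, labeling, or filtering was applied to the raw data?",
    "uses": "Which tasks is the dataset suited or unsuited for?",
    "distribution": "How is the dataset shared, and under what license?",
    "maintenance": "Who maintains the dataset, and how are errata and updates handled?",
}

for section, question in datasheet_questions.items():
    print(f"{section}: {question}")
```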

Summary

In this chapter, we reviewed the current standards landscape. You saw how different countries, professional bodies, and organizations implement best practices, governance, regulations, and policies around automated decision management systems. We provided an overview of national policies and regulations, attempts by professional bodies to establish industry standards, the contemporary landscape of technology toolkits, and auditing checklists and metrics.

In many ways, the sheer number of different standards, regulatory frameworks, and best-practice guides is daunting. This is perhaps particularly true for enterprise leaders who are not technical specialists in data science and ML but who are seeking to lead their businesses to the benefits that AI-driven service improvement can bring. This is one of the main reasons we wrote this book! We seek to demystify these assurance processes. At the core of all the frameworks and starter kits outlined previously is...

References and further reading

  1. NIST Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf.
  2. H.R.2231 – Algorithmic Accountability Act of 2019: https://www.congress.gov/bill/116th-congress/house-bill/2231.
  3. CCPA: https://oag.ca.gov/privacy/ccpa.
  4. How Much Does Racial Bias Affect Mortgage Lending? Evidence from Human and Algorithmic Credit Decisions: https://papers.ssrn.com/sol3/papers.cfm?abstractid=3887663.
  5. Consumer Financial Protection Circular 2022–03: https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms/.
  6. SB 1392 Consumer Data Protection Act; establishes a framework for controlling and processing personal data: https://lis.virginia.gov/cgi-bin/legp604.exe?211+sum+SB1392.
  7. Ethical AI Toolkit: https://ethicstoolkit.ai/.
  8. ...