
Fairness Assessment and Bias Mitigation with Fairlearn and the Responsible AI Toolbox

“Research on bias, fairness, transparency, and the myriad dimensions of safety now forms a substantial portion of all of the work presented at major AI and machine-learning conferences.”

– Aileen Nielsen, Practical Fairness: Achieving Fair and Secure Data Models

“If and when computer programs attain superhuman intelligence and unprecedented power, should we begin valuing these programs more than we value humans? ... Do humans have some magical spark, in addition to higher intelligence and greater power, which distinguishes them from pigs, chickens, chimpanzees, and computer programs alike? If yes, where did that spark come from, and why are we certain that an AI could never acquire it? If there is no such spark, would there be any reason to continue assigning special value to human life even after computers surpass humans in intelligence and power?”

– Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow

Fairness metrics

Fairness metrics are critical tools for ensuring that machine learning models are fair and unbiased. They allow classification models to be evaluated and provide insight into whether certain groups are being unfairly favored or discriminated against. Demographic parity and equalized odds are two of the most widely used fairness metrics, each taking a different approach to measuring fairness. By using these metrics, organizations can better understand how their models perform across groups and take steps to address any biases that exist.
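Stated formally (using notation we introduce here, not taken from the text: Ŷ for the model's prediction, Y for the true label, and A for the sensitive attribute), the two criteria are:

```latex
% Demographic parity: equal positive-prediction rates across groups.
P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \quad \text{for all groups } a, b

% Equalized odds: equal true- and false-positive rates across groups.
P(\hat{Y}=1 \mid Y=y,\, A=a) = P(\hat{Y}=1 \mid Y=y,\, A=b) \quad \text{for } y \in \{0, 1\}
```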

Demographic parity

Demographic parity is a fairness metric that compares the predictions made for different groups while ignoring the true values. It is useful when the input data is known to contain biases, since comparing predictions against biased ground truth would simply reproduce those biases. However, it is important to note that because demographic parity uses only the predicted values, it discards all information carried by the true values. It also uses...
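A minimal sketch of computing both metrics with Fairlearn; the labels, predictions, and group memberships below are illustrative placeholders:

```python
import numpy as np
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    equalized_odds_difference,
    selection_rate,
)

# Placeholder data: true labels, model predictions, and a sensitive
# feature (e.g., a demographic group) for each example.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
sensitive = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Selection rate per group: demographic parity compares only these
# positive-prediction rates; y_true is ignored by the criterion itself.
mf = MetricFrame(
    metrics=selection_rate,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)  # selection rate for groups "a" and "b"

# Gap in selection rates between groups (0.0 means parity).
print(demographic_parity_difference(
    y_true, y_pred, sensitive_features=sensitive))

# Equalized odds, by contrast, does use y_true: it compares true- and
# false-positive rates across groups.
print(equalized_odds_difference(
    y_true, y_pred, sensitive_features=sensitive))
```

Both difference functions return 0.0 under perfect parity; the further the value is from zero, the larger the disparity between groups.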

Bias and disparity mitigation with Fairlearn

Fairlearn provides several ways to perform bias and disparity mitigation for real-world problems:

  • Post-processing methods: These adjust the predictions made by a machine learning model after it has been trained, to reduce bias and disparity. An example is the reject option classifier, which defines an uncertainty band around the decision threshold for the prediction scores: predictions for sensitive groups that fall inside this band are rejected and reassigned a default label.
  • Pre-processing methods: These transform the data before the machine learning model is trained, to reduce bias and disparity. An example is CorrelationRemover, which adjusts the non-sensitive features to remove their correlation with the sensitive features while retaining as much information as possible.
  • In-processing methods: These modify the training process of the machine learning model itself to reduce bias and disparity... (One method from each of these three families is sketched in the example following this list.)
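A compact sketch showing one mitigation technique from each family, using Fairlearn's CorrelationRemover (pre-processing), ExponentiatedGradient (in-processing), and ThresholdOptimizer (Fairlearn's built-in post-processor, used here as the concrete post-processing example). The dataset and column names are illustrative placeholders:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.preprocessing import CorrelationRemover
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

# Placeholder data: one sensitive feature, two non-sensitive features,
# and a binary target.
X = pd.DataFrame({
    "sensitive": [0, 0, 0, 1, 1, 1, 0, 1],
    "feat_1":    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
    "feat_2":    [0.5, 1.5, 1.0, 2.5, 3.0, 2.0, 3.5, 4.0],
})
y = pd.Series([0, 0, 1, 1, 1, 0, 1, 1])

# Pre-processing: project the correlation with the sensitive column out
# of the remaining features (the sensitive column itself is dropped).
remover = CorrelationRemover(sensitive_feature_ids=["sensitive"])
X_filtered = remover.fit_transform(X)

# In-processing: wrap the estimator in a reductions-based search that
# enforces a demographic parity constraint during training.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=X["sensitive"])
pred_in = mitigator.predict(X)

# Post-processing: adjust per-group decision thresholds on an
# already-trained model.
post = ThresholdOptimizer(
    estimator=LogisticRegression().fit(X, y),
    constraints="demographic_parity",
    prefit=True,
    predict_method="predict_proba",
)
post.fit(X, y, sensitive_features=X["sensitive"])
pred_post = post.predict(X, sensitive_features=X["sensitive"])
```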

The Responsible AI Toolbox

The Responsible AI Toolbox provides a range of tools and user interfaces that help developers and stakeholders of AI systems better understand and monitor those systems. Responsible AI refers to a method of creating, evaluating, and using AI systems in a safe, ethical, and trustworthy way, making informed decisions and taking responsible actions.

The toolbox includes four visualization widgets to analyze and make decisions about AI models:

  • The Responsible AI dashboard brings together various tools from the toolbox to provide a comprehensive view of responsible AI assessment and debugging. With this dashboard, you can identify model errors, understand why they happen, and take steps to address them. Additionally, its causal decision-making capabilities offer valuable insights to stakeholders and customers (a minimal sketch of launching this dashboard follows the list).
  • The Error Analysis dashboard identifies model errors and discovers cohorts of data for which a model performs poorly.
  • ...
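As an illustration, the Responsible AI dashboard is typically launched from Python with the raiwidgets and responsibleai packages. This is a minimal sketch; the toy dataset, model, and column names are placeholders:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from raiwidgets import ResponsibleAIDashboard
from responsibleai import RAIInsights

# Toy data and model; any scikit-learn-style classifier works.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
df = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(4)])
df["target"] = y
train_df, test_df = df.iloc[:150], df.iloc[150:]

model = DecisionTreeClassifier(random_state=0).fit(
    train_df.drop(columns="target"), train_df["target"])

# Bundle the model and data, then opt in to the dashboard components.
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="target",
    task_type="classification",
)
rai_insights.explainer.add()       # model interpretability
rai_insights.error_analysis.add()  # error analysis

# Run the analyses, then launch the single-pane dashboard widget.
rai_insights.compute()
ResponsibleAIDashboard(rai_insights)
```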

Summary

To summarize, the integration of Fairlearn and the Responsible AI Toolbox provides a comprehensive solution for responsible AI development and deployment, both within Azure and in open source development. The dashboard brings together several mature responsible AI tools and libraries, providing a single pane of glass for conducting a holistic responsible AI assessment, debugging models, and making informed business decisions. With the Error Analysis dashboard, it is possible to identify model errors and discover cohorts of data for which the model underperforms.

The Fairness Assessment dashboard helps identify groups of people that may be disproportionately negatively impacted by an AI system. The Model Interpretability dashboard, powered by InterpretML, explains black-box models and helps users understand their global behavior and the reasons behind individual predictions.

Counterfactual Analysis and Causal Analysis provide actionable insights for data...

