Summary
Protecting sensitive data is a multi-faceted problem. There are techniques to mitigate unfairness, protect privacy, and work ethically and responsibly with AI, but the balance between prediction accuracy and data protection is delicate. Add the complexity of choosing the right combination of techniques for your data and algorithms, and the task can seem daunting.
In this chapter, we learned to identify different types of sensitive data and common techniques to remove or mask them. However, it is not always possible to eliminate them completely, as they are often useful for the model training process. In such cases, several libraries can help. We can use the SmartNoise SDK to introduce noise into our data and protect privacy, work with the Fairlearn SDK to mitigate unfairness, and use the Responsible AI dashboard together with explainers to interpret our models. We ended this chapter by introducing the concept of federated learning (FL) and how to apply it using Azure Machine Learning.
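The core idea behind the noise that the SmartNoise SDK introduces is differential privacy, most simply realized by the Laplace mechanism. The sketch below is a hand-rolled illustration of that mechanism, not the SmartNoise API; the function name and parameters are made up for this example:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise scaled to sensitivity/epsilon.

    Smaller epsilon means stronger privacy but noisier answers, which is
    exactly the accuracy-versus-protection trade-off discussed above.
    """
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse CDF of a uniform draw
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Privatized count query: the exact count of matching records is 120
noisy_count = laplace_mechanism(120, sensitivity=1, epsilon=0.5,
                                rng=random.Random(42))
```

Because a count query changes by at most 1 when a single individual is added or removed, its sensitivity is 1, and the noise scale is simply 1/epsilon.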
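One of the fairness criteria Fairlearn can assess is demographic parity: whether each sensitive group receives positive predictions at a similar rate. The following is a minimal pure-Python sketch of that metric for intuition, not Fairlearn's own implementation:

```python
def selection_rates(y_pred, sensitive_features):
    """Fraction of positive (1) predictions per sensitive group."""
    totals, positives = {}, {}
    for pred, group in zip(y_pred, sensitive_features):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, sensitive_features):
    """Gap between the highest and lowest group selection rate (0 = parity)."""
    rates = selection_rates(y_pred, sensitive_features).values()
    return max(rates) - min(rates)

# Group A is selected at 0.75, group B at 0.25, so the gap is 0.5
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(y_pred, groups)
```

A gap near 0 indicates similar treatment across groups; a large gap flags a disparity that mitigation techniques then try to reduce.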
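The intuition behind FL can be shown with the classic federated averaging (FedAvg) step: each client trains locally, and only model parameters, never raw data, are aggregated by the server. This is a simplified sketch under the assumption of a flat parameter list, not production FL code:

```python
def fed_avg(client_weights, client_sizes):
    """Average model parameters across clients, weighted by data size.

    client_weights: one flat list of parameters per client.
    client_sizes:   number of training samples each client holds.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with 100 and 300 samples; the larger client counts 3x as much
global_model = fed_avg([[1.0, 2.0], [3.0, 6.0]], [100, 300])  # -> [2.5, 5.0]
```

In a real deployment the server repeats this round many times, broadcasting the updated global model back to the clients between rounds.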