Mitigating unfairness
Mitigating unfairness in ML models is an essential step toward ensuring that a model does not exhibit bias or discrimination against certain groups of individuals. Even after PII has been removed from a dataset, predictions can still favor some groups over others based on characteristics such as race, gender, age, or religion. If the training data does not adequately represent all groups in the population you aim to serve, bias can creep into the model.
First, we need to learn to identify bias in our models. A straightforward way to do this is to analyze the model's metrics. Suppose you suspect that your loan approval model is more likely to approve applications from people above a certain age. You can start by looking at the metrics for the complete dataset, as follows:
Selection Rate | Accuracy | Recall | ...
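A minimal sketch of how such metrics could be computed, assuming the open-source Fairlearn library together with scikit-learn. The DataFrame columns y_true, y_pred, and age_group are hypothetical stand-ins for your model's labels, its predictions, and the sensitive attribute you want to investigate:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical data: true labels, model predictions, and each
# applicant's age group (the attribute we suspect drives the bias).
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 1, 0, 0, 1, 1, 1],
    "age_group": ["<40", "<40", ">=40", ">=40", "<40", ">=40", "<40", ">=40"],
})

metrics = {
    # Fraction of applicants the model predicts as "approved"
    "selection_rate": selection_rate,
    "accuracy": accuracy_score,
    "recall": recall_score,
}

frame = MetricFrame(
    metrics=metrics,
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["age_group"],
)

# Metrics over the complete dataset, as in the table above.
print(frame.overall)
# The same metrics broken down per age group; a large gap between
# groups here is a first signal that the model may be biased.
print(frame.by_group)
```

Comparing `frame.overall` with `frame.by_group` is what turns a single aggregate number into a fairness check: the overall figures can look healthy while one group's selection rate or recall lags far behind the other's.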