Beyond Incompatibility: Trade-offs between Mutually Exclusive Fairness Criteria in Machine Learning and Law
Trustworthy AI is becoming ever more important in both the machine learning and legal domains. One important consequence is that decision makers must seek to guarantee a ‘fair’, i.e., non-discriminatory, algorithmic decision procedure. However, there are several competing notions of algorithmic fairness that have been shown to be mutually incompatible under realistic factual assumptions. This concerns, for example, the widely used fairness measures of ‘calibration within groups’ and ‘balance for the positive/negative class’. Indeed, the COMPAS algorithm, which predicts the recidivism risk of criminal offenders, exhibits racial bias according to the balance metrics, but not according to calibration.
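To make the conflict concrete, here is a minimal sketch (with made-up toy data, not the COMPAS dataset) of the balance criteria: a score satisfies balance for the positive (negative) class when the mean score among true positives (negatives) is equal across groups. The group setup and score model below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy risk scores and true outcomes for two groups (illustrative only).
# Group A: outcomes occur with probability equal to the score (calibrated).
scores_a = rng.uniform(0, 1, 1000)
labels_a = rng.uniform(0, 1, 1000) < scores_a
# Group B: scores systematically overstate the true risk.
scores_b = rng.uniform(0, 1, 1000)
labels_b = rng.uniform(0, 1, 1000) < 0.8 * scores_b

def balance_positive(scores, labels):
    """Mean score among true positives; equality across groups
    is 'balance for the positive class'."""
    return scores[labels].mean()

def balance_negative(scores, labels):
    """Mean score among true negatives; equality across groups
    is 'balance for the negative class'."""
    return scores[~labels].mean()

print("positive-class balance:",
      balance_positive(scores_a, labels_a),
      balance_positive(scores_b, labels_b))
print("negative-class balance:",
      balance_negative(scores_a, labels_a),
      balance_negative(scores_b, labels_b))
```

Even though group B's scores are individually plausible, the gap between the two groups' mean scores within each outcome class signals a balance violation of the kind reported for COMPAS.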
In this paper, we present a novel algorithm (FAIM) for continuously interpolating between these three fairness criteria. Thus, an initially unfair prediction can be adjusted to at least partially satisfy a desired, weighted combination of the respective fairness conditions. The algorithm relies on methods from the mathematical theory of optimal transport.
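FAIM itself is not reproduced here, but the underlying optimal-transport idea can be illustrated in one dimension: for 1D distributions, sorting yields the optimal transport map, and interpolating matched quantiles moves a score distribution continuously toward a target. The function below is a hypothetical sketch under that simplification, not the paper's algorithm.

```python
import numpy as np

def ot_interpolate(scores_src, scores_tgt, t):
    """Displacement interpolation between two 1D empirical score
    distributions. Sorting gives the optimal transport map in 1D;
    mixing matched quantiles with weight t moves the source
    distribution part-way toward the target (t=0: unchanged,
    t=1: fully transported)."""
    src = np.sort(scores_src)
    # Evaluate the target's quantile function on a common grid.
    q = np.linspace(0, 1, len(src))
    tgt_q = np.quantile(scores_tgt, q)
    return (1 - t) * src + t * tgt_q

# Example: nudge one group's scores halfway toward another group's
# distribution (synthetic data, clipped to the [0, 1] score range).
a = np.random.default_rng(1).normal(0.4, 0.1, 500).clip(0, 1)
b = np.random.default_rng(2).normal(0.6, 0.1, 500).clip(0, 1)
mid = ot_interpolate(a, b, 0.5)
```

Choosing the target distribution (e.g., a weighted barycenter of the groups' calibrated and balanced versions) is where the weighting between the fairness criteria would enter.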
We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS dataset, and a new, real-world dataset from the e-commerce sector. Finally, we discuss to what extent FAIM can be harnessed to comply with conflicting legal obligations. The analysis suggests that it may operationalize duties not only in traditional legal fields, such as credit scoring and criminal justice proceedings, but also under the latest EU regulation of AI, such as the recently enacted Digital Markets Act.
The talk will take place in person on Friday, December 16, at 3 pm @ CSH Salon
If you would like to attend, please email firstname.lastname@example.org.