Achieving Fairness for Free in Artificial Intelligence Systems via Bayesian Optimization
Department
Information Systems
Program
Information Systems
Rights
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Abstract
As artificial intelligence (AI) has become increasingly embedded in high-stakes decision-making systems, from criminal justice to hiring and healthcare, ensuring fairness is critical. However, prevailing assumptions suggest that improving fairness inevitably compromises predictive performance. This thesis challenges that assumption by exploring whether fairness can be achieved “for free”, that is, without sacrificing accuracy, through the use of Bayesian Optimization (BO). We frame fairness-aware machine learning as a black-box constrained optimization problem, where the goal is to maximize fairness while satisfying a predefined accuracy threshold. We incorporate fairness metrics such as the p%-rule and use BO to efficiently search over hyperparameters, treating both the model architecture and the fairness-accuracy trade-off parameter (λ) as part of the search space. The core of our method involves using Gaussian Processes as a surrogate model, along with acquisition functions like Expected Improvement (EI) and Upper Confidence Bound (UCB) to balance exploration and exploitation. Extensive experiments on benchmark datasets (e.g., COMPAS, Adult Census, and Bank) demonstrate that BO outperforms traditional grid search in both runtime efficiency and the resulting fairness-accuracy trade-off. In many cases, our BO-based method finds models that improve fairness without any measurable drop in accuracy, exhibiting the “fairness for free” phenomenon. This work contributes a practical and scalable approach to tuning fairness-sensitive models in black-box settings and lays the groundwork for further deployment of fair AI in real-world applications. By reframing fairness optimization as a sample-efficient search problem, we help bridge the gap between ethical AI principles and technical feasibility.
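The constrained search the abstract describes can be sketched in miniature. The sketch below is an assumption-laden toy, not the thesis's implementation: the `fairness` and `accuracy` functions are invented stand-ins for the black-box p%-rule and accuracy measurements, the search space is reduced to the single trade-off parameter λ on [0, 1], and the accuracy constraint is handled simply by tracking the best feasible observation rather than via a full constrained-EI acquisition. It shows the GP-surrogate-plus-EI loop in its simplest form.

```python
import math
import numpy as np

# Hypothetical black-box objectives (assumptions, stand-ins for training a
# real fairness-regularized model and measuring p%-rule and accuracy).
def fairness(lam):          # toy: fairness improves with lambda
    return 1.0 - math.exp(-3.0 * lam)

def accuracy(lam):          # toy: accuracy degrades slowly with lambda
    return 0.90 - 0.05 * lam ** 2

ACC_MIN = 0.88              # predefined accuracy threshold (the constraint)

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and std at query points Xs, given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)                          # (n, m)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)       # RBF kernel has unit diagonal
    return mu, np.sqrt(np.maximum(var, 1e-10))

def expected_improvement(mu, sigma, best):
    """EI for maximization: favors high mean (exploit) or high std (explore)."""
    sigma = np.maximum(sigma, 1e-6)
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

# Constrained BO loop over lambda in [0, 1].
grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 0.5, 1.0])                            # initial design
y = np.array([fairness(x) for x in X])                   # observed fairness
feas = np.array([accuracy(x) >= ACC_MIN for x in X])     # constraint checks

for _ in range(10):
    best = y[feas].max() if feas.any() else y.min()
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, best))]
    X = np.append(X, x_next)
    y = np.append(y, fairness(x_next))
    feas = np.append(feas, accuracy(x_next) >= ACC_MIN)

# Best lambda among evaluations that satisfy the accuracy threshold.
best_lambda = X[feas][np.argmax(y[feas])]
```

Each loop iteration spends one expensive "model training" on the single λ the surrogate considers most promising, which is the sample-efficiency argument for BO over grid search: a grid of 201 points would cost 201 evaluations, while the loop above uses 13.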
