Minimax Fairness in Machine Learning


Date

2022


Abstract

The notion of fairness in machine learning has gained significant attention in recent decades, in part due to the large number of decision-making models deployed in real-world applications, some of which have exhibited unwanted behavior. In this work, we analyze fairness in machine learning from a multi-objective optimization perspective, where the goal is to learn a model that performs well across different groups or demographics. In particular, we analyze how to obtain models that are efficient in the Pareto sense while providing the best possible performance for the worst-off group (i.e., minimax solutions). We study how to achieve minimax Pareto-fair solutions both when sensitive group memberships are available at training time and when the demographics are completely unknown. We provide experimental results showing how the discussed techniques for achieving minimax Pareto-fair solutions perform on classification tasks, and how they can be adapted to other applications such as backward compatibility and federated learning. Finally, we analyze the problem of achieving minimax solutions asymptotically when optimizing models that can perfectly fit their training data, such as deep neural networks trained with stochastic gradient descent.
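To illustrate the minimax idea described in the abstract, the following is a minimal, hypothetical sketch (not the dissertation's actual algorithm) of minimax group-fair training with known group labels: it alternates between fitting a model under a weighted loss and shifting weight toward the worst-performing group, in the style of an exponentiated-gradient game between the learner and the group weights. The synthetic data, the logistic model, and all function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic groups whose labels depend on different features,
# so a single linear model must trade off performance between them.
n = 200
X0 = rng.normal(size=(n, 2)); y0 = (X0[:, 0] > 0).astype(float)
X1 = rng.normal(size=(n, 2)); y1 = (X1[:, 1] > 0).astype(float)
groups = [(X0, y0), (X1, y1)]

def group_risks(w):
    """Per-group logistic loss of the linear model w."""
    risks = []
    eps = 1e-9
    for X, y in groups:
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        risks.append(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
    return np.array(risks)

def fit_weighted(mu, steps=500, lr=0.5):
    """Gradient descent on the mu-weighted logistic loss."""
    w = np.zeros(2)
    for _ in range(steps):
        grad = np.zeros(2)
        for m, (X, y) in zip(mu, groups):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            grad += m * (X.T @ (p - y)) / len(y)
        w -= lr * grad
    return w

# Outer loop: exponentiated-gradient update on the group weights mu,
# up-weighting whichever group currently has the higher risk.
mu = np.full(len(groups), 1.0 / len(groups))
eta = 1.0
for _ in range(20):
    w = fit_weighted(mu)
    r = group_risks(w)
    mu = mu * np.exp(eta * r)
    mu /= mu.sum()

print("final group risks:", group_risks(w))
```

Under this scheme the weight vector `mu` concentrates on the worst-off group, so the inner minimization approximates the minimax objective (minimize the maximum group risk) rather than the average risk over all samples.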

Citation

Martinez Gil, Natalia Lucienne (2022). Minimax Fairness in Machine Learning. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/25303.

Duke's student scholarship is made available to the public under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.