What Really Matters for Fairness in Machine Learning: Delayed Impact and Other Desiderata

APA

Liu, L. (2022, November 9). What really matters for fairness in machine learning: Delayed impact and other desiderata [Talk]. The Simons Institute for the Theory of Computing. https://old.simons.berkeley.edu/talks/what-really-matters-fairness-machine-learning-delayed-impact-and-other-desiderata

MLA

Liu, Lydia. "What Really Matters for Fairness in Machine Learning: Delayed Impact and Other Desiderata." The Simons Institute for the Theory of Computing, 9 Nov. 2022, https://old.simons.berkeley.edu/talks/what-really-matters-fairness-machine-learning-delayed-impact-and-other-desiderata

BibTeX

@misc{scivideos_22931,
  url       = {https://old.simons.berkeley.edu/talks/what-really-matters-fairness-machine-learning-delayed-impact-and-other-desiderata},
  author    = {Liu, Lydia},
  language  = {en},
  title     = {What Really Matters for Fairness in Machine Learning: Delayed Impact and Other Desiderata},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2022},
  month     = {nov},
  note      = {See \url{https://scivideos.org/simons-institute/22931}}
}
Lydia Liu (Cornell University)
Source repository: Simons Institute

Abstract

From education to lending, consequential decisions in society increasingly rely on data-driven algorithms. Yet the long-term impact of algorithmic decision making remains poorly understood, and serious challenges to ensuring equitable benefits persist, in theory and in practice. While the subject of algorithmic fairness has received much attention, algorithmic fairness criteria have significant limitations as tools for promoting equitable benefits. In this talk, we review various fairness desiderata in machine learning and when they may be in conflict. We then introduce the notion of delayed impact: the welfare impact of decision-making algorithms on populations after decision outcomes are observed, motivated, for example, by the change in average credit scores after a new loan approval algorithm is applied. We demonstrate that several statistical criteria for fair machine learning, if applied as constraints on decision-making, can result in harm to the welfare of a disadvantaged population. We end by considering future directions for fairness in machine learning that evince a holistic and interdisciplinary approach.
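The lending example in the abstract can be illustrated with a toy one-step simulation. The sketch below is hypothetical and not taken from the talk: it assumes a logistic repayment probability, a fixed score gain on repayment and a larger loss on default, and two groups with different (made-up) score distributions. It shows how lowering a group's approval threshold, for instance to equalize selection rates, can approve many applicants with low repayment probability and drive that group's expected average score change negative, which is the delayed-impact phenomenon the abstract describes.

```python
import numpy as np

# Toy one-step lending model illustrating "delayed impact".
# All parameters are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)

def repay_prob(score):
    """Assumed logistic repayment probability as a function of credit score."""
    return 1.0 / (1.0 + np.exp(-(score - 600.0) / 50.0))

# Assumed score dynamics: repayment raises the score, default lowers it more.
GAIN, LOSS = 30.0, -60.0

def expected_delayed_impact(scores, threshold):
    """Mean expected score change across a group under an approval threshold.

    Approved applicants' scores change in expectation according to their
    repayment probability; rejected applicants are unchanged in this model.
    """
    approved = scores >= threshold
    p = repay_prob(scores[approved])
    change = p * GAIN + (1.0 - p) * LOSS  # expected change per approved applicant
    return change.sum() / len(scores)

# Two groups with different hypothetical score distributions.
group_a = rng.normal(650.0, 50.0, 10_000)  # advantaged group
group_b = rng.normal(580.0, 50.0, 10_000)  # disadvantaged group

# A strict threshold approves only likely repayers; a permissive one
# (e.g. chosen to raise the disadvantaged group's selection rate) can
# make that group's expected average score change negative.
for t in (700.0, 600.0, 500.0):
    print(f"threshold {t:.0f}: "
          f"group A impact {expected_delayed_impact(group_a, t):+.2f}, "
          f"group B impact {expected_delayed_impact(group_b, t):+.2f}")
```

With these assumed dynamics, an approved applicant only gains in expectation when the repayment probability exceeds |LOSS| / (GAIN + |LOSS|) = 2/3, so approvals below roughly score 635 are harmful in expectation, which is why a low threshold hurts the lower-scoring group most.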