
Derek Leben (U. of Pittsburgh at Johnstown)

When:
March 12, 2020 @ 16:00 – 18:00
Where:
Agora-Café, Mila / IVADO
6650 St-Urbain (Ground Floor)

Presentation by Derek Leben.

Discussant: Martin Gibert (CRÉ-IVADO). Organization: Dominic Martin (UQÀM).

Poster (.pdf)

Moral Principles for Evaluating Fairness Metrics in AI

Machine learning (ML) algorithms are increasingly being used in both the public and private sectors to make decisions about jobs, loans, college admissions, and prison sentences. The appeal of ML algorithms is clear: they can vastly increase the efficiency, accuracy, and consistency of decisions. However, because the training data for ML algorithms contains discrepancies caused by historical injustices, these algorithms often exhibit biases against historically oppressed groups. The field of "Fairness, Accountability, and Transparency in Machine Learning" (FAT ML) has developed several metrics for determining when such bias exists, but satisfying all of these fairness metrics simultaneously is mathematically impossible, and some of them require large sacrifices to the accuracy of ML algorithms. I propose that we can make progress on evaluating fairness metrics by drawing on traditional principles from moral and political philosophy, such as Egalitarianism, Libertarianism, Desert-Based Approaches, Intention-Based Approaches, and Consequentialism. These principles are largely designed around the problem of determining a fair distribution of resources. My goal is to describe in detail how each of these approaches will favor a particular set of fairness metrics for evaluating ML algorithms.
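As a minimal illustration (not drawn from the talk itself), here is a sketch of two fairness metrics of the kind the FAT ML literature studies: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups). The data, group labels, and function names are invented for the example; it assumes binary predictions and exactly two groups, each with at least one positive label.

```python
# Illustrative sketch of two FAT ML fairness metrics on toy data.
# All names and data here are hypothetical, for exposition only.

def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between the two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    a, b = sorted(rates)  # deterministic group order
    return rates[a] - rates[b]

def equal_opportunity_diff(preds, labels, groups):
    """Difference in true-positive rates (among labels == 1) between groups.

    Assumes each group contains at least one positive label.
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups)
               if grp == g and labels[i] == 1]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    a, b = sorted(rates)
    return rates[a] - rates[b]

# Toy data: model predictions, true outcomes, and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 1, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(preds, groups))        # 0.5
print(equal_opportunity_diff(preds, labels, groups))  # ~0.167
```

On this toy data the two metrics disagree about the size of the disparity, which hints at the broader point in the abstract: the metrics formalize different intuitions about fairness and cannot all be satisfied at once except in degenerate cases.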

Derek Leben is Department Chair and Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. His research focuses on the intersection of ethics, cognitive science, and emerging technologies. In his recent book, Ethics for Robots: How to Design a Moral Algorithm (Routledge, 2018), he demonstrates how traditional moral principles can be formalized and implemented in autonomous systems. He is currently on sabbatical as a visiting professor at Carnegie Mellon University, working on extending this approach from the domain of harm in autonomous systems to the domain of fairness in machine learning algorithms.