
« AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind »

The article « AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind », by Jocelyn Maclure, published in Minds & Machines, is now available in open access.

Abstract

Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI's explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations that decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the well-being, rights, and opportunities of those affected by the decisions. This legal duty can be derived from the demands of Rawlsian public reason. In the second part of the paper, I try to show that the argument from the limitations of human cognition fails to get AI off the hook of public reason. Against a growing trend in AI ethics, my main argument is that the analogy between human minds and artificial neural networks fails because it suffers from an atomistic bias which makes it blind to the social and institutional dimension of human reasoning processes. I suggest that developing interpretable AI algorithms is not the only possible answer to the explainability problem; social and institutional answers are also available and in many cases more trustworthy than techno-scientific ones.