We welcome Dominic Martin (UQÀM), who leads one of the CRÉ's flagship themes, the ethics of artificial intelligence. Dominic will give a presentation entitled « Search Ranking to Neural Networking and The Obligation of Account-Giving ».
Algorithmic accountability — that is, holding to account the people or organizations that design or use algorithms — is not a new issue. People started questioning the legitimacy of Google’s search-ranking algorithms, or of credit scores, two or three decades ago. On a broad construal, even the last-century debates on socialist calculation raise issues of algorithmic accountability. If the allocation of the means of production is managed centrally rather than being left to market mechanisms, governments are compelled to engage in extensive calculations to decide where to distribute each good in the economy. One may wonder whether these calculations are feasible, or even fair.
But recent developments in information technologies, the increasing use of quantitative data, the so-called Big Data paradigm and major breakthroughs in artificial intelligence have pushed the issue of algorithmic accountability to a new level. Cathy O’Neil claims that the mathematical models powering the data economy are nothing less than “Weapons of Math Destruction,” a play on words suggesting the potentially destructive power of algorithms. According to Frank Pasquale, we now live in a “Black Box Society,” wherein secret algorithms control money and information.
The aim of this paper is to clarify the extent to which we have an obligation of account-giving regarding the use of algorithms in society. More precisely, what should the people or organizations that design or use algorithms be accountable for, and who among them should be accountable? Obligations of account-giving do not apply in the same way to every situation, individual or group.
In the first part of the paper, I will clarify the notion of algorithmic accountability and show, among other things, that it captures important insights about what people value in their social arrangements. Beyond that, however, the notion also suffers from a form of conceptual expansionism, whereby the scope and meaning of accountability have been extended in a number of directions. Furthermore, while more accountability may generally seem desirable, I will argue that there must be a point where our obligation of account-giving reaches a limit. In the second part of the paper, I will introduce three of these limits, namely: the epistemic limit, the efficiency limit and non-proportional harms.
In the third part of the paper, I will claim that governments — and, to a lesser extent, private organizations — should be held to account for the validity of the models underlying the algorithms they use to automate decision processes. They should be clear about the role of humans in automated decision processes. They should also be held to account for the potential discriminatory impacts of these algorithms, and they should be accountable, although to a lesser extent, for the effects of these algorithms on social equality. In the fourth and last part of the paper, I will show that new technologies in artificial intelligence, and especially neural networks, raise additional issues of accountability, given the complexity of the internal functioning of these systems. I will then suggest a series of measures or tools to help the people who design or use these systems be more accountable.