In 1936, Alan Turing envisioned a “human calculator… operating on a child’s notebook,” his Logical Computing Machine: reading and writing 0s and 1s, moving left and right. Together with Church’s lambda calculus (1932), these “term rewriting systems” still form the logical foundation of computer science and, for far too long, a paradigm for human cognition and AI. Turing introduced them as a possible “imitation” of a human brain. The “connectionist turn,” on the other hand, is based on a “model” of the brain, beginning with Hebb and Rosenblatt in the 1950s, and has paved the way for contemporary Deep Learning. In both cases, an input-output machine is supposed to simulate an animal brain, without three-dimensional space (or only an imitation of it through a cascade of two-dimensional layers), and without the biological materiality of the brain in its context (an animal skull, in a body, within an ecosystem). Some limiting (mathematical) results of Deep Learning will be discussed, as well as the differences between unpredictability, dynamics, and creativity, the latter understood as an instance of “anti-entropy production,” a concept proposed in 2009. In mental processes, the production of anti-entropy can be understood as “the invention of meaningful configurations.”
G. Longo, Le cauchemar de Prométhée. Les sciences et leurs limites. Preface by Jean Lassègue, postface by Alain Supiot. PUF, Paris, 2023.
G. Longo, Information at the Threshold of Interpretation: Science as Human Construction of Sense. In Bertolaso, M., Sterpetti, F. (Eds.), A Critical Reflection on Automated Science – Will Science Remain Human?, pp. 67-100, Springer, Dordrecht, 2019.
C. Calude, G. Longo, The Deluge of Spurious Correlations in Big Data. Foundations of Science, pp. 1-18, March 2016.
(downloadable at: https://www.di.ens.fr/users/