
"A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare"

The Bioethics Journal Club is resuming its activities for 2024!

Its next discussion will take place from 12:00 to 1:00 p.m. (Eastern Time) on Monday, March 25, in room 3014-5 of the École de santé publique de l'Université de Montréal (ESPUM) (7101, avenue du Parc, 3rd floor, Montréal, Québec H3N 1X9). You can also take part remotely via Zoom.

The journal club's discussions are open to everyone interested in bioethics and in the topic of the featured target article. At our next discussion, we will examine the article by Brian D. Earp et al., "A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable".

Abstract

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient’s (former) autonomy since it draws on the ‘wrong’ kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently ‘fine-tuned’ on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient’s preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient’s own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.