An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become a key requirement as deployment of this technology grows increasingly widespread. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient-clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to carefully consider the ethical background and implications of their actions, detailed deliberation may not have kept pace with fast-moving technological progress. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the 'Felicific Calculus', developed in the 18th century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.