Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of these challenges in AI-driven medical decision making. (1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. (2) Domain adaptation and transfer learning enable AI models to be trained on and applied across multiple domains, for example, a classification task based on images acquired with different acquisition hardware. (3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized learning, where a single learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant corner-stone and state-of-the-art research in the field, and discusses future perspectives.
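The federated learning process described above can be illustrated with a minimal sketch of federated averaging. This is a hypothetical toy example, not any specific published system: three sites each hold private data for a simple one-parameter linear model, perform a local gradient step, and send only the updated parameter to a server, which aggregates by weighted averaging.

```python
# Minimal sketch of a federated-averaging round (hypothetical example).
# Each site trains y = w * x on its own private data; only the updated
# parameter w, never the raw (x, y) records, leaves the site.

def local_update(w, site_data, lr=0.1):
    """One gradient-descent step on a site's private data (squared loss)."""
    grad = sum(2 * (w * x - y) * x for x, y in site_data) / len(site_data)
    return w - lr * grad  # only this scalar is shared with the server

def federated_round(global_w, sites):
    """Server aggregates site updates by data-size-weighted averaging."""
    updates = [local_update(global_w, data) for data in sites]
    sizes = [len(data) for data in sites]
    return sum(u * n for u, n in zip(updates, sizes)) / sum(sizes)

# Hypothetical private datasets at three hospitals; y is roughly 2 * x.
sites = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(0.5, 1.0), (1.5, 3.1)],
    [(3.0, 6.2)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # converges near 2.0
```

Real deployments add secure aggregation, client sampling, and differential privacy on top of this basic exchange, but the core pattern, local training plus aggregation of parameter updates, is the same.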