In the current development and deployment of many artificial intelligence (AI) systems in healthcare, algorithmic fairness is a challenging problem for delivering equitable care. Recent evaluations of AI models stratified across racial subpopulations have revealed stark inequalities in how patients are diagnosed, treated, and billed for healthcare costs. In this perspective article, we summarize the intersectional field of fairness in machine learning in the context of current issues in healthcare, and outline how algorithmic biases (e.g., image acquisition, genetic variation, and intra-observer labeling variability) arise in current clinical workflows and lead to healthcare disparities. Lastly, we review emerging strategies for mitigating bias via decentralized learning, disentanglement, and model explainability.