It has been rightfully emphasized that the use of AI for clinical decision making could amplify health disparities. A machine learning model may pick up undesirable correlations, for example, between a patient's racial identity and clinical outcome. Such correlations are often present in the (historical) data used for model development. There has been an increase in studies reporting biases in disease detection models. Beyond the scarcity of data from underserved populations, very little is known about how these biases are encoded and how one may reduce or even remove disparate performance. There are concerns that an algorithm may recognize patient characteristics such as biological sex or racial identity, and then directly or indirectly use this information when making predictions. But it remains unclear how we can establish whether such information is actually used. This article aims to shed some light on these issues by exploring methodology that allows intuitive inspection of the inner workings of machine learning models for image-based disease detection. We also investigate how to address performance disparities and find automatic threshold selection to be an effective yet questionable technique, resulting in models with comparable true and false positive rates across subgroups. Our findings call for further research to better understand the underlying causes of performance disparities.
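To make the mentioned threshold-selection step concrete, the sketch below illustrates one common way such automatic, per-subgroup operating points could be chosen on a validation set. The criterion used here (Youden's J statistic) and all function and variable names are illustrative assumptions, not necessarily the exact procedure used in this work.

```python
# Illustrative sketch (assumed approach): select one decision threshold per
# subgroup on validation data, then apply the subgroup's own threshold at
# prediction time so that true and false positive rates become more
# comparable across subgroups.
import numpy as np
from sklearn.metrics import roc_curve


def select_threshold(y_true, y_score):
    """Pick the operating point maximizing TPR - FPR (Youden's J)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]


def subgroup_thresholds(y_true, y_score, groups):
    """Select a separate threshold for each subgroup (e.g., by sex or racial identity)."""
    return {
        g: select_threshold(y_true[groups == g], y_score[groups == g])
        for g in np.unique(groups)
    }


# Example usage (hypothetical arrays):
# thresholds = subgroup_thresholds(y_val, scores_val, groups_val)
# y_pred = scores_test >= np.vectorize(thresholds.get)(groups_test)
```

Note that whether such group-specific thresholds are appropriate in practice is exactly the kind of question the abstract flags: they can equalize error rates without addressing the underlying causes of the disparities.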