Clinical notes are becoming an increasingly important data source for machine learning (ML) applications in healthcare. Prior research has shown that deploying ML models can perpetuate existing biases against racial minorities, as bias can be implicitly embedded in data. In this study, we investigate the level of implicit race information available to ML models and human experts and the implications of model-detectable differences in clinical notes. Our work makes three key contributions. First, we find that models can identify patient self-reported race from clinical notes even when the notes are stripped of explicit indicators of race. Second, we determine that human experts are not able to accurately predict patient race from the same redacted clinical notes. Finally, we demonstrate the potential harm of this implicit information in a simulation study, and show that models trained on these race-redacted clinical notes can still perpetuate existing biases in clinical treatment decisions.
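As a rough, hypothetical illustration of the first contribution, the sketch below trains a simple bag-of-words classifier to predict self-reported race from race-redacted note text. The note strings, labels, and model choice (TF-IDF features with logistic regression via scikit-learn) are illustrative assumptions only, not the data or models used in the study.

```python
# Minimal sketch (not the authors' code): probe whether race-redacted note
# text still carries signal about patient self-reported race.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder corpus; in practice this would be thousands of
# de-identified, race-redacted notes paired with self-reported race drawn
# from structured EHR fields.
notes = [
    "pt admitted with chest pain, hx of htn, started on aspirin",
    "follow up visit, wound healing well, no acute distress",
    "presents with sob, copd exacerbation, nebulizer given",
    "routine prenatal visit, fetal heart tones normal",
    "er visit for back pain, discharged with ibuprofen",
    "diabetic foot ulcer, referred to podiatry",
    "annual physical, labs within normal limits",
    "admitted for chf exacerbation, diuresed overnight",
]
race_labels = [0, 1, 0, 1, 0, 1, 0, 1]  # illustrative binary labels

X_train, X_test, y_train, y_test = train_test_split(
    notes, race_labels, test_size=0.25, stratify=race_labels, random_state=0
)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Held-out AUC well above 0.5 would indicate that implicit race information
# survives redaction and is available to downstream models.
probs = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```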