Decision support systems based on clinical notes have the potential to improve patient care by pointing physicians to overlooked risks. Predicting a patient's outcome is an essential part of such systems, and deep neural networks have shown promising results for this task. However, the patterns learned by these networks are largely opaque, and previous work has revealed flaws in the form of reproduced, unintended biases. We therefore introduce an extendable testing framework that evaluates the behavior of clinical outcome models with respect to changes in the input. The framework helps to understand learned patterns and their influence on model decisions. In this work, we apply it to analyze how model behavior changes with the patient characteristics gender, age, and ethnicity. Our evaluation of three current clinical NLP models demonstrates the concrete effects of these characteristics on the models' decisions. The results show that model behavior varies drastically even among models fine-tuned on the same data, and that allegedly best-performing models have not always learned the most medically plausible patterns.
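The kind of behavioral test described above can be illustrated with a minimal sketch: perturb a demographic attribute in a clinical note and measure how the model's predicted outcome changes. The `toy_model` below is a hypothetical stand-in for a real clinical outcome model, and the function names are illustrative, not from the paper's framework.

```python
import re

def perturb_note(note, replacements):
    """Swap demographic mentions in a clinical note (case-insensitive, whole words)."""
    out = note
    for old, new in replacements.items():
        out = re.sub(rf"\b{re.escape(old)}\b", new, out, flags=re.IGNORECASE)
    return out

def behavior_delta(model, note, replacements):
    """Score the original and the perturbed note; return the change in output."""
    return model(perturb_note(note, replacements)) - model(note)

# Hypothetical stand-in for a clinical outcome model: returns a mock
# risk score that (undesirably) depends on the token "female".
def toy_model(note):
    return 0.30 + (0.05 if "female" in note.lower() else 0.0)

note = "Patient is a 67-year-old female admitted with chest pain."
delta = behavior_delta(toy_model, note, {"female": "male"})
print(round(delta, 2))  # prints -0.05
```

A nonzero delta flags the model as sensitive to the perturbed characteristic; aggregating such deltas over a test corpus yields the kind of behavioral profile the framework produces for gender, age, and ethnicity.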