International Classification of Diseases (ICD) coding plays an important role in systematically classifying morbidity and mortality data. In this study, we propose a hierarchical label-wise attention Transformer model (HiLAT) for the explainable prediction of ICD codes from clinical documents. HiLAT first fine-tunes a pretrained Transformer model to represent the tokens of clinical documents. We subsequently employ a two-level hierarchical label-wise attention mechanism that creates label-specific document representations. These representations are in turn used by a feed-forward neural network to predict whether a specific ICD code is assigned to the input clinical document of interest. We evaluate HiLAT using hospital discharge summaries and their corresponding ICD-9 codes from the MIMIC-III database. To investigate the performance of different types of Transformer models, we develop ClinicalplusXLNet, which conducts continual pretraining from XLNet-Base using all the MIMIC-III clinical notes. The experimental results show that the F1 scores of HiLAT+ClinicalplusXLNet outperform those of the previous state-of-the-art models for the top-50 most frequent ICD-9 codes from MIMIC-III. Visualisations of attention weights present a potential explainability tool for checking the face validity of ICD code predictions.
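To make the label-wise attention idea concrete, the sketch below shows a minimal single-level version in PyTorch: each ICD code has a learnable query that attends over the encoder's token representations, yielding a label-specific document representation that feeds a per-label classifier. This is an illustrative sketch under assumed shapes and names (e.g. `LabelWiseAttention`, `hidden_size`, `num_labels`), not the authors' implementation; HiLAT applies the mechanism at two hierarchical levels.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Minimal, illustrative single-level label-wise attention.
    Names and hyperparameters here are hypothetical, not taken from the paper."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        # One learnable query vector per ICD code.
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_size))
        # One binary classifier per label, applied to its label-specific representation.
        self.classifiers = nn.Parameter(torch.randn(num_labels, hidden_size))
        self.bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) token representations from the Transformer.
        # Attention scores of every label query against every token: (batch, num_labels, seq_len).
        scores = torch.einsum("ld,bsd->bls", self.label_queries, hidden_states)
        weights = torch.softmax(scores, dim=-1)
        # Label-specific document representations: (batch, num_labels, hidden).
        label_docs = torch.einsum("bls,bsd->bld", weights, hidden_states)
        # Per-label logits for multi-label (sigmoid) classification: (batch, num_labels).
        logits = torch.einsum("bld,ld->bl", label_docs, self.classifiers) + self.bias
        return logits

# Usage with dummy encoder outputs (batch of 2, 128 tokens, hidden size 768, top-50 codes).
attn = LabelWiseAttention(hidden_size=768, num_labels=50)
dummy_hidden = torch.randn(2, 128, 768)
probs = torch.sigmoid(attn(dummy_hidden))
print(probs.shape)  # torch.Size([2, 50])
```

The attention weights produced for each label are also what the visualisations mentioned above would draw on, since they indicate which tokens contributed most to a given code's prediction.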