Learning to represent free text is a core task in many clinical machine learning (ML) applications, as clinical text contains observations and plans not otherwise available for inference. State-of-the-art methods use large language models developed with immense computational resources and training data; however, applying these models is challenging because of the highly varying syntax and vocabulary in clinical free text. Structured information such as International Classification of Disease (ICD) codes often succinctly abstracts the most important facts of a clinical encounter and yields good performance, but is frequently not as available as clinical text in real-world scenarios. We propose a \textbf{multi-view learning framework} that jointly learns from codes and text, combining the availability and forward-looking nature of text with the better predictive performance of ICD codes. The learned text embeddings can be used as inputs to predictive algorithms independently of the ICD codes during inference. Our approach uses a Graph Neural Network (GNN) to process ICD codes and a Bi-LSTM to process text. We apply Deep Canonical Correlation Analysis (DCCA) to constrain the two views to learn similar representations of each patient. In experiments using planned surgical procedure text, our model outperforms BERT models fine-tuned on clinical data, and in experiments using diverse text in MIMIC-III, our model is competitive with a fine-tuned BERT at a tiny fraction of its computational cost.
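For concreteness, a minimal sketch of the DCCA coupling follows, using generic notation that is not taken from the paper itself: let $f$ denote the GNN encoder applied to a patient's ICD codes, $g$ the Bi-LSTM encoder applied to the corresponding text, and $\Sigma_{11}$, $\Sigma_{22}$, $\Sigma_{12}$ the regularized within-view and cross-view covariance matrices of the two encoded batches. DCCA trains $f$ and $g$ to maximize the total canonical correlation between the views:
\[
\max_{f,\,g}\ \operatorname{corr}\bigl(f(X_{\text{code}}),\, g(X_{\text{text}})\bigr)
= \bigl\lVert T \bigr\rVert_{\mathrm{tr}},
\qquad
T = \Sigma_{11}^{-1/2}\,\Sigma_{12}\,\Sigma_{22}^{-1/2},
\]
where $\lVert \cdot \rVert_{\mathrm{tr}}$ is the trace norm, i.e., the sum of the singular values of $T$, which are the canonical correlations. Because only the text encoder $g$ is needed at inference time, predictions can be made from clinical text alone, without access to ICD codes.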