Multitask deep learning has been applied to patient outcome prediction from text, taking clinical notes as input and training deep neural networks with a joint loss over multiple tasks. However, the joint training scheme of multitask learning suffers from inter-task interference, and diagnosis prediction, one of the tasks, suffers from poor generalizability due to rare diseases and unseen diagnoses. To address these challenges, we propose a hypernetwork-based approach that generates task-conditioned parameters and coefficients for the multitask prediction heads, learning task-specific predictions while balancing the multitask objective. We also incorporate semantic task information to improve the generalizability of our task-conditioned multitask model. Experiments on early and discharge notes extracted from the real-world MIMIC database show that our method outperforms strong baselines on multitask patient outcome prediction in most cases. Moreover, our method effectively handles scenarios with limited information and improves zero-shot prediction on unseen diagnosis categories.
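To make the architecture described above concrete, the following is a minimal sketch, not the authors' implementation, of a hypernetwork that maps a semantic task embedding to the parameters and loss coefficient of a task-specific prediction head. It assumes PyTorch; all module names, dimensions, and the sigmoid-based loss weighting are illustrative assumptions.

```python
# Hypothetical sketch: a hypernetwork generates each task's prediction-head
# parameters and a loss coefficient from a semantic task embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperHead(nn.Module):
    """Generates the weight, bias, and loss coefficient of a linear head."""

    def __init__(self, task_emb_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_classes = num_classes
        # Hypernetwork layers: task embedding -> head parameters.
        self.weight_gen = nn.Linear(task_emb_dim, num_classes * hidden_dim)
        self.bias_gen = nn.Linear(task_emb_dim, num_classes)
        # One scalar per task to balance the joint multitask loss (assumption).
        self.coef_gen = nn.Linear(task_emb_dim, 1)

    def forward(self, note_repr: torch.Tensor, task_emb: torch.Tensor):
        # note_repr: (batch, hidden_dim) encoding of a clinical note
        # task_emb:  (task_emb_dim,) semantic embedding of the task
        w = self.weight_gen(task_emb).view(self.num_classes, self.hidden_dim)
        b = self.bias_gen(task_emb)
        logits = note_repr @ w.t() + b
        coef = torch.sigmoid(self.coef_gen(task_emb))  # task loss weight
        return logits, coef


if __name__ == "__main__":
    # Toy usage: one shared note representation, per-task generated heads,
    # and a coefficient-weighted joint loss.
    batch, hidden_dim, task_emb_dim = 4, 128, 32
    note_repr = torch.randn(batch, hidden_dim)            # from a shared encoder
    task_embs = {"mortality": torch.randn(task_emb_dim),  # e.g. from task descriptions
                 "diagnosis": torch.randn(task_emb_dim)}
    heads = {"mortality": HyperHead(task_emb_dim, hidden_dim, 2),
             "diagnosis": HyperHead(task_emb_dim, hidden_dim, 50)}
    total_loss = torch.tensor(0.0)
    for name, head in heads.items():
        logits, coef = head(note_repr, task_embs[name])
        labels = torch.randint(logits.size(-1), (batch,))  # random toy labels
        total_loss = total_loss + coef.squeeze() * F.cross_entropy(logits, labels)
    total_loss.backward()
```

Because the head parameters are functions of the task embedding rather than free per-task weights, tasks with semantically related descriptions share structure, which is what enables zero-shot prediction on unseen diagnosis categories in this setup.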