The medical report generation task, which aims to produce long and coherent descriptions of medical images, has attracted growing research interest recently. Unlike general image captioning, medical report generation is more challenging for data-driven neural models, mainly due to 1) severe data bias and 2) the limited amount of medical data. To alleviate the data bias and make the best use of the available data, we propose a Competence-based Multimodal Curriculum Learning framework (CMCL). Specifically, CMCL simulates the learning process of radiologists and optimizes the model step by step. First, CMCL estimates the difficulty of each training instance and evaluates the competence of the current model; second, CMCL selects the most suitable batch of training instances given the current model competence. By iterating these two steps, CMCL gradually improves the model's performance. Experiments on the public IU-Xray and MIMIC-CXR datasets show that CMCL can be incorporated into existing models to improve their performance.
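To make the two-step loop concrete, below is a minimal sketch of competence-based batch selection. It assumes a square-root competence schedule (as in Platanios et al.'s competence-based curriculum learning) and a hypothetical per-instance difficulty score normalized to [0, 1]; the actual difficulty metrics and multimodal details of CMCL are defined in the paper, not here.

```python
import math
import random

def competence(step, total_steps, c0=0.01):
    # Assumed square-root schedule: competence grows from c0 to 1 over training.
    return min(1.0, math.sqrt(step * (1.0 - c0 ** 2) / total_steps + c0 ** 2))

def select_batch(instances, difficulties, step, total_steps, batch_size):
    # Step 1: evaluate the current model competence.
    c = competence(step, total_steps)
    # Step 2: sample a batch only from instances whose (normalized)
    # difficulty does not exceed that competence.
    eligible = [x for x, d in zip(instances, difficulties) if d <= c]
    return random.sample(eligible, min(batch_size, len(eligible)))
```

As training proceeds, the competence threshold rises, so harder instances are gradually admitted into the pool of candidate batches.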