Visual Question Answering (VQA) models take an image and a natural-language question as input and infer the answer to the question. Recently, VQA systems in medical imaging have gained popularity thanks to potential advantages such as patient engagement and second opinions for clinicians. While most research efforts have focused on improving architectures and overcoming data-related limitations, answer consistency has been overlooked even though it plays a critical role in establishing trustworthy models. In this work, we propose a novel loss function and corresponding training procedure that allow relations between questions to be included in the training process. Specifically, we consider the case where implications between perception and reasoning questions are known a priori. To show the benefits of our approach, we evaluate it on the clinically relevant task of Diabetic Macular Edema (DME) staging from fundus imaging. Our experiments show that our method outperforms state-of-the-art baselines, not only by improving model consistency, but also in terms of overall accuracy. Our code and data are available at https://github.com/sergiotasconmorales/consistency_vqa.
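To make the idea concrete, below is a minimal PyTorch sketch of a consistency-aware loss of this kind. It is an illustrative assumption, not the paper's exact formulation: the class name ConsistencyAwareLoss, the weight lambda_cons, and the specific penalty term (discouraging cases where a reasoning question is answered correctly while its implied perception sub-question is answered wrongly) are all hypothetical; the actual implementation is in the linked repository.

    import torch.nn as nn
    import torch.nn.functional as F

    class ConsistencyAwareLoss(nn.Module):
        """Cross-entropy plus a penalty for implication violations between
        paired reasoning ("main") and perception ("sub") questions.

        Hypothetical sketch: names and the exact penalty form are
        assumptions, not the paper's definition.
        """

        def __init__(self, lambda_cons: float = 0.5):
            super().__init__()
            self.lambda_cons = lambda_cons

        def forward(self, main_logits, main_labels, sub_logits, sub_labels):
            # Standard VQA classification loss on both question types.
            ce = (F.cross_entropy(main_logits, main_labels)
                  + F.cross_entropy(sub_logits, sub_labels))

            # Inconsistency: the reasoning (main) question is answered
            # correctly while the implied perception (sub) question is not.
            main_correct = (main_logits.argmax(dim=1) == main_labels).float()
            sub_wrong_prob = 1.0 - sub_logits.softmax(dim=1).gather(
                1, sub_labels.unsqueeze(1)).squeeze(1)
            cons_penalty = (main_correct * sub_wrong_prob).mean()

            return ce + self.lambda_cons * cons_penalty

In such a scheme, the penalty only applies to question pairs whose implication is known a priori, so each training batch would need to carry the main/sub pairing alongside the usual image, question, and answer tensors.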