We present a novel framework for explainable labeling and interpretation of medical images. Medical images require specialized professionals for interpretation and are typically explained via elaborate textual reports. Unlike prior methods that focus on generating medical reports from images, or vice versa, we generate congruent image--report pairs employing a cycle-consistent Generative Adversarial Network (CycleGAN); the generated report thereby adequately explains a medical image, while a report-generated image that effectively characterizes the text visually should sufficiently resemble the original. The aim of this work is to generate trustworthy and faithful explanations for the outputs of a model diagnosing chest X-ray images by pointing a human user to similar cases in support of a diagnostic decision. Apart from enabling transparent medical image labeling and interpretation, we achieve report- and image-based labeling comparable to prior methods, including state-of-the-art performance in some cases, as evidenced by experiments on the Indiana Chest X-ray dataset.
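To make the cycle-consistency objective described above concrete, the following is a minimal Python/PyTorch sketch, assuming image and report features have already been encoded as fixed-size vectors. The generator names (G_img2rep, G_rep2img), the linear layers, and the feature dimensions are hypothetical placeholders for illustration, not the authors' actual architecture; adversarial losses are omitted for brevity.

```python
# Minimal sketch of bidirectional cycle consistency between image and report
# features. All dimensions and modules are illustrative assumptions.
import torch
import torch.nn as nn

IMG_DIM, REP_DIM = 512, 256  # assumed feature sizes, not from the paper

G_img2rep = nn.Linear(IMG_DIM, REP_DIM)  # maps image features to report features
G_rep2img = nn.Linear(REP_DIM, IMG_DIM)  # maps report features to image features
l1 = nn.L1Loss()

img = torch.randn(8, IMG_DIM)  # toy batch of encoded chest X-rays
rep = torch.randn(8, REP_DIM)  # toy batch of encoded reports

# Forward cycle: image -> generated report -> reconstructed image
rep_fake = G_img2rep(img)
img_rec = G_rep2img(rep_fake)

# Backward cycle: report -> generated image -> reconstructed report
img_fake = G_rep2img(rep)
rep_rec = G_img2rep(img_fake)

# Cycle-consistency loss: a report-generated image should resemble the
# original image, and a report reconstructed from a generated image should
# resemble the original report.
cycle_loss = l1(img_rec, img) + l1(rep_rec, rep)
print(float(cycle_loss))
```

In a full CycleGAN setup, this cycle term would be combined with adversarial discriminator losses on both the image and report domains, which is what encourages the generated pairs to be congruent rather than merely reconstructible.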