Accurate identification and localization of abnormalities in radiology images play a critical role in computer-aided diagnosis (CAD) systems. Building a highly generalizable system usually requires a large amount of data with high-quality annotations, including both disease-specific global labels and localization information. However, for medical images, only a limited number of high-quality images and annotations are available due to the expense of annotation. In this paper, we address this problem by presenting a novel approach for disease generation in X-rays using conditional generative adversarial learning. Specifically, given a chest X-ray image from a source domain, we generate a corresponding radiology image in a target domain while preserving the identity of the patient. We then use the generated X-ray image in the target domain to augment training and improve detection performance. We also present a unified framework that simultaneously performs disease generation and localization. We evaluate the proposed approach on the chest X-ray dataset provided by the Radiological Society of North America (RSNA), where it surpasses state-of-the-art baseline detection algorithms.
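The abstract's identity-preserving generation can be read as a conditional adversarial objective with an identity (reconstruction) term. The sketch below, in NumPy, illustrates one common way such an objective is composed; the function names and the weight `lambda_id` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def adversarial_loss(d_fake: np.ndarray) -> float:
    """Non-saturating generator loss: -log D(G(x)), averaged over the batch.
    (One standard choice; the paper may use a different GAN loss.)"""
    eps = 1e-12  # numerical guard against log(0)
    return float(-np.mean(np.log(d_fake + eps)))

def identity_loss(generated: np.ndarray, source: np.ndarray) -> float:
    """L1 penalty keeping the generated target-domain image close to the
    source image, so the patient's anatomy (identity) is preserved."""
    return float(np.mean(np.abs(generated - source)))

def generator_objective(d_fake, generated, source, lambda_id=10.0):
    # Total generator objective: fool the discriminator while staying
    # anatomically faithful to the input patient. lambda_id is an
    # assumed trade-off weight.
    return adversarial_loss(d_fake) + lambda_id * identity_loss(generated, source)

# Toy example: a generated image identical to the source incurs zero
# identity penalty; discriminator scores near 1 mean a well-fooled critic.
rng = np.random.default_rng(0)
src = rng.random((2, 64, 64))   # batch of 2 source-domain X-rays
gen = src.copy()                # perfect identity preservation
d_scores = np.full(2, 0.9)      # discriminator outputs on the fakes
loss = generator_objective(d_scores, gen, src)
```

In this toy setup the identity term vanishes, so the objective reduces to the adversarial term alone; in practice the two terms compete, and the weight controls how strongly patient identity is enforced.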