Locating lesions is important in the computer-aided diagnosis of X-ray images. However, box-level annotation is time-consuming and laborious, so locating lesions accurately with few, or even no, careful annotations is an urgent problem. Although several works have approached this problem with weakly-supervised methods, their performance still needs to be improved. One obstacle is that general weakly-supervised methods fail to consider the characteristics of X-ray images, such as their highly structured nature. We therefore propose the Cross-chest Graph (CCG), which improves automatic lesion detection by imitating doctors' training and decision-making process. CCG models the intra-image relationship between different anatomical areas by leveraging structural information to simulate the doctor's habit of observing different areas. Meanwhile, the relationship between any pair of images is modeled by a knowledge-reasoning module to simulate the doctor's habit of comparing multiple images. We integrate intra-image and inter-image information into a unified end-to-end framework. Experimental results on the NIH Chest-14 database (112,120 frontal-view X-ray images with 14 diseases) demonstrate that the proposed method achieves state-of-the-art performance in weakly-supervised lesion localization by absorbing professional knowledge from the medical field.
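To make the two relation types concrete, the sketch below illustrates one plausible way such relationships could be modeled: a generic attention-based graph block applied once over anatomical-region features within an image (intra-image) and once over whole-image features within a batch (inter-image). This is only a minimal illustration of the idea; the class and variable names (`RelationBlock`, `region_feats`, `image_feats`) are assumptions and do not reflect the authors' actual implementation or the knowledge-reasoning module described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationBlock(nn.Module):
    """Hypothetical graph-style relation block: each node (an anatomical-region
    feature or a whole-image feature) attends to all other nodes, and the
    aggregated messages are added back to the node features."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, nodes):  # nodes: (N, dim)
        q, k, v = self.query(nodes), self.key(nodes), self.value(nodes)
        # (N, N) affinity matrix acts as a fully connected graph over the nodes.
        attn = F.softmax(q @ k.t() / nodes.size(-1) ** 0.5, dim=-1)
        return nodes + attn @ v  # residual message passing

# Toy usage: 6 anatomical-region features from one image (intra-image graph),
# then 4 whole-image features from a mini-batch (inter-image graph).
intra = RelationBlock(dim=256)
inter = RelationBlock(dim=256)
region_feats = torch.randn(6, 256)
image_feats = torch.randn(4, 256)
refined_regions = intra(region_feats)
refined_images = inter(image_feats)
print(refined_regions.shape, refined_images.shape)
```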