Developing an AI-assisted gland segmentation method for histology images is critical for automatic cancer diagnosis and prognosis; however, the high cost of pixel-level annotations hinders its application to a broader range of diseases. Existing weakly-supervised semantic segmentation methods in computer vision achieve degraded results for gland segmentation, since the characteristics and problems of glandular datasets differ from those of general object datasets. We observe that, unlike natural images, the key problem with histology images is class confusion owing to morphological homogeneity and low color contrast among different tissues. To this end, we propose a novel method, Online Easy Example Mining (OEEM), that encourages the network to focus on credible supervision signals rather than noisy ones, thereby mitigating the influence of the inevitable false predictions in pseudo-masks. According to the characteristics of glandular datasets, we design a strong framework for gland segmentation. Our results exceed many fully-supervised and weakly-supervised methods for gland segmentation by more than 4.4% and 6.04% in mIoU, respectively. Code is available at https://github.com/xmed-lab/OEEM.
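To make the core idea concrete, below is a minimal sketch of confidence-weighted pixel-wise training in the spirit of easy example mining: pixels whose pseudo-labels the network already fits with high confidence are treated as credible supervision, while low-confidence (likely noisy) pixels are down-weighted. The function name `oeem_loss`, the exponent `gamma`, and the normalization are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn.functional as F

def oeem_loss(logits, pseudo_mask, gamma=1.0, ignore_index=255):
    """Illustrative confidence-weighted cross-entropy against noisy pseudo-masks.

    logits:      (B, C, H, W) raw network outputs
    pseudo_mask: (B, H, W) long tensor of pseudo-labels (noisy supervision)
    """
    # Per-pixel cross-entropy against the pseudo-mask, no reduction yet.
    ce = F.cross_entropy(logits, pseudo_mask,
                         ignore_index=ignore_index, reduction="none")  # (B, H, W)

    # Confidence assigned by the current prediction to the pseudo-label class.
    probs = torch.softmax(logits, dim=1)                               # (B, C, H, W)
    valid = pseudo_mask != ignore_index
    safe_mask = pseudo_mask.clone()
    safe_mask[~valid] = 0
    conf = probs.gather(1, safe_mask.unsqueeze(1)).squeeze(1)          # (B, H, W)

    # Easy (high-confidence) pixels get larger weights; detach so the weights
    # act as fixed per-pixel importance rather than an extra gradient path.
    weights = (conf.detach() ** gamma) * valid.float()
    weights = weights / weights.sum().clamp_min(1e-6)

    return (weights * ce).sum()
```

In a training loop, this loss would simply replace the standard cross-entropy term computed between the segmentation network's logits and the pseudo-masks, so that gradients are dominated by credible pixels rather than by the false regions of the pseudo-labels.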