We introduce a novel deep learning framework that quantitatively estimates image segmentation quality without the need for human inspection or labeling. We refer to this method as a Quality Assurance Network -- QANet. Specifically, given an image and a `proposed' corresponding segmentation, obtained by any method, including manual annotation, the QANet solves a regression problem to estimate a predefined quality measure with respect to the unknown ground truth. The QANet is by no means yet another segmentation method. Instead, it performs a multi-level, multi-feature comparison of an image-segmentation pair based on a unique network architecture, called the RibCage. To demonstrate the strength of the QANet, we address the evaluation of instance segmentation using two datasets from different domains, namely, high-throughput live cell microscopy images from the Cell Segmentation Benchmark and natural images of plants from the Leaf Segmentation Challenge. While synthesized segmentations were used to train the QANet, it was tested on segmentations obtained by publicly available methods that participated in these challenges. We show that the QANet accurately estimates the scores of the evaluated segmentations with respect to the hidden ground truth, as published by the challenges' organizers. The code is available at: TBD.
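The abstract leaves the "predefined quality measure" unspecified; a common choice for segmentation evaluation is the Jaccard index (IoU). The minimal sketch below (an illustrative assumption, not the paper's implementation) shows how such a score is computed against a known ground truth — during training on synthesized segmentations this value serves as the regression target, while at test time the QANet must predict it from the image-segmentation pair alone, without access to the ground truth.

```python
import numpy as np

def jaccard_index(proposed: np.ndarray, ground_truth: np.ndarray) -> float:
    """Jaccard index (IoU) between two binary masks.

    Illustrative regression target: computed against the known ground
    truth of a synthesized segmentation during training; the network
    then estimates this score when the ground truth is hidden.
    """
    p = proposed.astype(bool)
    g = ground_truth.astype(bool)
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(p, g).sum() / union)

# Example: two 2x2 squares overlapping in a 2x1 strip
pred = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
gt = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
score = jaccard_index(pred, gt)  # intersection = 2, union = 6
```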