Despite the success of deep learning methods for semantic segmentation, few-shot semantic segmentation remains a challenging task due to the limited training data and the requirement to generalise to unseen classes. While recent progress has been particularly encouraging, we discover that existing methods tend to have poor meanIoU performance when query images contain other semantic classes besides the target class. To address this issue, we propose a novel self-supervised task that generates random pseudo-classes in the background of the query images, providing extra training data that would otherwise be unavailable when predicting individual target classes. To that end, we adopt superpixel segmentation to generate the pseudo-classes. With this extra supervision, we improve the meanIoU performance of the state-of-the-art method by 2.5% and 5.1% on the one-shot tasks, as well as 6.7% and 4.4% on the five-shot tasks, on the PASCAL-5i and COCO benchmarks, respectively.
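The abstract does not specify the exact pseudo-class generation procedure, but the core idea of superpixel-based pseudo-classes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it partitions a query image into SLIC superpixels, keeps only superpixels lying entirely outside the target-class mask, and randomly promotes a few of them to pseudo-class labels. All function and parameter names here are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def generate_pseudo_classes(image, fg_mask, n_segments=100, n_pseudo=3, seed=None):
    """Sketch of the self-supervised task: assign random pseudo-classes to
    background superpixels of a query image (illustrative, not the paper's code)."""
    rng = np.random.default_rng(seed)
    # SLIC superpixel segmentation over the whole image (labels start at 1).
    segments = slic(image, n_segments=n_segments, start_label=1)
    # Keep superpixels that lie entirely in the background (no overlap with fg_mask).
    bg_ids = [sid for sid in np.unique(segments) if not fg_mask[segments == sid].any()]
    chosen = rng.choice(bg_ids, size=min(n_pseudo, len(bg_ids)), replace=False)
    # Build a label map: 0 = background, 1..n_pseudo = random pseudo-classes.
    pseudo = np.zeros(segments.shape, dtype=np.int64)
    for k, sid in enumerate(chosen, start=1):
        pseudo[segments == sid] = k
    return pseudo

# Usage: a random RGB query image with a square "target class" mask.
img = np.random.rand(64, 64, 3)
fg = np.zeros((64, 64), dtype=bool)
fg[20:40, 20:40] = True
labels = generate_pseudo_classes(img, fg, n_segments=50, n_pseudo=2, seed=0)
```

The resulting `labels` map could then be appended to the query ground truth as extra supervision during episodic training, so the network must distinguish the pseudo-classes from the true target class.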