Learning to segment images purely by relying on the image-text alignment from web data can lead to sub-optimal performance due to noise in the data. The noise comes from samples where the associated text does not correlate with the image's visual content. Instead of relying purely on alignment over this noisy data, this paper proposes a novel loss function, termed SimCon, which accounts for intra-modal similarities to determine the appropriate set of positive samples to align. Further, combining the SimCon loss with multiple synthetically created views of each image makes training more robust; this version of the loss is termed MV-SimCon. Empirical results demonstrate that the proposed loss yields consistent improvements on zero-shot, text-supervised semantic segmentation and outperforms the state of the art by $+3.0\%$, $+3.3\%$, and $+6.9\%$ on PASCAL VOC, PASCAL Context, and MSCOCO, respectively. With test-time augmentations, we set a new state of the art, further improving these results to $58.7\%$, $26.6\%$, and $33.3\%$ on PASCAL VOC, PASCAL Context, and MSCOCO, respectively. In addition, the proposed loss leads to robust training and faster convergence.
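To make the core idea concrete, below is a minimal PyTorch sketch of a SimCon-style objective. It is an illustrative assumption, not the paper's exact formulation: here intra-modal (text-text) similarity above a hypothetical `sim_threshold` expands the positive set beyond the paired sample, and all function and parameter names are invented for this sketch.

```python
import torch
import torch.nn.functional as F

def simcon_loss(img_emb: torch.Tensor,
                txt_emb: torch.Tensor,
                temperature: float = 0.07,
                sim_threshold: float = 0.8) -> torch.Tensor:
    """Sketch of a SimCon-style contrastive loss.

    img_emb, txt_emb: (B, D) embeddings from a batch of image-text pairs.
    Instead of treating only the diagonal (paired) text as positive for
    each image, any text whose intra-modal similarity to the batch texts
    exceeds `sim_threshold` (a hypothetical rule) also counts as positive.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # Cross-modal logits used for alignment.
    logits = img_emb @ txt_emb.t() / temperature  # (B, B)

    # Intra-modal similarities decide which samples are positives.
    txt_sim = txt_emb @ txt_emb.t()               # (B, B)
    pos_mask = (txt_sim >= sim_threshold).float()
    pos_mask.fill_diagonal_(1.0)                  # the paired text is always positive

    # Multi-positive InfoNCE: average log-likelihood over each anchor's
    # positive set (this generalizes the single-positive CLIP objective).
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1)
    return loss.mean()
```

A multi-view variant in the spirit of MV-SimCon could then average this loss over several augmented views of each image, e.g. `0.5 * (simcon_loss(view1_emb, txt_emb) + simcon_loss(view2_emb, txt_emb))`; the exact combination used in the paper may differ.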