Recent advances in self-supervised learning integrate Masked Modeling and Siamese Networks into a single framework to fully reap the advantages of both techniques. However, the erase-based masking scheme used in prior masked image modeling is aligned with the patchifying mechanism of ViT and was not originally designed for siamese networks built on ConvNets. Existing approaches simply inherit the default loss design from previous siamese networks and ignore the information loss caused by the masking operation. In this paper, we propose a filling-based masking strategy called MixMask that prevents the information loss caused by the randomly erased regions of an image in vanilla masking. We further introduce a flexible loss design that accounts for the change in semantic distance between two mixed views, adapting the objective to the integrated architecture and avoiding mismatches between the transformed input and the objective in Masked Siamese ConvNets (MSCN). The flexible loss distance is computed according to the proposed mix-masking scheme. Extensive experiments are conducted on CIFAR-100, Tiny-ImageNet, and ImageNet-1K. The results demonstrate that the proposed framework achieves better accuracy under linear probing, semi-supervised learning, and supervised fine-tuning, outperforming the state-of-the-art MSCN by a significant margin. We also demonstrate its superiority on the downstream tasks of object detection and segmentation. Our source code is available at https://github.com/LightnessOfBeing/MixMask.
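To make the idea concrete, below is a minimal PyTorch sketch of a filling-based mix-masking operation and a mixing-ratio-weighted siamese loss, as described in the abstract. All names (`mix_mask`, `flexible_loss`, `block_size`, `lam`) and the exact mask layout are illustrative assumptions, not the authors' actual implementation (which is available at the linked repository).

```python
import torch
import torch.nn.functional as F

def mix_mask(x, mask_ratio=0.5, block_size=32):
    """Filling-based masking sketch: instead of erasing blocks (zeroing
    them out), fill the masked regions with content from another image
    in the batch, so no image information is discarded.
    Hypothetical implementation of the MixMask idea."""
    B, C, H, W = x.shape
    # Coarse binary mask on a (H/block_size, W/block_size) grid,
    # upsampled to pixel resolution with nearest-neighbor interpolation.
    gh, gw = H // block_size, W // block_size
    grid = (torch.rand(B, 1, gh, gw, device=x.device) < mask_ratio).float()
    m = F.interpolate(grid, size=(H, W), mode="nearest")
    # Pair each image with another image from the same batch (a roll).
    x_other = torch.roll(x, shifts=1, dims=0)
    mixed = x * (1.0 - m) + x_other * m
    # Fraction of each original image retained; drives the flexible loss.
    lam = 1.0 - m.mean(dim=(1, 2, 3))
    return mixed, lam

def flexible_loss(p, z, z_other, lam):
    """Assumed form of the flexible loss: the prediction p from the mixed
    view should agree with the embeddings of both source images in
    proportion to their area share lam (the paper's exact formulation
    may differ)."""
    sim = F.cosine_similarity
    return -(lam * sim(p, z.detach(), dim=-1)
             + (1.0 - lam) * sim(p, z_other.detach(), dim=-1)).mean()
```

In a siamese training loop, `z` and `z_other` would be the target-branch embeddings of the two source images (with `z_other = torch.roll(z, 1, dims=0)` matching the roll used in `mix_mask`), so the objective tracks the semantic distance change induced by mixing rather than treating the mixed view as a single unaltered image.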