Despite the impressive progress of self-supervised learning (SSL), its applicability to low-compute networks has received limited attention. Reported performance has trailed behind standard supervised pre-training by a large margin, barring self-supervised learning from making an impact on models that are deployed on device. Most prior works attribute this poor performance to the capacity bottleneck of the low-compute networks and opt to bypass the problem through the use of knowledge distillation (KD). In this work, we revisit SSL for efficient neural networks, taking a closer look at what are the detrimental factors causing the practical limitations, and whether they are intrinsic to the self-supervised low-compute setting. We find that, contrary to accepted knowledge, there is no intrinsic architectural bottleneck; instead, we diagnose that the performance bottleneck is related to the trade-off between model complexity and regularization strength. In particular, we start by empirically observing that the use of local views can have a dramatic impact on the effectiveness of the SSL methods. This hints at view sampling being one of the performance bottlenecks for SSL on low-capacity networks. We hypothesize that the view sampling strategy for large neural networks, which requires matching views across very diverse spatial scales and contexts, is too demanding for low-capacity architectures. We systematize the design of the view sampling mechanism, leading to a new training methodology that consistently improves the performance across different SSL methods (e.g. MoCo-v2, SwAV, DINO), different low-compute networks (e.g. MobileNetV2, ResNet18, ResNet34, ViT-Ti), and different tasks (linear probe, object detection, instance segmentation and semi-supervised learning). Our best models establish a new state-of-the-art for SSL methods on low-compute networks despite not using a KD loss term.
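To make the view sampling idea concrete, below is a minimal sketch, not the authors' exact recipe, of multi-crop view sampling in the style of SwAV/DINO: a few large "global" crops plus several small "local" crops. The crop sizes, scale ranges, and view counts are illustrative assumptions; the abstract's point is that these sampling choices must be adapted for low-compute networks rather than copied from large-model recipes.

```python
# Hypothetical multi-crop view sampler; parameters are assumptions, not
# the paper's tuned values.
from PIL import Image
from torchvision import transforms


def make_view_sampler(global_scale=(0.4, 1.0), local_scale=(0.05, 0.4),
                      n_global=2, n_local=6):
    """Return a callable mapping a PIL image to a list of augmented views."""
    global_crop = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=global_scale),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])
    local_crop = transforms.Compose([
        transforms.RandomResizedCrop(96, scale=local_scale),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

    def sample(img):
        # Global views cover large image regions; local views cover small ones.
        views = [global_crop(img) for _ in range(n_global)]
        views += [local_crop(img) for _ in range(n_local)]
        return views

    return sample


# Usage: narrowing the scale gap between global and local views (via
# global_scale / local_scale) is one way to make the view-matching task
# less demanding for a small network.
sampler = make_view_sampler()
img = Image.new("RGB", (256, 256))  # stand-in for a real training image
views = sampler(img)
print([tuple(v.shape) for v in views])  # 2 views of 3x224x224, 6 of 3x96x96
```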