Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we aim to develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, HaloNets, which reach state-of-the-art accuracies on the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models on settings traditionally dominated by convolutional models.
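The local self-attention described above can be sketched in a simplified form. This is a minimal single-head, 1-D illustration of blocked local attention with a halo: queries come from non-overlapping blocks, while keys and values come from each block's neighborhood extended by a halo on both sides. The function name, the 1-D simplification, and the zero-padding at borders are my own choices for clarity; the actual HaloNet models operate on 2-D feature maps with multiple heads and relative position embeddings.

```python
import numpy as np

def halo_attention_1d(x, block=4, halo=1):
    """Blocked local self-attention over a 1-D sequence (single head).

    Queries are taken from non-overlapping blocks of size `block`;
    keys/values are taken from each block extended by `halo` positions
    on both sides (zero-padded at the sequence borders).
    `x` has shape (length, dim); length must be divisible by `block`.
    """
    n, d = x.shape
    assert n % block == 0, "sequence length must be divisible by block size"
    # Pad the sequence so every block can gather its halo neighborhood.
    xp = np.pad(x, ((halo, halo), (0, 0)))
    out = np.empty_like(x)
    for start in range(0, n, block):
        q = x[start:start + block]               # (block, d) queries
        kv = xp[start:start + block + 2 * halo]  # (block + 2*halo, d) keys/values
        logits = q @ kv.T / np.sqrt(d)           # scaled dot-product scores
        w = np.exp(logits - logits.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)       # softmax over the local neighborhood
        out[start:start + block] = w @ kv        # content-dependent weighted average
    return out
```

Note how the receptive field (`block + 2*halo`) can be enlarged without adding parameters, since the attention weights are computed from the content itself; this is the parameter-independent scaling the abstract contrasts with convolutions.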