Equivariance w.r.t. geometric transformations in neural networks improves data efficiency, parameter efficiency, and robustness to out-of-domain perspective shifts. When equivariance is not designed into a neural network, the network can still learn equivariant functions from the data. We quantify this learned equivariance by proposing an improved measure of equivariance. We find evidence for a correlation between learned translation equivariance and validation accuracy on ImageNet. We therefore investigate what can increase learned equivariance in neural networks, and find that data augmentation, reduced model capacity, and inductive bias in the form of convolutions all induce higher learned equivariance.
What Affects Learned Equivariance in Deep Image Recognition Models?
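The paper's improved equivariance measure is not reproduced here, but a minimal sketch can illustrate what quantifying learned translation equivariance involves: compare the features of a shifted image against the shifted features of the original image, and report how far apart they are. The `model` interface, the `shift` size, and the stride handling below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a translation-equivariance probe (an assumption for
# illustration, not the paper's improved measure): a network f is translation
# equivariant if f(T(x)) == T(f(x)) for a translation T.
import torch

def translation_equivariance_error(model, x, shift=4):
    """Normalized L2 distance between f(shift(x)) and shift(f(x)).

    model : maps an image batch (N, C, H, W) to a spatial feature map.
    x     : input batch (N, C, H, W).
    shift : pixels to roll along both spatial axes (assumed divisible by
            the feature map's downsampling factor).

    Returns a scalar; 0 means perfectly translation equivariant.
    """
    with torch.no_grad():
        f_x = model(x)                                            # f(x)
        x_shifted = torch.roll(x, shifts=(shift, shift), dims=(-2, -1))
        f_of_shifted = model(x_shifted)                           # f(T(x))
        # Translate the shift into feature-map coordinates.
        stride = x.shape[-1] // f_x.shape[-1]
        s = shift // stride
        shifted_f = torch.roll(f_x, shifts=(s, s), dims=(-2, -1))  # T(f(x))
        err = (f_of_shifted - shifted_f).norm() / shifted_f.norm()
    return err.item()
```

Averaging this error over a validation set gives one number per model, which is the kind of quantity the abstract's claimed correlation with ImageNet validation accuracy would be computed against.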