Modern artificial neural networks, including convolutional neural networks and vision transformers, have mastered several computer vision tasks, including object recognition. However, there are many significant differences between the behavior and robustness of these systems and those of the human visual system. Deep neural networks remain brittle, susceptible to many changes in an image that do not cause humans to misclassify it. Part of this behavioral difference may be explained by the type of features humans and deep neural networks use in vision tasks: humans tend to classify objects according to their shape, while deep neural networks seem to rely mostly on texture. Exploring this question is relevant, since it may lead to better-performing neural network architectures and to a better understanding of the primate visual system. In this work, we advance the state of the art in our understanding of this phenomenon by extending previous analyses to a much larger set of deep neural network architectures. We found that the performance of models on image classification tasks is highly correlated with their shape bias measured at the output and penultimate layers. Furthermore, our results showed that the numbers of neurons representing shape and texture are strongly anti-correlated, providing evidence of competition between these two types of features. Finally, we observed that while there is in general a correlation between performance and shape bias, there are significant variations between architecture families.
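As a minimal illustration of the shape-bias quantity discussed above, the sketch below computes the standard cue-conflict metric: the fraction of decided trials (images whose shape and texture cues point to different classes) on which a model's prediction follows the shape cue. The function name and the example labels are hypothetical and not taken from the paper; this only shows the arithmetic of the metric, not the paper's experimental pipeline.

```python
def shape_bias(predictions, shape_labels, texture_labels):
    """Fraction of cue-conflict decisions that follow shape rather than texture.

    Only trials where the model picks either the shape class or the
    texture class count as "decided", mirroring the usual protocol.
    """
    shape_hits = texture_hits = 0
    for pred, shape, texture in zip(predictions, shape_labels, texture_labels):
        if pred == shape:
            shape_hits += 1
        elif pred == texture:
            texture_hits += 1
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else 0.0

# Hypothetical example: each image has cat shape with elephant texture.
preds    = ["cat", "elephant", "cat", "cat", "dog"]
shapes   = ["cat", "cat", "cat", "cat", "cat"]
textures = ["elephant", "elephant", "elephant", "elephant", "elephant"]
print(shape_bias(preds, shapes, textures))  # 3 of 4 decided trials -> 0.75
```

A shape bias near 1.0 indicates shape-driven decisions (human-like), while a value near 0.0 indicates texture-driven decisions; the "dog" prediction above matches neither cue and is excluded from the denominator.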