Deep learning has demonstrated a remarkable ability to solve a wide range of real-world problems in the computer vision literature. However, deep models still struggle with simple reasoning tasks that humans find easy. In this work, we probe current state-of-the-art convolutional neural networks on a difficult family of tasks known as same-different problems. All of these problems share the same prerequisite for a correct solution: determining whether two randomly placed shapes within the same image are identical. The experiments carried out in this work show that residual connections, and skip connections more generally, appear to have only a marginal impact on learning the proposed problems. In particular, we experiment with DenseNets, and we examine the contribution of residual and recurrent connections in previously tested architectures, ResNet-18 and CorNet-S respectively. Our experiments show that the older feed-forward networks, AlexNet and VGG, are almost unable to learn the proposed problems except in a few specific scenarios. We also show that the more recently introduced architectures can converge even when key components of their design are removed. Finally, we carry out zero-shot generalization tests and find that, in these scenarios, residual and recurrent connections can have a stronger impact on overall test accuracy. On four difficult problems from the SVRT dataset, we reach state-of-the-art results relative to previous approaches, obtaining super-human performance on three of the four problems.