Convolutional Neural Networks (CNNs) have become the state-of-the-art method for image classification over the last ten years. Although they achieve superhuman classification accuracy on many popular datasets, they often perform much worse on more abstract image classification tasks. We will show that these difficult tasks are linked to relational concepts from the field of concept learning. We will review deep learning research connected to this area, even where it was not originally presented from this angle. Surveying the current literature, we will argue that the datasets in use lead to an overestimate of system performance: they provide data in a pre-attended form, they overestimate the true variability and complexity of the given tasks, and they suffer from other shortcomings. We will hypothesise that iterative processing of the input, together with attentional shifts, will be needed to solve relational reasoning tasks with deep learning methods efficiently and reliably.