A vexing problem in artificial intelligence is reasoning about events that occur in complex, changing visual stimuli, such as in video analysis or game play. Inspired by a rich tradition of visual reasoning and memory research in cognitive psychology and neuroscience, we developed an artificial, configurable visual question-and-answer dataset (COG) that parallels experiments in humans and animals. COG is much simpler than the general problem of video analysis, yet it addresses many of the problems relating to visual and logical reasoning and memory -- problems that remain challenging for modern deep learning architectures. We additionally propose a deep learning architecture that performs competitively on other diagnostic VQA datasets (e.g., CLEVR) as well as on easy settings of the COG dataset. However, several settings of COG yield datasets that are progressively more challenging to learn. After training, the network can zero-shot generalize to many new tasks. Preliminary analyses of networks trained on COG demonstrate that the network accomplishes the task in a manner interpretable to humans.