The ability to use inductive reasoning to extract general rules from multiple observations is a vital indicator of intelligence. As humans, we use this ability not only to interpret the world around us, but also to predict the outcomes of the various interactions we experience. Generalising over multiple observations is a task that machines have historically found difficult, especially when it requires computer vision. In this paper, we propose a model that extracts general rules from video demonstrations by simultaneously performing summarisation and translation. Our approach differs from prior work by framing the problem as a multi-sequence-to-sequence task, wherein summarisation is learnt by the model. This allows our model to utilise edge cases that would otherwise be suppressed or discarded by traditional summarisation techniques. Additionally, we show that our approach can handle noisy specifications without the need for additional filtering methods. We evaluate our model by synthesising programs from video demonstrations in the Vizdoom environment, achieving state-of-the-art results with a relative increase of 11.75% in program accuracy over prior work.
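To make the multi-sequence-to-sequence framing concrete, the sketch below shows one plausible shape of such a model: each video demonstration is encoded independently, the per-demonstration encodings are summarised by learned attention weights (so edge-case demonstrations can still contribute rather than being filtered out), and a decoder emits program tokens from the summary. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; all module names, dimensions, and the attention-pooling choice are assumptions.

```python
# Hypothetical sketch of a multi-sequence-to-sequence model: several video
# demonstrations in, one program-token sequence out, with summarisation
# learnt end-to-end. Names and dimensions are illustrative only.
import torch
import torch.nn as nn

class MultiDemoToProgram(nn.Module):
    def __init__(self, frame_dim=512, hidden=256, vocab_size=64):
        super().__init__()
        # Shared encoder applied to each demonstration's frame features.
        self.frame_enc = nn.GRU(frame_dim, hidden, batch_first=True)
        # Learned summarisation: attention weights over per-demo encodings,
        # so unusual (edge-case) demos are down-weighted, not discarded.
        self.attn = nn.Linear(hidden, 1)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, demos, max_len=20):
        # demos: (batch, n_demos, n_frames, frame_dim) pre-extracted features
        b, d, t, f = demos.shape
        _, h = self.frame_enc(demos.view(b * d, t, f))      # encode each demo
        demo_vecs = h[-1].view(b, d, -1)                    # (batch, n_demos, hidden)
        w = torch.softmax(self.attn(demo_vecs), dim=1)      # summarisation weights
        summary = (w * demo_vecs).sum(dim=1, keepdim=True)  # (batch, 1, hidden)
        # Unroll the decoder from the summary to emit program-token logits.
        steps = summary.expand(-1, max_len, -1)
        dec, _ = self.decoder(steps, summary.transpose(0, 1).contiguous())
        return self.out(dec)                                # (batch, max_len, vocab)

# Usage: logits over program tokens for a batch of 4 tasks, 5 demos each.
model = MultiDemoToProgram()
demos = torch.randn(4, 5, 30, 512)
print(model(demos).shape)  # torch.Size([4, 20, 64])
```

Because the attention pooling is differentiable, the summarisation step is trained jointly with translation, which is what lets the model exploit demonstrations a fixed, hand-designed summariser would suppress.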