In this paper, we study the problem of procedure planning in instructional videos. Here, an agent must produce a plausible sequence of actions that can transform the environment from a given start state to a desired goal state. When learning procedure planning from instructional videos, most recent work leverages intermediate visual observations as supervision, which requires expensive annotation effort to precisely localize all the instructional steps in training videos. In contrast, we remove the need for expensive temporal video annotations and propose a weakly supervised approach that learns from natural language instructions. Our model is based on a transformer equipped with a memory module, which maps the start and goal observations to a sequence of plausible actions. Furthermore, we augment our model with a probabilistic generative module to capture the uncertainty inherent to procedure planning, an aspect largely overlooked by previous work. We evaluate our model on three datasets and show that our weakly supervised approach outperforms previous fully supervised state-of-the-art models on multiple metrics.
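To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation, of a transformer with a learnable memory bank that maps start and goal visual features to a fixed-length action sequence, with a sampled latent vector injecting the stochasticity used to model planning uncertainty. All class names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): a memory-augmented transformer decoder
# that predicts a sequence of action logits from start/goal observations, with a
# random latent vector so repeated calls yield different plausible plans.
import torch
import torch.nn as nn

class ProcedurePlanner(nn.Module):
    def __init__(self, feat_dim=512, d_model=256, num_actions=100,
                 horizon=4, num_memory=32, noise_dim=32):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)                        # visual feature -> model dim
        self.memory = nn.Parameter(torch.randn(num_memory, d_model))    # learnable memory bank
        self.queries = nn.Parameter(torch.randn(horizon, d_model))      # one query per plan step
        self.noise_proj = nn.Linear(noise_dim, d_model)                  # injects stochasticity
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_actions)                      # per-step action logits
        self.noise_dim = noise_dim

    def forward(self, start_feat, goal_feat):
        # start_feat, goal_feat: (B, feat_dim) features of the start/goal observations
        B = start_feat.size(0)
        obs = self.proj(torch.stack([start_feat, goal_feat], dim=1))     # (B, 2, d_model)
        mem = self.memory.unsqueeze(0).expand(B, -1, -1)                 # (B, M, d_model)
        context = torch.cat([obs, mem], dim=1)                           # observations + memory
        z = torch.randn(B, self.noise_dim, device=start_feat.device)     # sampled latent
        q = self.queries.unsqueeze(0).expand(B, -1, -1) + self.noise_proj(z).unsqueeze(1)
        out = self.decoder(tgt=q, memory=context)                        # (B, horizon, d_model)
        return self.head(out)                                            # (B, horizon, num_actions)

# Usage: repeated calls sample different plausible plans for the same start/goal pair.
model = ProcedurePlanner()
start, goal = torch.randn(2, 512), torch.randn(2, 512)
logits = model(start, goal)       # (2, 4, 100) action logits for a 4-step plan
plan = logits.argmax(dim=-1)      # one sampled plan per example
```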