A common approach to solving physical reasoning tasks is to train a value learner on example tasks. A limitation of such an approach is that it requires learning about object dynamics solely from reward values assigned to the final state of a rollout of the environment. This study aims to address this limitation by augmenting the reward value with self-supervised signals about object dynamics. Specifically, we train the model to characterize the similarity of two environment rollouts, jointly with predicting the outcome of the reasoning task. This similarity can be defined as a distance measure between the trajectory of objects in the two rollouts, or learned directly from pixels using a contrastive formulation. Empirically, we find that this approach leads to substantial performance improvements on the PHYRE benchmark for physical reasoning (Bakhtin et al., 2019), establishing a new state of the art.
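To make the joint objective concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a value learner trained on task outcomes together with an auxiliary rollout-similarity loss. The encoder architecture, the loss weight, the use of two augmented views of each rollout, and the InfoNCE-style contrastive formulation are illustrative assumptions; the trajectory-distance variant of the similarity target is not shown.

```python
# Minimal sketch: joint outcome prediction + self-supervised rollout similarity.
# All architectural details and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ValueLearner(nn.Module):
    """Encodes rendered rollout frames and scores whether the task is solved."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Tiny conv encoder over a stacked rollout image (C x H x W).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Head predicting the reasoning-task outcome (solved / not solved).
        self.value_head = nn.Linear(embed_dim, 1)

    def forward(self, frames):
        z = self.encoder(frames)
        return z, self.value_head(z).squeeze(-1)


def info_nce(z_a, z_b, temperature: float = 0.1):
    """Contrastive similarity loss: rollout i in view A should match rollout i in view B."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


model = ValueLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: two views of each rollout (e.g. different crops or time windows)
# plus a binary solved/unsolved label for the task outcome.
frames_a = torch.randn(8, 3, 64, 64)
frames_b = torch.randn(8, 3, 64, 64)
solved = torch.randint(0, 2, (8,)).float()

z_a, value_logits = model(frames_a)
z_b, _ = model(frames_b)

# Joint objective: reward-value (outcome) loss + self-supervised similarity loss.
loss = F.binary_cross_entropy_with_logits(value_logits, solved) \
       + 0.5 * info_nce(z_a, z_b)
opt.zero_grad()
loss.backward()
opt.step()
print(f"joint loss: {loss.item():.4f}")
```

In this sketch the auxiliary term only shapes the shared embedding space, so at test time the value head can be used on its own to rank candidate actions, mirroring the standard value-learner setup the abstract describes.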