One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment. In this paper, we take a step towards that long-term goal: we extract highly localized actionable information related to elementary actions, such as pushing or pulling, for articulated objects with movable parts. For example, given a drawer, our network predicts that applying a pulling force on the handle opens the drawer. We propose, discuss, and evaluate novel network architectures that, given image and depth data, predict the set of actions possible at each pixel and the regions over articulated parts that are likely to move under the force. We propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation (SAPIEN) and generalize across categories. Code and data are available on the project website: https://cs.stanford.edu/~kaichun/where2act/
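To make the described per-pixel output structure concrete, the sketch below shows one possible way a predictor could expose per-pixel action scores and a movable-part mask from an RGB-D input. This is a minimal, hypothetical illustration only, not the paper's architecture: the action vocabulary (here just "push" and "pull"), the convolutional encoder-decoder, and all layer sizes are assumptions for demonstration.

```python
# Hypothetical sketch of a per-pixel actionability predictor, NOT the
# paper's actual network. Assumptions (not from the abstract): a small
# convolutional encoder-decoder, 4-channel RGB-D input, and two primitive
# action types ("push", "pull"); layer sizes are illustrative only.
import torch
import torch.nn as nn

NUM_ACTIONS = 2  # hypothetical action set: push, pull


class PerPixelActionability(nn.Module):
    def __init__(self, in_channels: int = 4, num_actions: int = NUM_ACTIONS):
        super().__init__()
        # Shared encoder over the RGB-D image (downsamples twice).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder back to full resolution (upsamples twice).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel score for each primitive action type.
        self.action_head = nn.Conv2d(32, num_actions, 1)
        # Head 2: per-pixel mask of parts likely to move under the force.
        self.movable_head = nn.Conv2d(32, 1, 1)

    def forward(self, rgbd: torch.Tensor):
        feat = self.decoder(self.encoder(rgbd))
        action_scores = torch.sigmoid(self.action_head(feat))  # (B, A, H, W)
        movable_mask = torch.sigmoid(self.movable_head(feat))  # (B, 1, H, W)
        return action_scores, movable_mask


if __name__ == "__main__":
    model = PerPixelActionability()
    rgbd = torch.randn(1, 4, 128, 128)  # one RGB-D image
    scores, mask = model(rgbd)
    print(scores.shape, mask.shape)  # [1, 2, 128, 128], [1, 1, 128, 128]
```

The point of the sketch is only the output contract implied by the abstract: one score per pixel per elementary action, plus a per-pixel estimate of which articulated parts would move under the applied force.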