Humans can perceive scenes in 3D from a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to efficiently interact with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model that takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those viewpoints. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes, without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture semantics and geometry of novel scenes with diverse layouts, object types, and shapes.
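To make the high-level idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of pairing 2D scene recognition with an implicit 3D representation: features from a few posed RGB views condition an implicit field that is queried along the rays of a novel viewpoint to produce per-pixel semantic logits, with no RGB image at that viewpoint. The encoder, the implicit MLP, the feature pooling, and all dimensions (NUM_CLASSES, FEAT_DIM, SAMPLES_PER_RAY) are illustrative placeholders.

```python
# Illustrative sketch only: placeholders stand in for the learned 2D encoder and the
# implicit 3D field described in the abstract; all names and shapes are assumptions.
import numpy as np

NUM_CLASSES = 5          # hypothetical number of semantic categories
FEAT_DIM = 16            # hypothetical per-view feature dimension
SAMPLES_PER_RAY = 32     # points sampled along each camera ray

rng = np.random.default_rng(0)

def encode_view(rgb_image):
    """Stand-in for a 2D encoder: map an HxWx3 image to a feature vector."""
    return rng.standard_normal(FEAT_DIM)  # placeholder features

def implicit_field(points, scene_feature):
    """Stand-in for the implicit 3D representation: map 3D points (conditioned on
    the scene feature) to a density and per-class semantic logits at each point."""
    n = points.shape[0]
    density = np.abs(rng.standard_normal(n))         # sigma >= 0
    logits = rng.standard_normal((n, NUM_CLASSES))   # unnormalized class scores
    return density, logits

def render_ray_semantics(origin, direction, scene_feature, near=0.5, far=5.0):
    """Alpha-composite per-point semantic logits along one novel-view ray."""
    t = np.linspace(near, far, SAMPLES_PER_RAY)
    points = origin[None, :] + t[:, None] * direction[None, :]
    density, logits = implicit_field(points, scene_feature)
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))              # sample spacing
    alpha = 1.0 - np.exp(-density * delta)                          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))   # transmittance
    weights = alpha * trans
    return weights @ logits                                          # expected pixel logits

# Usage: a few input views of a new scene -> semantic prediction for one novel-view pixel.
input_views = [rng.random((64, 64, 3)) for _ in range(3)]
scene_feature = np.mean([encode_view(v) for v in input_views], axis=0)  # naive pooling
pixel_logits = render_ray_semantics(np.zeros(3), np.array([0.0, 0.0, 1.0]), scene_feature)
print("predicted class:", int(np.argmax(pixel_logits)))
```

In this sketch, supervision would come only from 2D semantic annotations of other views plus camera poses, matching the abstract's claim that no 3D labels are used.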