When perceiving the world from multiple viewpoints, humans reason about complete objects in a compositional manner, even when an object is fully occluded from some viewpoints. Humans can also imagine novel views after observing a scene from several viewpoints. Despite remarkable recent advances, multi-view object-centric learning still leaves two problems open: 1) the shapes of partially or completely occluded objects cannot be reconstructed well; 2) novel-viewpoint prediction depends on expensive viewpoint annotations rather than on implicit viewing rules. Consequently, existing agents fail to perform as humans do. In this paper, we introduce a time-conditioned generative model for videos. To reconstruct the complete shapes of objects accurately, we enhance the disentanglement between different latent representations: view latent representations are jointly inferred with a Transformer and then cooperate with a sequential extension of Slot Attention to learn object-centric representations. The model also gains a new ability: Gaussian processes are employed as priors over the view latent variables, enabling generation and novel-view prediction without viewpoint annotations. Experiments on multiple specifically designed synthetic datasets show that the proposed model can 1) decompose videos into objects, 2) reconstruct the complete shapes of objects, and 3) predict novel viewpoints without viewpoint annotations.
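To make the role of the Gaussian process prior concrete, the sketch below shows the core idea in Python: each dimension of the view latent trajectory is modeled as a zero-mean GP over time, and a latent for an unseen time index is obtained from the GP posterior, so no viewpoint annotation is required. This is a minimal illustration under assumed choices (an RBF kernel, independent latent dimensions, and the hypothetical function names rbf_kernel and sample_view_latents), not the paper's actual implementation.

```python
import numpy as np

def rbf_kernel(t1, t2, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel k(t, t') = sigma^2 * exp(-(t - t')^2 / (2 l^2))."""
    diff = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (diff / length_scale) ** 2)

def sample_view_latents(timesteps, latent_dim, length_scale=1.0, seed=0):
    """Draw each latent dimension's trajectory from a zero-mean GP prior over time."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(timesteps, timesteps, length_scale)  # (T, T) covariance over time
    K += 1e-6 * np.eye(len(timesteps))                  # jitter for numerical stability
    L = np.linalg.cholesky(K)
    # Independent GP sample per latent dimension: v[:, d] ~ N(0, K)
    return L @ rng.standard_normal((len(timesteps), latent_dim))

# View latents for observed frames at t = 0..7 (stand-ins for inferred latents).
t_obs = np.arange(8, dtype=float)
v_obs = sample_view_latents(t_obs, latent_dim=4)

# Novel-view prediction: condition the GP at an unseen time index.
t_new = np.array([8.5])
K_oo = rbf_kernel(t_obs, t_obs) + 1e-6 * np.eye(len(t_obs))
K_no = rbf_kernel(t_new, t_obs)
v_new = K_no @ np.linalg.solve(K_oo, v_obs)  # GP posterior mean at t_new
print(v_new.shape)  # (1, 4): predicted view latent for the novel viewpoint
```

In the full model, such predicted view latents would be combined with the object-centric slot representations by the decoder to render the novel view; the GP supplies temporal smoothness as the implicit viewing rule.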