Empowering autonomous agents with 3D understanding of daily objects is a grand challenge in robotics applications. When exploring an unknown environment, existing methods for object pose estimation remain unsatisfactory due to the diversity of object shapes. In this paper, we propose a novel framework for category-level object shape and pose estimation from a single RGB-D image. To handle intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space, which is the key to establishing reliable correspondences between observed point clouds and estimated shapes. Then, by using a SIM(3)-invariant shape descriptor, we gracefully decouple the shape and pose of an object, thus supporting latent shape optimization of target objects in arbitrary poses. Extensive experiments show that the proposed method achieves state-of-the-art pose estimation performance and better generalization on real-world datasets. Code and video are available at https://zju3dv.github.io/gCasp