We propose PR-RRN, a novel neural-network-based method for Non-rigid Structure-from-Motion (NRSfM). PR-RRN consists of Residual-Recursive Networks (RRN) and two extra regularization losses. RRN is designed to effectively recover 3D shape and camera pose from 2D keypoints via a novel residual-recursive structure. As NRSfM is a highly under-constrained problem, we propose two new pairwise regularizations to further constrain the reconstruction. The Rigidity-based Pairwise Contrastive Loss regularizes the shape representation by encouraging higher similarity between the representations of high-rigidity pairs of frames than of low-rigidity pairs. We propose the minimum singular-value ratio to measure pairwise rigidity. The Pairwise Consistency Loss enforces consistent reconstruction when the estimated shapes and cameras are exchanged between pairs. Our approach achieves state-of-the-art performance on the CMU MOCAP and PASCAL3D+ datasets.
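The singular-value-based rigidity measure admits a simple illustration. Under an orthographic camera, if two frames observe the same rigid 3D shape, the centered 4 x P matrix of their stacked 2D keypoints has rank at most 3, so its smallest singular value is near zero; a ratio of small to large singular values therefore indicates rigidity. The sketch below is one plausible instantiation of such a score (the function name `pairwise_rigidity`, the specific ratio `s[3]/s[2]`, and the synthetic data are assumptions for illustration, not the paper's exact definition):

```python
import numpy as np

def pairwise_rigidity(w1, w2):
    """Rigidity score for a pair of frames given 2D keypoints.

    w1, w2: (2, P) arrays of 2D keypoints. If the pair observes
    one rigid shape under orthographic projection, the centered
    stacked 4 x P matrix has rank <= 3, so s[3] ~ 0 and the ratio
    s[3]/s[2] is small for high-rigidity pairs. (Illustrative
    reading of a minimum singular-value ratio, not the paper's
    exact formulation.)
    """
    stacked = np.vstack([w1 - w1.mean(axis=1, keepdims=True),
                         w2 - w2.mean(axis=1, keepdims=True)])
    s = np.linalg.svd(stacked, compute_uv=False)  # descending order
    return s[3] / s[2]

# Synthetic check: a rigid pair (same shape, two camera rotations)
# versus a deformed pair (second frame's shape perturbed).
rng = np.random.default_rng(0)
shape = rng.standard_normal((3, 10))   # 10 random 3D points

def project(R, X):
    """Orthographic projection: keep the first two rows of R @ X."""
    return R[:2] @ X

theta = 0.5
R1 = np.eye(3)
R2 = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])

rigid = pairwise_rigidity(project(R1, shape), project(R2, shape))
deformed = pairwise_rigidity(
    project(R1, shape),
    project(R2, shape + 0.5 * rng.standard_normal((3, 10))))
assert rigid < deformed  # high-rigidity pair scores lower
```

A score like this can rank frame pairs by rigidity, which is the role the contrastive loss needs: pairs with a low ratio are treated as high-rigidity (positive) pairs, and pairs with a high ratio as low-rigidity ones.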