Most learning methods for 3D data (point clouds, meshes) suffer significant performance drops when the data are not carefully aligned to a canonical orientation. Aligning real-world 3D data collected from different sources is non-trivial and typically requires manual intervention. In this paper, we propose the Adjoint Rigid Transform (ART) Network, a neural module that can be integrated with existing 3D networks to significantly boost their performance on tasks such as shape reconstruction, non-rigid registration, and latent disentanglement. ART learns to rotate input shapes to a canonical orientation, which is crucial for many tasks, by imposing a rotation-equivariance constraint on the input shapes. Remarkably, with only self-supervision, ART discovers a unique canonical orientation for both rigid and non-rigid objects, which leads to a notable boost in downstream task performance. We will release our code and pre-trained models for further research.
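The rotation-equivariance constraint can be made concrete with a small self-contained sketch: a canonicalizer is rotation-equivariant when the canonicalized shape it produces is unchanged by any rotation applied to the input. The function names below and the PCA-based stand-in for the learned network are our own illustration under that assumption, not the paper's actual architecture or training loss.

```python
import numpy as np

def random_rotation(rng):
    """Sample a random proper 3D rotation (illustrative test utility)."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))        # make the QR factorization unique
    if np.linalg.det(q) < 0:           # ensure a proper rotation, det = +1
        q[:, -1] *= -1
    return q

def canonicalize(points):
    """Hypothetical stand-in for the learned ART module: PCA-based pose.

    The sign of each principal axis is fixed from the skew of the
    projected coordinates, so the output does not depend on the
    orientation in which the shape arrives.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T
    flip = np.where((proj ** 3).sum(axis=0) < 0, -1.0, 1.0)
    vt = vt * flip[:, None]
    if np.linalg.det(vt) < 0:          # keep the axes a proper rotation
        vt[-1] *= -1
    return centered @ vt.T

def equivariance_residual(points, rotation):
    """How far canonicalization is from rotation-equivariance:
    canonicalize(X R^T) should equal canonicalize(X) for any rotation R."""
    rotated = points @ rotation.T
    return np.abs(canonicalize(rotated) - canonicalize(points)).max()
```

A learned module replaces the PCA step in practice; the self-supervision signal is precisely that `equivariance_residual` should be driven to zero for arbitrary input rotations.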