Parametrizations of data manifolds in shape spaces can be computed using the rich toolbox of Riemannian geometry. This, however, often comes with high computational costs, which raises the question of whether one can learn an efficient neural network approximation. We show that this is indeed possible for shape spaces with a special product structure, namely those smoothly approximable by a direct sum of low-dimensional manifolds. Our proposed architecture leverages this structure by separately learning approximations for the low-dimensional factors and their subsequent combination. After developing the approach as a general framework, we apply it to a shape space of triangular surfaces. Here, typical examples of data manifolds are given through datasets of articulated models and can be factorized, for example, by a Sparse Principal Geodesic Analysis (SPGA). We demonstrate the effectiveness of our proposed approach with experiments on synthetic data as well as manifolds extracted from data via SPGA.