Efficient and practical representation of geometric data is a ubiquitous problem in geometry processing. A widely used choice is to encode 3D objects through their spectral embedding, associating with each surface point the values taken at that point by a truncated set of eigenfunctions of a differential operator (typically the Laplacian). Several attempts to define new, preferable embeddings for different applications have emerged over the last decade. Still, the standard Laplacian eigenfunctions remain solidly at the top of the available solutions, despite limitations such as being restricted to near-isometries in shape matching. Recently, a new trend has shown the advantages of learning substitutes for the Laplacian eigenfunctions. At the same time, many research questions remain open: are the new bases better than the LBO eigenfunctions, and how do they relate to them? How do they behave from a functional perspective? And how can these bases be exploited in new configurations, in conjunction with additional features and descriptors? In this study, we pose these questions rigorously to improve our understanding of this emerging research direction. We show their applicative relevance in different contexts, revealing insights into these bases and exciting future directions.
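To make the notion of spectral embedding concrete, the following is a minimal sketch (not the paper's pipeline): it builds a uniform graph Laplacian from a triangle mesh, as a stand-in for the cotangent Laplace-Beltrami operator used in practice, and embeds each vertex by the values of the first k eigenfunctions. The function name and the toy tetrahedron are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def spectral_embedding(vertices, faces, k=30):
    """Embed each vertex by the first k eigenfunctions of a graph Laplacian.

    Sketch only: the uniform (combinatorial) graph Laplacian stands in for
    the cotangent Laplace-Beltrami operator typically used in practice.
    """
    n = len(vertices)
    # Collect undirected edges from the triangle faces.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    i, j = edges[:, 0], edges[:, 1]
    A = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n, n))
    A = (A + A.T).tocsr()
    A.data[:] = 1.0                                        # binary symmetric adjacency
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A    # graph Laplacian
    # Smallest-eigenvalue eigenfunctions via shift-invert near zero.
    vals, vecs = spla.eigsh(L.tocsc(), k=k, sigma=-1e-6)
    return vals, vecs  # vecs[p] is the k-dimensional spectral embedding of vertex p


if __name__ == "__main__":
    # Toy tetrahedron: 4 vertices, 4 triangular faces.
    V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    vals, phi = spectral_embedding(V, F, k=2)
    print(phi)  # each row: the 2-dimensional spectral embedding of one vertex
```

Truncating to the first k eigenfunctions gives a compact, multiscale code per point; the learned bases discussed in this work replace the columns of this eigenfunction matrix while keeping the same per-point embedding structure.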