We introduce a novel neural representation for maps between 3D shapes based on flow-matching models, which is computationally efficient and supports cross-representation shape matching without large-scale training or data-driven procedures. Each 3D shape is represented as the probability distribution induced by a continuous and invertible flow from a fixed anchor distribution. Given a source and a target shape, composing the inverse flow (source to anchor) with the forward flow (anchor to target) continuously maps points between the two surfaces. By encoding the shapes with a pointwise, task-tailored embedding, this construction provides an invertible and modality-agnostic representation of maps between shapes across point clouds, meshes, signed distance fields (SDFs), and volumetric data. The resulting representation consistently achieves high coverage and accuracy across diverse benchmarks and challenging shape-matching settings. Beyond shape matching, our framework shows promising results on other tasks, including UV mapping and registration of raw point cloud scans of human bodies.
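To make the construction concrete, the following is a minimal sketch of the composed map; the symbols $\Phi_{\mathcal{S}}$, $\Phi_{\mathcal{T}}$, and $p_A$ are notation introduced here for illustration and may differ from the paper's own. Writing $\Phi_{\mathcal{S}}$ and $\Phi_{\mathcal{T}}$ for the learned invertible flows that transport the anchor distribution $p_A$ onto the source shape $\mathcal{S}$ and the target shape $\mathcal{T}$, respectively, a point $x \in \mathcal{S}$ is mapped to $\mathcal{T}$ by
\[
  T_{\mathcal{S}\to\mathcal{T}} \;=\; \Phi_{\mathcal{T}} \circ \Phi_{\mathcal{S}}^{-1},
  \qquad
  T_{\mathcal{S}\to\mathcal{T}}(x) \in \mathcal{T} \quad \text{for } x \in \mathcal{S},
\]
and invertibility of the map follows directly from that of each flow, since $T_{\mathcal{S}\to\mathcal{T}}^{-1} = \Phi_{\mathcal{S}} \circ \Phi_{\mathcal{T}}^{-1} = T_{\mathcal{T}\to\mathcal{S}}$.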