Transformation synchronization is the problem of recovering absolute transformations from a given set of pairwise relative motions. Despite its usefulness, the problem remains challenging due to the influence of noisy and outlier relative motions, and the difficulty of modeling and suppressing them analytically with high fidelity. In this work, we avoid handcrafting robust loss functions and instead propose to learn transformation synchronization with graph neural networks (GNNs). Unlike previous works that use complicated multi-stage pipelines, we use an iterative approach in which each step consists of a single weight-shared message-passing layer that refines the absolute poses from the previous iteration by predicting an incremental update in the tangent space. To reduce the influence of outliers, the messages are weighted before aggregation. Our iterative approach alleviates the need for an explicit initialization step and performs well when initialized with identity poses. Although our approach is simple, experiments on both SO(3) and SE(3) synchronization show that it performs favorably against existing handcrafted and learned synchronization methods.
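The core iteration described above, refining absolute poses by mapping residuals to the tangent space, averaging them, and applying the update via the exponential map, can be sketched for SO(3) synchronization as follows. This is a minimal hand-rolled illustration, not the authors' GNN: the `synchronize` function, its uniform edge weights (the paper learns these weights to down-weight outliers), and the fixed step size are all assumptions introduced here for clarity.

```python
import numpy as np

def hat(w):
    """Map an axis-angle vector to its 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map: tangent vector -> rotation matrix (Rodrigues)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def log_so3(R):
    """Logarithm map: rotation matrix -> tangent vector."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def synchronize(edges, n, iters=300, step=0.5):
    """Iteratively refine absolute rotations from relative measurements.

    edges: dict mapping (i, j) -> R_ij, with R_ij ≈ R_i @ R_j.T.
    All poses start at the identity. Each sweep collects tangent-space
    residuals from neighbours, averages them (uniform weights here;
    the paper predicts per-message weights), and applies the increment
    through the exponential map.
    """
    R = [np.eye(3) for _ in range(n)]
    for _ in range(iters):
        updates = [np.zeros(3) for _ in range(n)]
        counts = [0] * n
        for (i, j), R_ij in edges.items():
            # Neighbour j's "message" to i: residual of R_i vs R_ij @ R_j.
            updates[i] += log_so3(R_ij @ R[j] @ R[i].T)
            counts[i] += 1
            # Symmetric message to j uses the inverse measurement.
            updates[j] += log_so3(R_ij.T @ R[i] @ R[j].T)
            counts[j] += 1
        for i in range(n):
            if counts[i]:
                R[i] = exp_so3(step * updates[i] / counts[i]) @ R[i]
    return R
```

On clean, consistent measurements the recovered poses match the ground truth up to a global gauge rotation, so correctness is checked on the relative rotations `R_i @ R_j.T` rather than the absolute ones.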