We present a differentiable approach to learn the probabilistic factors used for inference by a nonparametric belief propagation algorithm. Existing nonparametric belief propagation methods rely on domain-specific features encoded in the probabilistic factors of a graphical model. In this work, we replace each hand-crafted factor with a differentiable neural network, enabling the factors to be learned from labeled data using an efficient optimization routine. By combining differentiable neural networks with an efficient belief propagation algorithm, our method learns to maintain a set of marginal posterior samples using end-to-end training. We evaluate our differentiable nonparametric belief propagation (DNBP) method on a set of articulated pose tracking tasks and compare its performance with learned baselines. Results from these experiments demonstrate the effectiveness of using learned factors for tracking and suggest a practical advantage over hand-crafted approaches. The project webpage is available at: https://progress.eecs.umich.edu/projects/dnbp/ .
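To make the central idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a hand-crafted pairwise factor could be replaced by a small neural network whose output serves as an unnormalized potential, allowing gradients from a tracking loss to flow back into the factor during end-to-end training. All class and variable names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairwiseFactor(nn.Module):
    """Hypothetical learned pairwise factor: scores the compatibility of the
    states of two connected nodes (e.g., neighboring joints) with a small MLP."""

    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x_i: torch.Tensor, x_j: torch.Tensor) -> torch.Tensor:
        # Return a non-negative, unnormalized potential so it can weight
        # particles during message passing; gradients flow into the MLP.
        return torch.exp(self.net(torch.cat([x_i, x_j], dim=-1)))


# Example usage: score a batch of candidate particle pairs for two nodes.
factor = PairwiseFactor(state_dim=2)
x_i = torch.randn(128, 2)   # particles for node i
x_j = torch.randn(128, 2)   # particles for node j
weights = factor(x_i, x_j)  # differentiable potentials, shape (128, 1)
```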