We present a differentiable approach to learn the probabilistic factors used for inference by a nonparametric belief propagation algorithm. Existing nonparametric belief propagation methods rely on domain-specific features encoded in the probabilistic factors of a graphical model. In this work, we replace each hand-crafted factor with a differentiable neural network, enabling the factors to be learned from labeled data using an efficient optimization routine. By combining differentiable neural networks with an efficient belief propagation algorithm, our method learns to maintain a set of marginal posterior samples using end-to-end training. We evaluate our differentiable nonparametric belief propagation (DNBP) method on a set of articulated pose tracking tasks and compare its performance with that of a recurrent neural network. Results from this comparison demonstrate the effectiveness of using learned factors for tracking and suggest a practical advantage over hand-crafted approaches. The project webpage is available at: progress.eecs.umich.edu/projects/dnbp.
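To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a hand-crafted pairwise factor could be replaced by a small neural network whose output remains differentiable, so that it can be trained end-to-end from labeled data. The class and parameter names (`LearnedPairwiseFactor`, `pose_dim`, `hidden_dim`) are assumptions for illustration only.

```python
# Illustrative sketch: a learnable pairwise potential for use inside a
# particle-based (nonparametric) belief propagation loop.
import torch
import torch.nn as nn


class LearnedPairwiseFactor(nn.Module):
    """Scores the compatibility of pose hypotheses at two connected nodes."""

    def __init__(self, pose_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * pose_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Softplus(),  # keep the unnormalized potential non-negative
        )

    def forward(self, x_i: torch.Tensor, x_j: torch.Tensor) -> torch.Tensor:
        # x_i, x_j: (num_particles, pose_dim) samples for two neighboring nodes.
        return self.net(torch.cat([x_i, x_j], dim=-1)).squeeze(-1)


# Example usage: weight node j's particles by compatibility with node i's particles.
factor = LearnedPairwiseFactor(pose_dim=3)
particles_i = torch.randn(100, 3)
particles_j = torch.randn(100, 3)
weights = factor(particles_i, particles_j)  # differentiable, so gradients can flow
```

Because the factor's output is a differentiable function of its parameters, a tracking loss computed on the resulting marginal samples can, in principle, be backpropagated through the message-passing updates to train all factors jointly.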