The ability to leverage large-scale hardware parallelism has been one of the key enablers of the rapid recent progress in machine learning. Consequently, considerable effort has been invested into developing efficient parallel variants of classic machine learning algorithms. However, despite the wealth of knowledge on parallelization, several classic machine learning algorithms prove hard to parallelize efficiently while maintaining convergence. In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, in particular on the fundamental belief propagation algorithm. We address the challenge of efficiently parallelizing this classic paradigm by showing how to leverage scalable relaxed schedulers in this context. We present an extensive empirical study, showing that our approach outperforms previous parallel belief propagation implementations both in terms of scalability and wall-clock convergence time, on a range of practical applications.
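As a rough illustration of the scheduling idea (not the paper's implementation), the sequential Python sketch below runs residual belief propagation on a pairwise Markov random field, replacing the strict priority queue of standard residual BP with a relaxed pop that returns one of the k highest-residual messages. The function names, the top-k relaxation, and the use of the triggering residual as a proxy priority when rescheduling dependent messages are all assumptions of this sketch.

```python
# Illustrative sketch: residual belief propagation driven by a relaxed
# scheduler, simulated sequentially. Assumptions (not from the paper):
# `phi` maps each node to its unary potential vector, `psi` maps each
# undirected edge (i, j) to a pairwise potential matrix with rows indexed
# by states of i and columns by states of j.
import heapq
import random
import numpy as np

def relaxed_pop(heap, k=4):
    """Pop one of the k smallest heap entries uniformly at random.

    This stands in for a concurrent relaxed priority scheduler, which
    returns an element that is only approximately highest-priority.
    """
    top = [heapq.heappop(heap) for _ in range(min(k, len(heap)))]
    chosen = top.pop(random.randrange(len(top)))
    for item in top:  # reinsert the candidates we did not choose
        heapq.heappush(heap, item)
    return chosen

def residual_bp(phi, psi, edges, max_updates=10000, tol=1e-6):
    # Directed messages m[(i, j)]: the message node i sends to node j.
    m = {}
    for i, j in edges:
        m[(i, j)] = np.ones_like(phi[j]) / len(phi[j])
        m[(j, i)] = np.ones_like(phi[i]) / len(phi[i])

    def new_message(i, j):
        # Sum-product update: combine phi_i with all messages into i except j's.
        incoming = phi[i].copy()
        for (a, b), msg in m.items():
            if b == i and a != j:
                incoming = incoming * msg
        # Orient the pairwise potential so rows index states of i.
        pair = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
        out = pair.T @ incoming  # marginalize out the states of i
        return out / out.sum()

    # Min-heap keyed by negated residual; -inf ensures every message is
    # processed at least once. Stale duplicate entries are tolerated.
    heap = [(-np.inf, e) for e in m]
    heapq.heapify(heap)
    for _ in range(max_updates):
        if not heap:
            break
        _, (i, j) = relaxed_pop(heap)
        upd = new_message(i, j)
        res = float(np.abs(upd - m[(i, j)]).max())
        m[(i, j)] = upd
        if res > tol:
            # Messages leaving j (other than back to i) depend on m[(i, j)];
            # reschedule them, using the triggering residual as a proxy priority.
            for (a, b) in list(m):
                if a == j and b != i:
                    heapq.heappush(heap, (-res, (a, b)))
    return m
```

In a concurrent setting, multiple threads would invoke relaxed_pop on a scalable relaxed scheduler instead of a single heap; the relaxation trades strict residual ordering for reduced contention, at the cost of occasionally updating a message that is not the current highest-residual one.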