The aim of boosting is to convert a sequence of weak learners into a strong learner. At their core, these methods are inherently sequential: each weak learner is trained on a reweighting of the data determined by its predecessors. In this paper, we investigate the possibility of parallelizing boosting. Our main contribution is a strong negative result, implying that significant parallelization of boosting requires an exponential blow-up in the total computing resources needed for training.
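To make the sequential dependence concrete, the following is a minimal sketch (not the paper's construction) of AdaBoost with threshold stumps on 1-D data. The function names and the toy dataset are illustrative assumptions; note that each round's example weights are computed from the stump chosen in the previous round, which is exactly the round-to-round dependence that resists parallelization.

```python
import math

def adaboost(X, y, rounds):
    """Minimal AdaBoost with threshold stumps on 1-D data.

    Each round reweights the examples based on the stump chosen in the
    previous round -- the sequential dependence discussed in the abstract.
    """
    n = len(X)
    w = [1.0 / n] * n   # example weights, updated every round
    ensemble = []       # list of (alpha, threshold, sign)
    for _ in range(rounds):
        # Pick the stump h(x) = sign * (+1 if x > t else -1)
        # with the least weighted error under the current weights.
        best = None
        for t in sorted(set(X)):
            for s in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if s * (1 if xi > t else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = max(err, 1e-10)  # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, s))
        # Reweight: misclassified examples gain weight, so the NEXT
        # round cannot start until THIS round's stump is known.
        w = [wi * math.exp(-alpha * yi * s * (1 if xi > t else -1))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted-majority vote of the stumps."""
    score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```

For example, the interval-shaped labels below are not realizable by any single stump, yet three sequential rounds fit them exactly; no single round could be skipped or run independently of the others:

```python
ens = adaboost([0, 1, 2, 3], [1, -1, -1, 1], rounds=3)
preds = [predict(ens, x) for x in [0, 1, 2, 3]]  # matches the labels
```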