Ensembles of independently trained neural networks are a state-of-the-art approach for estimating predictive uncertainty in Deep Learning, and can be interpreted as an approximation of the posterior distribution via a mixture of delta functions. The training of ensembles relies on the non-convexity of the loss landscape and the random initialization of the individual members, which leaves the resulting posterior approximation uncontrolled. This paper proposes a novel and principled method to tackle this limitation by minimizing an $f$-divergence between the true posterior and a kernel density estimator in a function space. We analyze this objective from a combinatorial point of view and show that it is submodular with respect to the mixture components for any $f$. Subsequently, we consider the problem of greedy ensemble construction and, from the marginal gain of the total objective, derive a novel diversity term for ensemble methods. The performance of our approach is demonstrated on computer vision out-of-distribution benchmarks across a range of architectures trained on multiple datasets. The source code of our method is publicly available at https://github.com/MIPT-Oulu/greedy_ensembles_training.
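To make the greedy-construction idea concrete, the sketch below is a minimal illustration (ours, not the paper's implementation): candidate members from a pool of pre-trained models are added one at a time by maximizing the marginal gain of a set objective that combines a fit term with a diversity term. The function names, the mixture log-likelihood fit term, the L2-based diversity placeholder, and the weight `beta` are all assumptions for illustration; the paper's actual diversity term is derived from the $f$-divergence objective in function space.

```python
import numpy as np


def ensemble_objective(member_probs, labels, beta=0.1):
    """Illustrative objective: mixture log-likelihood (fit) plus a weighted
    pairwise-distance diversity term (placeholder, not the paper's term)."""
    probs = np.stack(member_probs)                       # (k, n_samples, n_classes)
    mixture = probs.mean(axis=0)                         # ensemble predictive distribution
    fit = np.mean(np.log(mixture[np.arange(len(labels)), labels] + 1e-12))
    k = probs.shape[0]
    if k < 2:
        div = 0.0
    else:
        div = np.mean([np.linalg.norm(probs[i] - probs[j])
                       for i in range(k) for j in range(i + 1, k)])
    return fit + beta * div


def greedy_select(candidate_probs, labels, ensemble_size, beta=0.1):
    """Greedy ensemble construction: at each step, add the candidate whose
    marginal gain in the objective is largest; stop if no gain is positive."""
    selected, current = [], -np.inf
    remaining = list(range(len(candidate_probs)))
    for _ in range(min(ensemble_size, len(candidate_probs))):
        scores = [(ensemble_objective([candidate_probs[i] for i in selected + [j]],
                                      labels, beta), j)
                  for j in remaining]
        best_score, best_j = max(scores)
        if selected and best_score - current <= 0:
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current = best_score
    return selected


# Example with random stand-in predictions: 3 candidates, 5 samples, 4 classes.
rng = np.random.default_rng(0)
cands = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
labels = rng.integers(0, 4, size=5)
print(greedy_select(cands, labels, ensemble_size=2))
```

In this toy setting the candidates' held-out predictive distributions stand in for the function-space evaluations; submodularity of the underlying objective is what justifies the greedy strategy with marginal gains.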