Ensembles of independently trained neural networks are a state-of-the-art approach to estimating predictive uncertainty in deep learning, and can be interpreted as an approximation of the posterior distribution via a mixture of delta functions. The training of ensembles relies on the non-convexity of the loss landscape and the random initialization of their individual members, which makes the resulting posterior approximation uncontrolled. This paper proposes a novel and principled method to tackle this limitation, minimizing an $f$-divergence between the true posterior and a kernel density estimator in function space. We analyze this objective from a combinatorial point of view and show that it is submodular with respect to the mixture components for any $f$. Subsequently, we consider the problem of ensemble construction, and from the marginal gain of the total objective we derive a novel diversity term for training ensembles greedily. The performance of our approach is demonstrated on computer vision out-of-distribution detection benchmarks across a range of architectures trained on multiple datasets. The source code of our method is publicly available at https://github.com/MIPT-Oulu/greedy_ensembles_training.
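To illustrate the greedy construction described above, the sketch below trains ensemble members one at a time on a toy regression task, penalizing each new member whenever its predictions are close, under an RBF kernel evaluated in function space, to those of already-trained members. The kernel, bandwidth, diversity weight, toy task, and architecture are all assumptions made for illustration; the paper's actual diversity term is derived from the marginal gain of the $f$-divergence objective and is not reproduced here.

```python
# Minimal sketch of greedy ensemble training with a function-space diversity penalty.
# Assumptions: RBF kernel similarity, squared-error task loss, toy 1-D regression data;
# this is NOT the paper's exact marginal-gain objective, only an illustration of the idea.
import torch
import torch.nn as nn


def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))


def rbf_similarity(f1, f2, bandwidth=1.0):
    # Similarity of two predictive functions evaluated on the same batch of inputs.
    return torch.exp(-((f1 - f2) ** 2).mean() / (2 * bandwidth ** 2))


def train_member(net, x, y, prev_preds, steps=500, lr=1e-2, diversity_weight=0.1):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = net(x)
        loss = ((pred - y) ** 2).mean()  # task loss
        if prev_preds:
            # Repulsion term: discourage overlap with previously trained members.
            loss = loss + diversity_weight * torch.stack(
                [rbf_similarity(pred, p) for p in prev_preds]).mean()
        loss.backward()
        opt.step()
    return net


# Toy data and greedy construction of a 5-member ensemble.
torch.manual_seed(0)
x = torch.linspace(-3, 3, 64).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

ensemble, cached_preds = [], []
for _ in range(5):
    net = train_member(make_net(), x, y, cached_preds)
    ensemble.append(net)
    with torch.no_grad():
        cached_preds.append(net(x))  # freeze this member's predictions for later repulsion
```

Caching each trained member's predictions and detaching them from the graph mirrors the greedy setting: earlier members stay fixed, and only the newest member is optimized against the diversity-augmented loss.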