In machine learning, an agent needs to estimate uncertainty in order to explore efficiently, adapt, and make effective decisions. A common approach to uncertainty estimation maintains an ensemble of models. In recent years, several approaches to training ensembles have been proposed, and conflicting views prevail regarding the importance of their various ingredients. In this paper, we assess the benefits of two ingredients -- prior functions and bootstrapping -- whose value has come into question. We show that prior functions can significantly improve an ensemble agent's joint predictions across inputs, and that bootstrapping affords additional benefits when the signal-to-noise ratio varies across inputs. We justify these claims with both theoretical and experimental results.
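To make the two ingredients concrete, the following is a minimal sketch, not the paper's method: an ensemble of linear models on a fixed random-feature map, where each member (a) trains on a bootstrap resample of the data and (b) carries a fixed additive prior function (here, a random untrained linear head scaled by `beta`). All names and the toy regression task are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with input-dependent (heteroscedastic) noise,
# the regime where bootstrapping is claimed to add value.
x = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * x) + rng.normal(0.0, 0.1 + 0.2 * (x > 0), size=x.shape)

# Fixed random-feature map shared by all ensemble members (hypothetical choice).
n_features, n_members, beta = 32, 10, 1.0
w = rng.normal(size=(1, n_features))
b = rng.normal(size=(n_features,))
phi = np.tanh(x @ w + b)

preds = []
for k in range(n_members):
    # Ingredient 1 -- bootstrapping: each member resamples the
    # training set with replacement.
    idx = rng.integers(0, len(x), size=len(x))
    phi_k, y_k = phi[idx], y[idx]

    # Ingredient 2 -- prior function: a fixed, untrained random head
    # added to the trainable part and never updated.
    theta_prior = rng.normal(size=(n_features, 1))

    # Fit the trainable head to the residual y - prior (ridge regression).
    resid = y_k - beta * (phi_k @ theta_prior)
    theta = np.linalg.solve(
        phi_k.T @ phi_k + 1e-3 * np.eye(n_features), phi_k.T @ resid
    )
    preds.append(phi @ theta + beta * (phi @ theta_prior))

preds = np.stack(preds)      # shape: (n_members, n_points, 1)
mean = preds.mean(axis=0)    # ensemble mean prediction
std = preds.std(axis=0)      # ensemble spread as an uncertainty proxy
```

The ensemble spread `std` is what the agent would read off as uncertainty; disagreement among members, induced by the differing bootstrap samples and prior functions, is what drives exploration.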