We focus on using the predictive uncertainty signal computed by Bayesian neural networks to guide learning on the very task the model is being trained for. Rather than resorting to costly Monte Carlo sampling of the weights, we propagate the approximate hidden-layer variance end to end through a variational Bayesian adaptation of a ResNet with attention and squeeze-and-excitation blocks, in order to identify data samples that should contribute less to the loss. We thus propose uncertainty-aware, data-specific label smoothing, in which the smoothing probability depends on this epistemic uncertainty. We show that, by explicitly using the epistemic uncertainty in the loss computation, the variational model achieves improved predictive and calibration performance. This core machine-learning methodology is demonstrated on wildlife call detection from audio recordings made by passive acoustic monitoring equipment in the animals' natural habitats, with the future goal of automating large-scale annotation in a trustworthy manner.
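The central idea of uncertainty-aware, data-specific label smoothing can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `smoothed_targets`, the normalization of variances, and the hyperparameter `eps_max` are all assumptions; the paper's smoothing function of the epistemic uncertainty may differ.

```python
# Hypothetical sketch of uncertainty-aware, data-specific label smoothing:
# the smoothing probability for each sample grows with its epistemic
# uncertainty, so highly uncertain samples receive softer targets and
# thereby contribute less sharply to the cross-entropy loss.
import numpy as np

def smoothed_targets(labels, variances, num_classes, eps_max=0.2):
    """Map per-sample epistemic variance to a per-sample smoothing factor.

    labels    : (N,) integer class labels
    variances : (N,) per-sample epistemic uncertainty estimates (>= 0)
    eps_max   : maximum smoothing probability (assumed hyperparameter)
    """
    # Normalize variances to [0, 1] so each eps_i lies in [0, eps_max].
    v = np.asarray(variances, dtype=float)
    v_norm = v / (v.max() + 1e-12)
    eps = eps_max * v_norm                        # (N,) data-specific smoothing

    one_hot = np.eye(num_classes)[labels]         # (N, C) hard targets
    uniform = np.full((len(labels), num_classes), 1.0 / num_classes)
    # Convex combination: more uncertain samples move toward the uniform prior.
    return (1.0 - eps)[:, None] * one_hot + eps[:, None] * uniform

def cross_entropy(probs, targets):
    """Cross-entropy of predicted probabilities against the soft targets."""
    return -np.mean(np.sum(targets * np.log(probs + 1e-12), axis=1))
```

A sample with near-zero epistemic variance keeps an essentially one-hot target, while the most uncertain sample is smoothed by the full `eps_max`, which reduces the loss gradient it induces.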