Active learning frameworks offer efficient data annotation without significant accuracy degradation: the model is trained on a small set of labeled data while the space of unlabeled data is explored in order to select the most informative samples for labeling. Generally speaking, representing uncertainty is crucial in any active learning framework; standard deep learning methods, however, are capable of neither representing nor manipulating model uncertainty. At the same time, from a real-world application perspective, uncertainty representation is receiving growing attention in the machine learning community. Deep Bayesian active learning frameworks, and Bayesian active learning settings in general, incorporate practical considerations into the model that allow training with small amounts of data while representing model uncertainty for further efficient training. In this paper, we briefly survey recent advances in Bayesian active learning and, in particular, deep Bayesian active learning frameworks.
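The loop described above — train on a small labeled seed set, score the unlabeled pool by uncertainty, and query the most informative point for labeling — can be sketched as follows. This is a minimal illustrative example, not a method from the surveyed papers: it assumes a synthetic dataset, a logistic-regression model, and predictive entropy as the acquisition function.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical synthetic pool of 500 points (stand-in for real unlabeled data).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Start with a small labeled seed set; the rest forms the unlabeled pool.
labeled = list(rng.choice(len(X), size=10, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):  # five acquisition rounds
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Predictive entropy as an uncertainty-based acquisition score.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Query the most uncertain pool point and "label" it (oracle = y).
    query = pool[int(np.argmax(entropy))]
    labeled.append(query)
    pool.remove(query)

final_acc = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled]).score(X, y)
print(len(labeled), round(final_acc, 2))
```

A deep Bayesian variant would replace the logistic-regression posterior probabilities with Monte Carlo estimates from an approximate Bayesian neural network (e.g. averaging stochastic forward passes), but the acquisition loop itself has the same structure.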