We study different aspects of active learning with deep neural networks in a consistent and unified way. i) We investigate incremental and cumulative training modes, which specify how newly labeled data are used for training. ii) We study active learning with respect to model configuration, such as the number of training epochs, the number of neurons, and the choice of batch size. iii) We examine in detail the behavior of query strategies and their corresponding informativeness measures, and accordingly propose more efficient querying procedures. iv) We perform statistical analyses, e.g., on actively learned classes and on test error estimation, that reveal several insights about active learning. v) We investigate how active learning with neural networks can benefit from pseudo-labels as proxies for actual labels.
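To make point iii) concrete, a common informativeness measure for querying is the predictive entropy of the model's softmax outputs: the learner requests labels for the unlabeled samples it is most uncertain about. The sketch below is a minimal illustration of entropy-based querying using NumPy; the function name and the example probabilities are hypothetical, not taken from the paper.

```python
import numpy as np

def entropy_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Select the k unlabeled samples with the highest predictive
    entropy (one common informativeness measure for querying).

    probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns indices of the k most uncertain samples, most uncertain first.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[-k:][::-1]

# Hypothetical usage: query 2 of 4 unlabeled samples.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
print(entropy_query(probs, 2))  # -> [1 3]
```

Other informativeness measures (e.g., least confidence or margin sampling) plug into the same selection loop by replacing the entropy computation.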