We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms in the non-parametric streaming setting. In particular, we introduce two regret metrics defined with respect to the population loss, which are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work. The proposed algorithm leverages the powerful representation of NNs for both exploitation and exploration, employs a query decision-maker tailored to $k$-class classification problems with a performance guarantee, utilizes the full feedback, and updates parameters in a more practical and efficient manner. These careful designs lead to a better regret upper bound, improving by a multiplicative factor of $O(\log T)$ and removing the curse of both input dimensionality and the complexity of the function to be learned. Furthermore, we show that, under the hard-margin setting for classification, the algorithm achieves the same performance as the Bayes-optimal classifier in the long run. Finally, we conduct extensive experiments to evaluate the proposed algorithm against SOTA baselines and demonstrate its improved empirical performance.
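For intuition, a population-loss regret of this kind is often written as the cumulative gap in expected $0$-$1$ loss between the learner's predictions and those of the Bayes-optimal classifier. The following is only an illustrative sketch with assumed notation ($\widehat{y}_t$ the learner's prediction, $y_t^{\ast}$ the Bayes-optimal prediction at round $t$, over $T$ streaming rounds), not necessarily the exact metrics defined later in the paper:
\[
\mathbf{R}_T \;=\; \sum_{t=1}^{T} \Big( \mathbb{P}\big(y_t \neq \widehat{y}_t \mid \mathbf{x}_t\big) \;-\; \mathbb{P}\big(y_t \neq y_t^{\ast} \mid \mathbf{x}_t\big) \Big).
\]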