We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting. In particular, we introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work. The proposed algorithm then leverages the representation power of NNs for both exploitation and exploration, employs a query decision-maker tailored to $k$-class classification problems with a performance guarantee, utilizes the full feedback, and updates parameters in a more practical and efficient manner. These careful designs lead to an instance-dependent regret upper bound, roughly improving on prior bounds by a multiplicative factor of $O(\log T)$ and removing the curse of input dimensionality. Furthermore, we show that the algorithm achieves the same performance as the Bayes-optimal classifier in the long run under the hard-margin setting for classification. Finally, we conduct extensive experiments to evaluate the proposed algorithm against SOTA baselines, demonstrating its improved empirical performance.
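To make the described design concrete, the following is a minimal sketch (not the authors' exact method) of a stream-based active learning loop with separate exploitation and exploration networks and a margin-based query rule for $k$-class classification. All names and parameters (f_exploit, f_explore, the threshold gamma, network widths) are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of NN-based streaming active learning, assuming:
# - an exploitation network and an exploration network whose scores are summed,
# - a query rule that asks for the label when the top-two class scores are close,
# - full k-class feedback and an SGD update on queried rounds.
import torch
import torch.nn as nn

k, d, gamma = 3, 10, 0.1  # classes, input dim, query-margin threshold (assumed values)

f_exploit = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
f_explore = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
opt = torch.optim.SGD(
    list(f_exploit.parameters()) + list(f_explore.parameters()), lr=1e-2
)
loss_fn = nn.CrossEntropyLoss()

def step(x, get_label):
    """Process one streaming instance x; get_label() is called only if we query."""
    scores = f_exploit(x) + f_explore(x)      # exploitation + exploration score per class
    top2 = torch.topk(scores, 2).values
    query = bool((top2[0] - top2[1]) < gamma)  # query when the decision is ambiguous
    pred = int(torch.argmax(scores))
    if query:
        y = get_label()                        # full k-class feedback on queried rounds
        opt.zero_grad()
        loss = loss_fn(scores.unsqueeze(0), torch.tensor([y]))
        loss.backward()
        opt.step()
    return pred, query

# Example usage on a synthetic stream with random labels (placeholder oracle).
for t in range(5):
    x = torch.randn(d)
    pred, queried = step(x, get_label=lambda: int(torch.randint(k, (1,)).item()))
```

The split into two networks mirrors the exploitation/exploration roles mentioned in the abstract; the precise query decision-maker, feedback usage, and parameter-update schedule in the paper differ from this simplified loop.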