We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms in the non-parametric streaming setting. In particular, we introduce two regret metrics, defined via the minimization of the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work. The proposed algorithm leverages the powerful representation capability of NNs for both exploitation and exploration, employs a query decision-maker tailored to $k$-class classification problems with a performance guarantee, utilizes the full feedback, and updates parameters in a more practical and efficient manner. These careful designs lead to a better regret upper bound, improving on prior results by a multiplicative factor of $O(\log T)$ and removing the curse of input dimensionality. Furthermore, we show that, under the hard-margin setting in classification problems, the algorithm achieves the same performance as the Bayes-optimal classifier in the long run. Finally, we conduct extensive experiments to evaluate the proposed algorithm against SOTA baselines and demonstrate its improved empirical performance.