We study fast rates of convergence in the setting of nonparametric online regression, namely where regret is defined with respect to an arbitrary function class which has bounded complexity. Our contributions are two-fold:

- In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which gets a near-optimal mistake bound in terms of the sequential fat-shattering dimension of the hypothesis class. In the setting of online classification with a class of Littlestone dimension $d$, our bound reduces to $d \cdot {\rm poly} \log T$. This result answers a question as to whether proper learners could achieve near-optimal mistake bounds; previously, even for online classification, the best known mistake bound was $\tilde O(\sqrt{dT})$. Further, for the real-valued (regression) setting, the optimal mistake bound was not even known for improper learners, prior to this work.
- Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension $d$, for which each player achieves regret $\tilde O(d^{3/4} \cdot T^{1/4})$. This result generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from $O(\sqrt{T})$ in the adversarial setting to $O(T^{1/4})$ in the game setting.

To establish the above results, we introduce several new techniques, including: a hierarchical aggregation rule to achieve the optimal mistake bound for real-valued classes, a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021), an approach to show that the output of such nonparametric learning algorithms is stable, and a proof that the minimax theorem holds in all online learnable games.
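As background intuition for the mistake bounds above, the following is a minimal sketch (not the paper's algorithm) of the classical Halving algorithm for realizable online classification over a finite class $H$: predicting by majority vote over the consistent hypotheses guarantees at most $\log_2 |H|$ mistakes, the finite-class analogue of the Littlestone-dimension bounds discussed here. The hypothesis representation below (callables returning $\{0,1\}$) is an assumption made for illustration.

```python
# Illustrative sketch: the classical Halving algorithm for the realizable
# online classification setting with a finite hypothesis class.
# Each mistake eliminates at least half of the version space, so the total
# number of mistakes is at most log2(|hypotheses|).
def halving_learner(hypotheses, stream):
    """hypotheses: list of callables x -> {0, 1} (assumed representation);
    stream: iterable of (x, y) pairs with y realized by some hypothesis."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        # Predict by majority vote over the current version space.
        votes = sum(h(x) for h in version_space)
        pred = 1 if 2 * votes >= len(version_space) else 0
        if pred != y:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes
```

For example, with the 8 threshold classifiers on $\{0,\dots,7\}$ and a stream labeled by one of them, the learner makes at most $\log_2 8 = 3$ mistakes regardless of the order of the examples.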