We initiate a study of computable online (c-online) learning, which we analyze under varying requirements for "optimality" in terms of the mistake bound. Our main contribution is to give a necessary and sufficient condition for optimal c-online learning and to show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning. Furthermore, we introduce anytime optimal (a-optimal) online learning, a more natural conceptualization of "optimality" and a generalization of Littlestone's Standard Optimal Algorithm. We show the existence of a computational separation between a-optimal and optimal online learning, proving that a-optimal online learning is computationally more difficult. Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with a finite mistake bound. Exploring the relationship between c-online and CPAC learning suggests a potential avenue for strengthening this result: we show that c-online learning is as difficult as improper CPAC learning.
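As background for the Littlestone dimension referenced throughout, here is a minimal sketch (not part of the paper, and restricted to finite classes over finite domains for illustration) of the standard mistake-tree recursion: Ldim(H) >= d+1 iff some point x splits H into two nonempty subclasses, each of Littlestone dimension >= d.

```python
import itertools
from functools import lru_cache

def ldim(H, X):
    """Littlestone dimension of a finite class H of binary hypotheses,
    each represented as a tuple of {0,1} labels indexed by the points in X.
    Uses the recursion: Ldim(H) = max over points x that split H into two
    nonempty subclasses H0, H1 of 1 + min(Ldim(H0), Ldim(H1)); 0 otherwise."""
    @lru_cache(maxsize=None)
    def rec(Hf):
        if len(Hf) <= 1:
            return 0
        best = 0
        for x in X:
            H1 = frozenset(h for h in Hf if h[x] == 1)
            H0 = Hf - H1
            if H0 and H1:
                best = max(best, 1 + min(rec(H0), rec(H1)))
        return best
    return rec(frozenset(H))

X = range(4)

# Singletons over 4 points: one mistake always suffices, so Ldim = 1.
singletons = [tuple(1 if j == i else 0 for j in X) for i in X]
print(ldim(singletons, X))  # 1

# The full cube {0,1}^4: the adversary can force 4 mistakes, so Ldim = 4.
cube = list(itertools.product([0, 1], repeat=4))
print(ldim(cube, X))        # 4
```

The recursion also underlies Littlestone's Standard Optimal Algorithm mentioned above, which at each round predicts the label whose consistent subclass has the larger Littlestone dimension; the computability barriers studied in the paper arise precisely because this quantity need not be computable for general classes.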