We initiate the study of smoothed analysis for the sequential probability assignment problem with contexts. We study information-theoretically optimal minimax rates as well as an algorithmic reduction framework built around a maximum likelihood estimation (MLE) oracle. Our approach establishes a general-purpose reduction from minimax rates for sequential probability assignment against smoothed adversaries to minimax rates for transductive learning. This leads to optimal fast (logarithmic) rates for parametric classes and classes with finite VC dimension. On the algorithmic front, we develop an algorithm that efficiently queries the MLE oracle for general function classes. We show that under general conditions this algorithmic approach yields sublinear regret.
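To make the oracle-based setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a follow-the-leader style learner that refits an MLE oracle on the observed history each round and is scored by log loss; the Bernoulli label model and the logistic `mle_oracle` are assumptions introduced only for this example.

```python
# Illustrative sketch, not the paper's algorithm: an oracle-based learner
# for sequential probability assignment under log loss.
import numpy as np

def mle_oracle(xs, ys):
    """Hypothetical MLE oracle: fit a logistic model p(y=1|x) on past data."""
    if len(xs) == 0:
        return lambda x: 0.5                  # no history yet: uniform prediction
    X, Y = np.array(xs), np.array(ys)
    w = np.zeros(X.shape[1])
    for _ in range(200):                      # plain gradient ascent on the log-likelihood
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += 0.1 * X.T @ (Y - p) / len(Y)
    return lambda x: 1.0 / (1.0 + np.exp(-x @ w))

def sequential_probability_assignment(contexts, labels):
    """Run the oracle-based learner; return cumulative log loss.
    (Regret would subtract the loss of the best model in hindsight.)"""
    xs, ys, total_log_loss = [], [], 0.0
    for x_t, y_t in zip(contexts, labels):
        predict = mle_oracle(xs, ys)          # refit the oracle on the history so far
        p_t = np.clip(predict(x_t), 1e-6, 1 - 1e-6)
        total_log_loss += -(y_t * np.log(p_t) + (1 - y_t) * np.log(1 - p_t))
        xs.append(x_t); ys.append(y_t)        # the true label is then revealed
    return total_log_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
    print("cumulative log loss:", sequential_probability_assignment(X, y))
```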