Offset Rademacher complexities have been shown to imply sharp, data-dependent upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that in the statistical setting, the offset complexity upper bound can be generalized to any loss satisfying a certain uniform convexity condition. Amazingly, this condition is shown to also capture exponential concavity and self-concordance, uniting several apparently disparate results. By a unified geometric argument, these bounds translate directly to improper learning in a non-convex class using Audibert's "star algorithm." As applications, we recover the optimal rates for proper and improper learning with the $p$-loss, $1 < p < \infty$, closing the gap for $p > 2$, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models.