Offset Rademacher complexities have been shown to provide tight upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that the offset complexity can be generalized to any loss that satisfies a certain general convexity condition. Further, we show that this condition is closely related to both exponential concavity and self-concordance, unifying apparently disparate results. By a novel geometric argument, many of our bounds translate to improper learning in a non-convex class with Audibert's star algorithm. Thus, the offset complexity provides a versatile analytic tool that covers both convex empirical risk minimization and improper learning under entropy conditions. Applying the method, we recover the optimal rates for proper and improper learning with the $p$-loss for $1 < p < \infty$, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models.
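For orientation, a minimal sketch of the central quantity (not stated in the abstract itself): following Liang, Rakhlin, and Sridharan (2015), the offset Rademacher complexity of a class $\mathcal{F}$ with offset parameter $c > 0$ penalizes the usual Rademacher average by a quadratic term,
\[
\mathfrak{R}^{\mathrm{off}}_n(\mathcal{F}; c) \;=\; \mathbb{E}_{\varepsilon}\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n}\Big(\varepsilon_i f(x_i) - c\, f(x_i)^2\Big),
\]
where $\varepsilon_1,\dots,\varepsilon_n$ are i.i.d. Rademacher signs. The generalization described above replaces the quadratic offset, which is tailored to the square loss, with an offset induced by the convexity condition on the loss.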