Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class. The emergence of risk-sensitive learning requires generalization guarantees for functionals of the loss distribution beyond the expectation. While prior works specialize to uniform convergence of particular functionals, our work provides uniform convergence for a general class of Hölder risk functionals, for which closeness in the Cumulative Distribution Function (CDF) of the loss entails closeness in risk. We establish the first uniform convergence results for estimating the CDF of the loss distribution, yielding guarantees that hold simultaneously over all Hölder risk functionals and over all hypotheses. Thus licensed to perform empirical risk minimization, we develop practical gradient-based methods for minimizing distortion risks (a widely studied subset of Hölder risks that subsumes spectral risks, including the mean, conditional value at risk, cumulative prospect theory risks, and others) and provide convergence guarantees. In experiments, we demonstrate the efficacy of our learning procedure, both in settings where uniform convergence results hold and in high-dimensional settings with deep networks.
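To make the distortion-risk minimization concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it uses the standard plug-in (L-statistic) estimate of a distortion risk, sorting the per-example losses and weighting the order statistics by increments of a distortion function g, then minimizing it by gradient descent. The function names (`distortion_risk`, `cvar_g`) and the toy linear model are illustrative assumptions.

```python
import torch

def distortion_risk(losses: torch.Tensor, g) -> torch.Tensor:
    """Plug-in estimate of a distortion risk: the i-th smallest loss is
    weighted by the increment g(i/n) - g((i-1)/n) of the distortion
    function g (non-decreasing on [0, 1], with g(0) = 0 and g(1) = 1)."""
    n = losses.numel()
    sorted_losses, _ = torch.sort(losses)          # gradients flow through the sort
    grid = torch.linspace(0.0, 1.0, n + 1)
    weights = g(grid[1:]) - g(grid[:-1])           # increments of the distortion function
    return (weights * sorted_losses).sum()

# Example distortion functions (illustrative choices):
mean_g = lambda u: u                               # identity distortion recovers the mean
cvar_g = lambda u, alpha=0.9: torch.clamp((u - alpha) / (1 - alpha), 0.0, 1.0)  # CVaR at level alpha

# Gradient-based minimization over a toy linear model:
torch.manual_seed(0)
X, y = torch.randn(256, 10), torch.randn(256)
w = torch.zeros(10, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(100):
    losses = (X @ w - y) ** 2                      # per-example squared losses
    risk = distortion_risk(losses, cvar_g)         # differentiable almost everywhere
    opt.zero_grad()
    risk.backward()
    opt.step()
```

Because the estimator is a weighted sum of sorted losses, it is differentiable almost everywhere in the model parameters, which is what makes direct gradient-based minimization of such risks practical, including with deep networks.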