Many novel notions of "risk" (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. We study a complementary new risk class that penalizes loss deviations in a bi-directional manner, while having more flexibility in terms of tail sensitivity than is offered by mean-variance. This class lets us derive high-probability learning guarantees without explicit gradient clipping, and empirical tests using both simulated and real data illustrate a high degree of control over key properties of the test loss distribution incurred by gradient-based learners.
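To make the contrast concrete, below is a minimal numerical sketch (assuming NumPy). CVaR responds only to the upper tail of the loss distribution, whereas a bi-directional dispersion term penalizes deviations on both sides of a location parameter, with sub-quadratic growth giving milder tail sensitivity than variance. The specific penalty rho (a pseudo-Huber function), the location theta, and the weights eta and sigma are illustrative assumptions for this sketch, not the paper's exact risk definition.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean of the worst (1 - alpha) fraction
    of losses. Sensitive only to the upper tail; ignores the downside."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def bidirectional_risk(losses, theta=None, eta=1.0, sigma=1.0):
    """Illustrative bi-directional dispersion risk: mean loss plus a
    penalty applied to deviations on BOTH sides of a location theta.
    Here rho is a pseudo-Huber function, so the penalty grows roughly
    linearly in the tails (milder than the quadratic growth of
    variance) while still punishing downside spread, unlike CVaR."""
    losses = np.asarray(losses)
    if theta is None:
        theta = losses.mean()
    u = (losses - theta) / sigma
    rho = np.sqrt(1.0 + u**2) - 1.0  # symmetric, sub-quadratic tails
    return losses.mean() + eta * sigma * rho.mean()

rng = np.random.default_rng(0)
# Two loss distributions with the same mean but different spread.
tight = rng.normal(loc=1.0, scale=0.1, size=100_000)
spread = rng.normal(loc=1.0, scale=2.0, size=100_000)
for name, x in [("tight", tight), ("spread", spread)]:
    print(f"{name}: mean={x.mean():.3f}, CVaR={cvar(x):.3f}, "
          f"bi-dir risk={bidirectional_risk(x):.3f}")
```

Running this shows the intended behavior: both distributions have the same mean, CVaR flags only the heavier upper tail of the spread-out losses, and the bi-directional risk penalizes the spread-out distribution for deviations in both directions while growing more slowly in the tails than a variance penalty would.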