In this work, we consider learning problems under a wide class of spectral risk (or "L-risk") functions, in which a Lipschitz-continuous spectral density flexibly assigns weight to extreme loss values. We obtain excess risk guarantees for a derivative-free learning procedure under unbounded, heavy-tailed loss distributions, and propose a computationally efficient implementation that empirically outperforms traditional risk minimizers at balancing spectral risk and misclassification error.
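To make the objective concrete, here is a minimal NumPy sketch of the empirical spectral risk of a loss sample: sort the losses and take a weighted average, with weights given by a spectral density evaluated at quantile levels. The function name `spectral_risk` and the example density `sigma(u) = 2u` (linear, hence Lipschitz, and integrating to 1 on [0, 1]) are illustrative choices, not the paper's specific construction.

```python
import numpy as np

def spectral_risk(losses, sigma):
    """Empirical spectral risk of a loss sample.

    Sorts the losses in increasing order and averages them with
    weights sigma(u_i), where u_i are midpoint quantile levels.
    An increasing density sigma puts more weight on extreme losses.
    """
    n = len(losses)
    sorted_losses = np.sort(losses)
    u = (np.arange(1, n + 1) - 0.5) / n   # midpoint quantile levels in (0, 1)
    w = sigma(u)
    w = w / w.sum()                        # normalize the discrete weights
    return float(np.dot(w, sorted_losses))

# Illustrative Lipschitz density: sigma(u) = 2u up-weights large losses.
losses = np.array([1.0, 2.0, 3.0, 4.0])
risk = spectral_risk(losses, lambda u: 2.0 * u)
```

With the constant density `sigma(u) = 1`, the weights are uniform and the functional reduces to the ordinary mean loss; increasing densities like `2u` recover risk-averse criteria that emphasize the right tail.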