As a predictor's quality is often assessed by means of its risk, it is natural to regard risk consistency as a desirable property of learning methods, and many such methods have indeed been shown to be risk consistent. The first aim of this paper is to establish the close connection between risk consistency and $L_p$-consistency for a considerably wider class of loss functions than has been done before. The attempt to transfer this connection to shifted loss functions surprisingly reveals that this shift does not reduce the assumptions needed on the underlying probability measure to the same extent as it does for many other results. The results are applied to regularized kernel methods such as support vector machines.
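To fix ideas, the notions compared here can be sketched in standard notation; this is a hedged sketch, not a quotation from the paper, and the precise conditions on the loss $L$ and the probability measure $P$ are those stated in the body of the paper. Writing the $L$-risk of a predictor $f$ with respect to $P$ as
\[
\mathcal{R}_{L,P}(f) \;:=\; \int_{X \times Y} L\bigl(x, y, f(x)\bigr)\,\mathrm{d}P(x,y),
\qquad
\mathcal{R}^{*}_{L,P} \;:=\; \inf_{f\colon X \to \mathbb{R}\ \text{measurable}} \mathcal{R}_{L,P}(f),
\]
a learning method producing predictors $f_n$ is called risk consistent if $\mathcal{R}_{L,P}(f_n) \to \mathcal{R}^{*}_{L,P}$ in probability, and $L_p$-consistent if $\lVert f_n - f^{*}_{L,P} \rVert_{L_p(P_X)} \to 0$ in probability, where $f^{*}_{L,P}$ denotes a Bayes predictor and $P_X$ the marginal of $P$ on $X$. The shifted loss referred to above is $L^{\star}(x,y,t) := L(x,y,t) - L(x,y,0)$, which for Lipschitz continuous losses keeps risks finite without moment assumptions on $P$. The regularized kernel methods treated at the end are, in the usual formulation, estimators of the form
\[
f_{D,\lambda} \;:=\; \operatorname*{arg\,min}_{f \in H} \; \mathcal{R}_{L,D}(f) + \lambda \lVert f \rVert_H^{2},
\]
with $H$ a reproducing kernel Hilbert space, $\mathcal{R}_{L,D}$ the empirical risk over the training sample $D$, and $\lambda > 0$ a regularization parameter; support vector machines arise from particular choices of $L$ and $H$.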