Robust loss functions are essential for training deep neural networks with better generalization in the presence of noisy labels. Symmetric loss functions have been confirmed to be robust to label noise, but the symmetric condition is overly restrictive. In this work, we propose a new class of loss functions, namely \textit{asymmetric loss functions}, which are robust to label noise of various types. We investigate the general theoretical properties of asymmetric loss functions, including classification calibration, excess risk bounds, and noise tolerance. We also introduce the asymmetry ratio to measure the asymmetry of a loss function; empirical results show that a higher ratio provides better noise tolerance. Moreover, we modify several commonly used loss functions and establish the necessary and sufficient conditions for them to be asymmetric. Experimental results on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods. The code is available at \href{https://github.com/hitcszx/ALFs}{https://github.com/hitcszx/ALFs}.
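To make the idea of an asymmetric modification of a common loss concrete, the following is a minimal, hypothetical PyTorch sketch of an AGCE-style (asymmetric generalized cross entropy) criterion. The class name \texttt{AGCELoss}, the parameters \texttt{a} and \texttt{q}, and their default values are illustrative assumptions, not the authors' reference implementation; see the linked ALFs repository for the actual code and the formal asymmetry conditions on the parameters.

\begin{verbatim}
# Minimal sketch of an AGCE-style asymmetric loss (illustrative, not the
# authors' reference implementation; parameter names/defaults are assumptions).
import torch
import torch.nn.functional as F


class AGCELoss(torch.nn.Module):
    """Bounded, AGCE-style loss on the predicted probability p_y of the
    labeled class:  loss = ((a + 1)**q - (a + p_y)**q) / q,  a > 0, q > 0."""

    def __init__(self, a: float = 1.0, q: float = 0.5):
        super().__init__()
        self.a = a
        self.q = q

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(logits, dim=1)
        # Probability assigned to the (possibly noisy) labeled class.
        p_y = probs.gather(1, targets.view(-1, 1)).squeeze(1)
        loss = ((self.a + 1.0) ** self.q - (self.a + p_y) ** self.q) / self.q
        return loss.mean()


# Usage sketch: drop in as a replacement for the standard criterion.
# criterion = AGCELoss(a=1.0, q=0.5)
# loss = criterion(model(images), noisy_labels)
\end{verbatim}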