We show when maximizing a properly defined $f$-divergence measure between a classifier's predictions and the supervised labels is robust to label noise. Leveraging its variational form, we derive a useful decoupling property for a family of $f$-divergence measures in the presence of label noise: the divergence decomposes into a linear combination of the variational difference defined on the clean distribution and a bias term introduced by the noise. This derivation helps us analyze the robustness of different $f$-divergence functions. With the established robustness, this family of $f$-divergence functions serves as useful metrics for the problem of learning with noisy labels, without requiring the specification of the labels' noise rates. When a divergence is not robust, we propose fixes to make it so. In addition to the analytical results, we present thorough experimental evidence. Our code is available at https://github.com/UCSC-REAL/Robust-f-divergence-measures.
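To make the variational form concrete, here is a minimal sketch (not the paper's implementation) using KL divergence, one member of the $f$-divergence family, on discrete distributions. The variational form is $D_f(P\|Q) = \sup_g \mathbb{E}_P[g] - \mathbb{E}_Q[f^*(g)]$, where $f^*$ is the convex conjugate of $f$; for KL, $f(u) = u\log u$, $f^*(t) = e^{t-1}$, and the supremum is attained at the witness $g = 1 + \log(p/q)$. The function names below are illustrative, not from the paper's codebase.

```python
import numpy as np

def kl_divergence(p, q):
    """Closed-form KL(P||Q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def variational_kl(p, q):
    """KL(P||Q) recovered via the variational form at the optimal witness.

    D_f(P||Q) = E_P[g] - E_Q[f*(g)] with g = 1 + log(p/q) and
    f*(t) = exp(t - 1) (convex conjugate of f(u) = u log u).
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    g = 1.0 + np.log(p / q)       # optimal witness for KL
    f_star_g = np.exp(g - 1.0)    # f*(g) = p/q pointwise
    return float(np.sum(p * g) - np.sum(q * f_star_g))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]
print(kl_divergence(p, q), variational_kl(p, q))  # the two values agree
```

In practice the witness $g$ is parameterized by a neural network and the variational difference is maximized by gradient ascent; the sketch uses the analytically optimal witness only to verify that the variational value matches the closed form.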