In computer vision, it is often observed that formulating regression problems as classification tasks yields better performance. We investigate this curious phenomenon and provide a derivation showing that classification, with a cross-entropy loss, outperforms regression with a mean squared error loss in its ability to learn high-entropy feature representations. Based on this analysis, we propose an ordinal entropy loss that encourages a higher-entropy feature space while maintaining ordinal relationships, improving performance on regression tasks. Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression.
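To make the idea concrete, the following is a minimal illustrative sketch (not the paper's actual formulation) of a loss that combines two terms: a diversity term that rewards spread-out, high-entropy features, and an ordinal term that penalizes feature distances that disagree with label distances. All function and variable names here are hypothetical; the weighting `lam` is an assumed hyperparameter.

```python
import numpy as np

def ordinal_entropy_loss(features, labels, lam=1.0):
    """Hypothetical sketch of an ordinal entropy loss.

    features: (n, d) array of sample feature vectors.
    labels:   (n,) array of scalar regression targets.
    Returns a scalar: lower is better. The -diversity term pushes
    features apart (higher entropy); the tightness term keeps pairwise
    feature distances proportional to pairwise label distances.
    """
    n = len(labels)
    # L2-normalize features so distances are scale-free
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Pairwise feature distances and pairwise label distances
    fd = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    ld = np.abs(labels[:, None] - labels[None, :])
    iu = np.triu_indices(n, k=1)  # unique pairs only
    diversity = fd[iu].mean()     # larger -> more spread-out features
    # Normalize label gaps to [0, 1] and compare against feature gaps
    ld_n = ld[iu] / (ld[iu].max() + 1e-8)
    tightness = ((fd[iu] - fd[iu].max() * ld_n) ** 2).mean()
    return -diversity + lam * tightness
```

With fully collapsed features the loss is zero, while spread-out features whose distances track the label ordering drive it negative, which is the intended direction of optimization in this sketch.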