The data processing inequality is an information-theoretic principle stating that the information content of a signal cannot be increased by processing the observations. In particular, it suggests that there is no benefit in enhancing the signal or encoding it before addressing a classification problem. This assertion can indeed be proven for the optimal Bayes classifier. However, in practice, it is common to perform "low-level" tasks before "high-level" downstream tasks, despite the remarkable capabilities of modern deep neural networks. In this paper, we aim to understand when and why low-level processing can be beneficial for classification. We present a comprehensive theoretical study of a binary classification setup, in which we consider a classifier that is tightly connected to the optimal Bayes classifier and converges to it as the number of training samples increases. We prove that for any finite number of training samples, there exists a pre-classification processing that improves the classification accuracy. We also explore the effect of class separation, training set size, and class balance on the relative gain from this procedure. We support our theory with an empirical investigation of this setup. Finally, we conduct an empirical study investigating the effect of denoising and encoding on the performance of practical deep classifiers on benchmark datasets. Specifically, we vary the training set size, the class distribution, and the noise level, and demonstrate trends consistent with our theoretical results.
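For reference, the principle invoked in the opening sentence admits the following standard textbook formulation (the notation here is ours: X is the class label, Y the observation, g an arbitrary measurable processing, and I(·;·) mutual information):

```latex
% Data processing inequality: for a Markov chain X -> Y -> g(Y),
% processing cannot increase mutual information,
\[
  I\big(X;\, g(Y)\big) \;\le\; I(X;\, Y),
\]
% and, correspondingly, pre-processing cannot reduce the minimal
% probability of error attained by the optimal Bayes classifier:
\[
  P_e^{\mathrm{Bayes}}\big(g(Y)\big) \;\ge\; P_e^{\mathrm{Bayes}}(Y).
\]
```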
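To make the finite-sample phenomenon concrete, the following is a minimal, hypothetical simulation sketch, not the paper's actual experimental setup: a plug-in nearest-mean classifier for two isotropic Gaussian classes with sparse means, evaluated with and without a simple soft-thresholding denoiser applied before classification. All parameter values (the dimension d, the sample count n_train, and the threshold t) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) problem sizes: high dimension, few training samples.
d, n_train, n_test, sigma = 200, 10, 2000, 1.0

# Two classes: zero mean vs. a sparse mean with a few informative coordinates.
mu0 = np.zeros(d)
mu1 = np.zeros(d)
mu1[:5] = 2.0

def sample(mu, n):
    # Draw n observations from N(mu, sigma^2 * I).
    return mu + sigma * rng.standard_normal((n, d))

def soft_threshold(x, t):
    # Coordinate-wise soft-thresholding: a simple sparsity-exploiting denoiser.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def nearest_mean_accuracy(train0, train1, test0, test1):
    # Plug-in version of the Bayes rule for equal priors and isotropic noise:
    # classify each test point by its closer estimated class mean.
    m0, m1 = train0.mean(axis=0), train1.mean(axis=0)
    def predict(x):
        return (np.linalg.norm(x - m1, axis=1)
                < np.linalg.norm(x - m0, axis=1)).astype(int)
    correct = (predict(test0) == 0).sum() + (predict(test1) == 1).sum()
    return correct / (len(test0) + len(test1))

tr0, tr1 = sample(mu0, n_train), sample(mu1, n_train)
te0, te1 = sample(mu0, n_test), sample(mu1, n_test)

raw = nearest_mean_accuracy(tr0, tr1, te0, te1)
t = 1.0  # threshold level; a tunable assumption
den = nearest_mean_accuracy(*(soft_threshold(a, t) for a in (tr0, tr1, te0, te1)))

print(f"plug-in classifier, raw inputs:      {raw:.3f}")
print(f"plug-in classifier, denoised inputs: {den:.3f}")
```

In this regime, the denoiser suppresses noise in the many uninformative coordinates, which typically raises the plug-in classifier's test accuracy; as n_train grows, the plug-in rule approaches the optimal Bayes classifier and the gain from pre-processing shrinks, in line with the trends described above.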