The rising performance of deep neural networks is often empirically attributed to an increase in the available computational power, which allows complex models to be trained on large amounts of annotated data. However, increased model complexity makes the deployment of modern neural networks costly, while gathering such amounts of data without label noise comes at a high price. In this work, we study the ability of compression methods to tackle both of these problems at once. We hypothesize that quantization-aware training, by restricting the expressivity of neural networks, acts as a regularizer. Thus, it may help fight overfitting on noisy data while also allowing the model to be compressed at inference. We first validate this claim in a controlled experiment with manually introduced label noise. We then test the proposed method on Facial Action Unit detection, where labels are typically noisy due to the subtlety of the task. In all cases, our results suggest that quantization significantly improves performance compared with existing baselines, including regularization as well as other compression methods.
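To make the mechanism concrete, the snippet below is a minimal sketch of quantization-aware training via "fake" weight quantization with a straight-through estimator; it is an illustration only, not the paper's implementation. The 4-bit setting, layer sizes, and toy training loop on random labels are assumptions chosen for brevity.

```python
# Minimal sketch of quantization-aware training: weights are uniformly
# quantized on the forward pass, while gradients pass through unchanged
# (straight-through estimator). The coarse weight grid limits how finely
# the model can fit (noisy) labels, acting as a form of regularization.
import torch
import torch.nn as nn


class FakeQuantize(torch.autograd.Function):
    """Quantize a tensor to `num_bits` levels on the forward pass;
    use the identity gradient on the backward pass (STE)."""

    @staticmethod
    def forward(ctx, x, num_bits=4):
        qmax = 2 ** num_bits - 1
        scale = (x.max() - x.min()).clamp(min=1e-8) / qmax
        zero_point = x.min()
        q = torch.round((x - zero_point) / scale)
        return q * scale + zero_point  # dequantized ("fake-quantized") values

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through: pass gradient unchanged


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized during training."""

    def __init__(self, in_features, out_features, num_bits=4):
        super().__init__(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuantize.apply(self.weight, self.num_bits)
        return nn.functional.linear(x, w_q, self.bias)


if __name__ == "__main__":
    # Toy usage: train a small quantized network on random labels to see
    # the constrained weight grid in action (illustrative only).
    model = nn.Sequential(QuantLinear(32, 64, num_bits=4), nn.ReLU(),
                          QuantLinear(64, 10, num_bits=4))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))
    for _ in range(5):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.3f}")
```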