Universal adversarial perturbation attacks are widely used to analyze image classifiers that employ convolutional neural networks. Some attacks can now also deceive image- and video-quality metrics, so stability analysis of these metrics is important: if an attack can confuse a metric, an attacker can easily inflate its quality scores. When developers of image- and video-processing algorithms can boost their scores through such processing, algorithm comparisons are no longer fair. Inspired by the idea of universal adversarial perturbation for classifiers, we propose a new method to attack differentiable no-reference quality metrics through universal perturbation. We applied this method to seven no-reference image- and video-quality metrics (PaQ-2-PiQ, Linearity, VSFA, MDTVSFA, KonCept512, NIMA and SPAQ). For each one, we trained a universal perturbation that increases the respective scores. We also propose a method for assessing metric stability and identify the metrics that are most vulnerable and most resistant to our attack. The existence of successful universal perturbations appears to diminish a metric's ability to provide reliable scores. We therefore recommend our proposed method as an additional verification of metric reliability, complementing traditional subjective tests and benchmarks.
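To make the attack concrete, below is a minimal PyTorch-style sketch of training a single universal additive perturbation by gradient ascent on a differentiable no-reference metric. The names `metric` and `loader`, the fixed 256x256 resolution, and the L-infinity budget `eps` are illustrative assumptions, not the paper's exact procedure or code.

```python
import torch

def train_universal_perturbation(metric, loader, eps=10 / 255, lr=1e-3, epochs=5):
    """Sketch: learn one perturbation shared across all images (hypothetical setup).

    `metric` is assumed to be any differentiable no-reference quality model
    mapping a batch of images in [0, 1] to scalar scores; `loader` yields
    batches of shape (B, 3, 256, 256). Both are placeholders.
    """
    # A single perturbation reused for every image -- the "universal" part.
    delta = torch.zeros(1, 3, 256, 256, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for images in loader:
            # Apply the shared perturbation and keep pixels in a valid range.
            adv = (images + delta).clamp(0.0, 1.0)
            # Maximize the metric's score, i.e., minimize its negative.
            loss = -metric(adv).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Project back into the allowed L-infinity ball.
            with torch.no_grad():
                delta.clamp_(-eps, eps)
    return delta.detach()
```

Under these assumptions, a simple stability check is then to compare the metric's mean score on clean images against its mean score after adding the trained perturbation: the larger the gap, the more vulnerable the metric.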