No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references. NR-IQA models are extensively studied in computational vision, and are widely used for performance evaluation and perceptual optimization of man-made vision systems. Here we make one of the first attempts to examine the perceptual robustness of NR-IQA models. Under a Lagrangian formulation, we identify insightful connections of the proposed perceptual attack to previous beautiful ideas in computer vision and machine learning. We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models (as approximations to human perception of just-noticeable differences). Through carefully designed psychophysical experiments, we find that all four NR-IQA models are vulnerable to the proposed perceptual attack. More interestingly, we observe that the generated counterexamples are not transferable, manifesting themselves as distinct design flaws of the respective NR-IQA methods.
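The Lagrangian formulation mentioned above can be sketched as maximizing an NR-IQA score while a full-reference IQA measure constrains the perturbed image to stay perceptually close to the original. The following minimal sketch illustrates the idea only; `nr_score` and `fr_distance` are hypothetical toy surrogates (a local-contrast statistic and plain MSE), not the actual NR-IQA or FR-IQA models tested in the paper, and finite-difference gradients stand in for backpropagation.

```python
import numpy as np

def nr_score(x):
    # Toy NR-IQA surrogate (hypothetical): mean absolute vertical
    # local contrast; a real attack would use a trained NR-IQA model.
    return float(np.mean(np.abs(np.diff(x, axis=0))))

def fr_distance(x, x0):
    # Toy full-reference proxy for perceptual distance: plain MSE.
    # The paper uses perceptual FR-IQA models instead.
    return float(np.mean((x - x0) ** 2))

def numerical_grad(f, x, eps=1e-4):
    # Central finite-difference gradient (slow; for illustration only).
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        g[idx] = (f(xp) - f(xm)) / (2 * eps)
    return g

def perceptual_attack(x0, lam=50.0, step=0.01, n_iter=200):
    """Gradient ascent on the Lagrangian
        L(x) = nr_score(x) - lam * fr_distance(x, x0),
    i.e. inflate the predicted quality while the FR term keeps the
    perturbation (nearly) imperceptible."""
    x = x0.copy()
    for _ in range(n_iter):
        g = numerical_grad(lambda z: nr_score(z) - lam * fr_distance(z, x0), x)
        x = np.clip(x + step * g, 0.0, 1.0)  # stay in valid pixel range
    return x

# Usage on a small random "image": the NR score rises while the
# FR distance to the original stays tiny.
x0 = np.random.default_rng(0).random((8, 8))
x_adv = perceptual_attack(x0)
```

Flipping the sign of `nr_score` in the objective gives the opposite attack, which lowers the predicted quality of a pristine image at a fixed perceptual budget.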