No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods are far from meeting the need for accurate quality-score prediction on GAN-based distorted images. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortions. We first extract features via ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of images, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction is applied to predict the final score by weighting each patch's score. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. Moreover, our method took first place in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge Track 2: No-Reference. Codes and models are available at https://github.com/IIGROUP/MANIQA.
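The patch-weighted prediction described above can be sketched as a weighted average: one branch predicts a quality score per patch, the other predicts that patch's importance weight, and the final score is their normalized dot product. The helper below is a minimal NumPy illustration of this aggregation step only; the function name and the example values are hypothetical, and in the actual network both tensors are predicted by learned heads from shared features.

```python
import numpy as np

def patch_weighted_score(scores, weights):
    """Combine per-patch quality scores with per-patch weights
    into a single image-level score (weighted average).
    Hypothetical helper illustrating the dual-branch aggregation."""
    s = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * s).sum() / w.sum())

# Example with three patches: salient patches (higher weight)
# dominate the final score.
print(patch_weighted_score([0.8, 0.5, 0.9], [0.5, 0.2, 0.3]))  # 0.77
```

Normalizing by the weight sum keeps the output on the same scale as the per-patch scores regardless of how many patches the image is split into.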
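The channel-dimension attention applied by the TAB can be illustrated by computing attention maps between channels rather than between spatial tokens: with features of shape (C, N), the similarity matrix is C×C and re-weights whole channels. The sketch below is a simplification in NumPy, using the features themselves as query, key, and value (the real block uses learned projections); all names here are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(x):
    """Attention across the channel dimension.
    x: features of shape (C, N) with C channels and N spatial tokens.
    Simplification: Q = K = V = x (no learned projections)."""
    C, N = x.shape
    attn = softmax(x @ x.T / np.sqrt(N), axis=-1)  # (C, C) channel-to-channel map
    return attn @ x  # re-weighted channel features, shape (C, N)

rng = np.random.default_rng(0)
out = channel_attention(rng.standard_normal((4, 16)))
print(out.shape)  # (4, 16)
```

Because the attention matrix is C×C instead of N×N, this operation captures global interactions at a cost independent of spatial resolution, which is the usual motivation for transposed attention.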