Following the major successes of self-attention and Transformers for image analysis, we investigate the use of such attention mechanisms in the context of Image Quality Assessment (IQA) and propose a novel full-reference IQA method, Vision Transformer for Attention Modulated Image Quality (VTAMIQ). Our method achieves competitive or state-of-the-art performance on the existing IQA datasets and significantly outperforms previous metrics in cross-database evaluations. Most patch-wise IQA methods treat each patch independently; this partially discards global information and limits the ability to model long-distance interactions. We avoid this problem altogether by employing a transformer to encode a sequence of patches as a single global representation, which by design considers interdependencies between patches. We rely on various attention mechanisms -- first with self-attention within the Transformer, and second with channel attention within our difference modulation network -- specifically to reveal and enhance the more salient features throughout our architecture. With large-scale pre-training for both classification and IQA tasks, VTAMIQ generalizes well to unseen sets of images and distortions, further demonstrating the strength of transformer-based networks for vision modelling.
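To make the described pipeline concrete, below is a minimal PyTorch sketch of the full-reference design outlined above: a transformer encodes each image's patch sequence into a single global representation, the reference-vs-distorted feature difference is re-weighted by channel attention, and a small head regresses a quality score. All module names, layer sizes, and the squeeze-and-excitation-style gate are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hedged sketch of a VTAMIQ-style full-reference IQA model (assumed details).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate over a feature vector (assumption:
    the paper's difference modulation uses a comparable channel re-weighting)."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)  # reveal/enhance salient channels


class VTAMIQSketch(nn.Module):
    """Encode patch sequences with a transformer, modulate the feature
    difference with channel attention, and regress a quality score."""
    def __init__(self, dim: int = 768, depth: int = 6, heads: int = 12,
                 num_patches: int = 16, patch_dim: int = 3 * 16 * 16):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.modulate = ChannelAttention(dim)
        self.head = nn.Sequential(nn.Linear(dim, dim // 2), nn.GELU(),
                                  nn.Linear(dim // 2, 1))

    def encode(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (B, N, patch_dim) -> one global representation (B, dim);
        # self-attention lets every patch attend to every other patch.
        tokens = self.patch_embed(patches)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        return self.encoder(tokens)[:, 0]  # CLS token summarizes the sequence

    def forward(self, ref: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        diff = self.encode(ref) - self.encode(dist)  # global feature difference
        return self.head(self.modulate(diff)).squeeze(-1)


# Usage: batch of 2 images, each as 16 patches of 16x16 RGB pixels.
model = VTAMIQSketch()
ref = torch.rand(2, 16, 3 * 16 * 16)
dist = torch.rand(2, 16, 3 * 16 * 16)
print(model(ref, dist).shape)  # torch.Size([2]) -- one quality score per image
```

Because the whole patch sequence is encoded jointly before the difference is taken, no patch is scored in isolation, which is the property the abstract contrasts with prior patch-wise methods.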