We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images. We demonstrate how our approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch. Experiments on the TID2013 benchmark show that we improve the state-of-the-art by over 5%. Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
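To make the batch-wise efficiency idea concrete, the following is a minimal sketch of a pairwise ranking hinge loss computed over all ordered pairs in a single forward-propagated batch. The function name, the margin value, and the hinge formulation are illustrative assumptions for exposition, not the paper's exact loss or configuration.

```python
import numpy as np

def pairwise_ranking_hinge_loss(scores, distortion_levels, margin=1.0):
    """Hinge ranking loss over all ordered pairs in a batch (illustrative sketch).

    The whole batch is forward-propagated through one network to obtain
    `scores`; the ranking loss is then formed from every pair (i, j) whose
    relative quality is known from the synthetic distortion levels, so no
    per-pair forward passes are needed.
    """
    scores = np.asarray(scores, dtype=float)
    levels = np.asarray(distortion_levels)

    # Pair (i, j) is valid when image i is less distorted (higher quality)
    # than image j; the network should then score i above j.
    higher_quality = levels[:, None] < levels[None, :]

    # Hinge penalty max(0, margin - (s_i - s_j)) for every valid pair.
    diffs = scores[:, None] - scores[None, :]
    losses = np.maximum(0.0, margin - diffs) * higher_quality

    n_pairs = higher_quality.sum()
    return losses.sum() / max(n_pairs, 1)

# Example: four images of the same scene at increasing synthetic distortion.
scores = [0.9, 0.7, 0.4, 0.1]   # predicted quality scores for the batch
levels = [0, 1, 2, 3]           # known distortion levels (0 = pristine)
print(pairwise_ranking_hinge_loss(scores, levels))
```

With a batch of n images this yields up to n(n-1)/2 ranked pairs from a single forward pass, which is the source of the claimed efficiency gain over feeding each pair through a two-branch Siamese Network separately.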