Blind image quality assessment (BIQA) aims to automatically evaluate the perceived quality of a single image, and its performance has been substantially improved by deep learning-based methods in recent years. However, the paucity of labeled data restrains deep learning-based BIQA methods from reaching their full potential. In this paper, we propose to address this problem with a pretext task customized for BIQA in a self-supervised learning manner, which enables learning representations from orders of magnitude more data. To constrain the learning process, we propose a quality-aware contrastive loss based on a simple assumption: patches from the same distorted image should have similar quality, but differ in quality from patches of the same source image under different degradations and from patches of other images. Furthermore, we improve the existing degradation process and construct a degradation space of size roughly $2\times10^7$. After being pre-trained on ImageNet with our method, models are more sensitive to image quality and perform significantly better on downstream BIQA tasks. Experimental results show that our method yields remarkable improvements on popular BIQA datasets.
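To make the quality-aware contrastive objective concrete, the following is a minimal PyTorch sketch, not the authors' released code. It assumes an InfoNCE-style formulation in which two patches from the same distorted image form a positive pair, while patches of the same source image under different degradations and patches of other images serve as negatives; the function name, tensor shapes, and the `temperature` parameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def quality_aware_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss over patch embeddings (hypothetical sketch).

    anchor:    (B, D) embeddings of patches from a distorted image
    positive:  (B, D) embeddings of other patches from the SAME distorted image
    negatives: (B, K, D) embeddings of patches of the same source image under
               different degradations and of patches from different images
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Cosine similarity of each positive pair: shape (B, 1).
    pos_sim = torch.sum(anchor * positive, dim=-1, keepdim=True)
    # Cosine similarities against all K negatives: shape (B, K).
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives)

    # Treat the positive as class 0 in a (K + 1)-way classification.
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```

Under this sketch, minimizing the loss pulls together representations of patches that share the same degradation (similar quality) and pushes apart those that do not, which matches the assumption stated in the abstract.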