The goal of a blind image quality assessment (BIQA) model is to simulate the way the human eye evaluates images and to assess image quality accurately. Although many approaches effectively identify degradation, they do not fully consider the semantic content of images, which influences how distortion is perceived. To fill this gap, we propose a deep adaptive superpixel-based network, DSN-IQA, which assesses image quality based on multi-scale features and superpixel segmentation. DSN-IQA adaptively accepts images of arbitrary scale as input, making the assessment process similar to human perception. The network uses two models: one extracts multi-scale semantic features, and the other generates a superpixel adjacency map. These two elements are united via feature fusion to accurately predict image quality. Experimental results on different benchmark databases demonstrate that our algorithm is highly competitive with other approaches when assessing challenging authentic image databases. Moreover, owing to the adaptive deep superpixel-based network, our model accurately assesses images with complicated distortions, much as the human eye does.
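To make the two-branch idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: one branch pools multi-scale semantic features from a backbone, the other encodes a superpixel adjacency map with a small CNN, and the two are fused to regress a single quality score. The choice of ResNet-50, the feature sizes, the one-channel adjacency map, and all module names are illustrative assumptions.

```python
# Minimal two-branch fusion sketch for a superpixel-assisted BIQA model.
# All architectural details here are assumptions for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoBranchIQA(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Semantic branch: keep the backbone stages so features at several
        # scales can be extracted from intermediate layers.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stage1, self.stage2 = backbone.layer1, backbone.layer2
        self.stage3, self.stage4 = backbone.layer3, backbone.layer4
        # Superpixel branch: a small CNN over an assumed 1-channel
        # superpixel adjacency map.
        self.sp_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Fusion + regression head mapping the concatenated features
        # (256 + 512 + 1024 + 2048 backbone channels + 64 superpixel
        # channels) to one quality score.
        self.head = nn.Sequential(nn.Linear(3904, 512),
                                  nn.ReLU(), nn.Linear(512, 1))

    def forward(self, image, superpixel_map):
        x = self.stem(image)
        feats = []
        for stage in (self.stage1, self.stage2, self.stage3, self.stage4):
            x = stage(x)
            # Global-average-pool each scale so inputs of arbitrary
            # resolution yield fixed-length feature vectors.
            feats.append(x.mean(dim=(2, 3)))
        sp = self.sp_branch(superpixel_map)
        fused = torch.cat(feats + [sp], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = TwoBranchIQA()
    img = torch.randn(1, 3, 384, 512)       # arbitrary-sized RGB input
    sp_map = torch.randn(1, 1, 384, 512)     # assumed superpixel adjacency map
    print(model(img, sp_map).shape)          # torch.Size([1, 1])
```

Global average pooling per stage is one simple way to handle arbitrary input sizes, mirroring the abstract's claim that the network accepts images of arbitrary scale; the paper's actual fusion mechanism may differ.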