Learning-based image quality assessment (IQA) has made remarkable progress in the past decade, but nearly all methods treat its two key components, model and data, in relative isolation. Specifically, model-centric IQA focuses on developing "better" objective quality methods on fixed and extensively reused datasets, at great risk of overfitting. Data-centric IQA conducts psychophysical experiments to construct "better" human-annotated datasets, but unfortunately ignores current IQA models during dataset creation. In this paper, we first design a series of experiments to show computationally that this isolation of model and data impedes further progress in IQA. We then describe a computational framework that integrates model-centric and data-centric IQA. As a specific example, we design computational modules to quantify the sampling-worthiness of candidate images based on blind IQA (BIQA) model predictions and deep content-aware features. Experimental results show that the proposed sampling-worthiness module successfully spots diverse failures of the examined BIQA models, which are indeed worthy samples to be included in next-generation datasets.
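The sampling-worthiness idea described above can be sketched in code. The sketch below is an illustrative assumption, not the paper's exact formulation: it scores a candidate image by combining the disagreement among several BIQA model predictions with a content-diversity term, here taken as the distance from the candidate's deep feature to its nearest neighbour in the existing dataset. The function name `sampling_worthiness` and the simple sum of the two terms are hypothetical choices for exposition.

```python
import numpy as np

def sampling_worthiness(candidate_preds, candidate_feat, dataset_feats):
    """Score how worthwhile a candidate image is to annotate.

    candidate_preds : 1-D array of quality predictions from several BIQA models.
    candidate_feat  : 1-D deep content-aware feature vector of the candidate.
    dataset_feats   : 2-D array of feature vectors of images already in the dataset.
    """
    # Disagreement term: spread of the BIQA models' predictions.
    # Large spread suggests at least one model fails on this image.
    disagreement = np.std(candidate_preds)

    # Content-diversity term: distance to the nearest existing image
    # in deep feature space; large distance means novel content.
    dists = np.linalg.norm(dataset_feats - candidate_feat, axis=1)
    diversity = dists.min()

    # Illustrative combination: an unweighted sum of the two terms.
    return disagreement + diversity
```

A candidate on which all models agree and whose feature already appears in the dataset scores zero, while a candidate with divergent predictions or novel content scores higher and would be prioritised for human annotation.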