Recent years have witnessed the great success of blind image quality assessment (BIQA) in various task-specific scenarios, which present invariable distortion types and evaluation criteria. However, due to their rigid structures and learning frameworks, existing BIQA models cannot be applied to the cross-task BIQA scenario, where the distortion types and evaluation criteria keep changing in practical applications. This paper proposes a scalable incremental learning framework (SILF) that sequentially conducts BIQA across multiple evaluation tasks with limited memory capacity. More specifically, we develop a dynamic parameter isolation strategy to sequentially update task-specific parameter subsets, which are mutually non-overlapping. Each parameter subset is temporarily settled to Remember the evaluation preference of its corresponding task, and previously settled parameter subsets can be adaptively reused in subsequent BIQA tasks, according to task relevance, to achieve better performance. To suppress the unrestrained growth of memory capacity during sequential task learning, we develop a scalable memory unit that gradually and selectively prunes unimportant neurons from previously settled parameter subsets, which enables us to Forget part of the previous experience and free the limited memory capacity for adapting to emerging new tasks. Extensive experiments on eleven IQA datasets demonstrate that our proposed method significantly outperforms other state-of-the-art methods in cross-task BIQA.
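The Remember/Forget mechanics described above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy NumPy simulation under assumed details: network weights are modeled as a flat vector of slots, each new task claims a non-overlapping subset of the free slots (parameter isolation), and "forgetting" releases the least-important slots of an old task's subset, per some hypothetical importance score, back into the free pool for future tasks.

```python
import numpy as np

N_PARAMS = 20                          # toy total memory capacity (weight slots)
free = np.ones(N_PARAMS, dtype=bool)   # slots not yet claimed by any task
task_masks = {}                        # task id -> boolean mask over the slots

def allocate(task_id, n_weights):
    """Remember: claim n_weights free slots as this task's parameter subset."""
    idx = np.flatnonzero(free)[:n_weights]
    mask = np.zeros(N_PARAMS, dtype=bool)
    mask[idx] = True
    task_masks[task_id] = mask
    free[idx] = False                  # subsets stay mutually non-overlapping
    return mask

def prune(task_id, importance, keep_ratio):
    """Forget: release the least-important fraction of an old task's subset."""
    mask = task_masks[task_id]
    idx = np.flatnonzero(mask)
    order = idx[np.argsort(importance[idx])]   # least important first
    n_drop = int(len(idx) * (1 - keep_ratio))
    dropped = order[:n_drop]
    mask[dropped] = False
    free[dropped] = True               # freed capacity for emerging new tasks
    return dropped

allocate("task1", 8)
allocate("task2", 8)
importance = np.random.default_rng(0).random(N_PARAMS)  # stand-in importance scores
prune("task1", importance, keep_ratio=0.5)  # Forget half of task1's subset
print(free.sum())                           # slots now available for the next task
```

After the prune, half of task1's eight slots rejoin the free pool, so a third task can be allocated without growing total capacity; this is the bounded-memory trade-off the abstract describes.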