Fusion-based quality assessment has emerged as a powerful method for developing high-performance quality models from constituent quality models that individually achieve lower performance. A prominent example of such an algorithm is VMAF, which, along with SSIM, has been widely adopted as an industry standard for video quality prediction. In addition to advancing the state of the art, it is imperative to alleviate the computational burden imposed by the use of a heterogeneous set of quality models. In this paper, we unify "atom" quality models by computing them on a common transform domain that accounts for the Human Visual System, and we propose FUNQUE, a quality model that fuses unified quality evaluators. We demonstrate that, in comparison to the state of the art, FUNQUE offers significant improvements in both correlation with subjective scores and efficiency, owing to computation sharing.
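As a rough illustration of the shared-transform fusion idea described above, the following Python sketch computes two toy "atom" features on a single Haar wavelet decomposition per frame and fuses them with a linear model. The Haar transform, the toy features, and the fusion weights are illustrative assumptions only; they are not the actual FUNQUE atoms, transform, or trained coefficients.

```python
# Minimal sketch (assumptions, not the authors' implementation): several "atom"
# quality features are computed on one shared transform of the reference and
# distorted frames, then fused with a linear model into a single score.
import numpy as np

def haar_dwt_level1(x):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2.0
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2.0
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2.0
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2.0
    return a, h, v, d

def atom_features(ref, dis):
    """Toy 'atom' evaluators sharing one wavelet decomposition per frame."""
    ll_r, lh_r, hl_r, hh_r = haar_dwt_level1(ref)
    ll_d, lh_d, hl_d, hh_d = haar_dwt_level1(dis)
    c = 1e-3  # small stabilizing constant
    # Structural (SSIM-like) term on the shared approximation band.
    cov = np.mean((ll_r - ll_r.mean()) * (ll_d - ll_d.mean()))
    structural = (2.0 * cov + c) / (ll_r.var() + ll_d.var() + c)
    # Detail-preservation term on the shared high-frequency bands.
    num = sum(np.sum(b * b) for b in (lh_d, hl_d, hh_d))
    den = sum(np.sum(b * b) for b in (lh_r, hl_r, hh_r)) + c
    detail_ratio = num / den
    return np.array([structural, detail_ratio])

def fused_score(ref, dis, weights=np.array([60.0, 40.0]), bias=0.0):
    """Fuse atom features with hypothetical linear weights into one score."""
    return float(weights @ atom_features(ref, dis) + bias)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    dis = np.clip(ref + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
    print(fused_score(ref, dis))
```

In this toy setup, both features reuse the same per-frame decomposition, which is the source of the efficiency gain claimed above; a production model would instead use HVS-motivated preprocessing and a regressor trained on subjective scores.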