The increasing popularity of high-dynamic-range (HDR) image and video content creates a need for metrics that can predict the severity of image impairments as seen on displays of different brightness levels and dynamic range. Such metrics should be trained and validated on a sufficiently large subjective image quality dataset to ensure robust performance. As the existing HDR quality datasets are limited in size, we created a Unified Photometric Image Quality (UPIQ) dataset with over 4,000 images by realigning and merging existing HDR and standard-dynamic-range (SDR) datasets. The realigned quality scores share the same unified quality scale across all datasets. The realignment was achieved by collecting additional cross-dataset quality comparisons and re-scaling the data with a psychometric scaling method. Images in the proposed dataset are represented in absolute photometric and colorimetric units, corresponding to light emitted from a display. We use the new dataset to retrain existing HDR metrics and show that the dataset is sufficiently large for training deep architectures. We demonstrate the utility of the dataset on brightness-aware image compression.
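To make the re-scaling step concrete, the sketch below shows one standard psychometric scaling approach, Thurstone Case V, which maps a matrix of pairwise comparison counts onto a common quality scale. This is an illustrative minimal method, not necessarily the exact scaling procedure used for UPIQ; the function name and the probability clamping bounds are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(C):
    """Estimate quality scores from a pairwise comparison count matrix C,
    where C[i, j] is the number of times condition i was preferred over
    condition j. A minimal Thurstone Case V sketch: convert empirical
    preference probabilities to z-scores and average them. (Illustrative
    only; not necessarily the maximum-likelihood scaling used for UPIQ.)
    """
    total = C + C.T  # total comparisons for each pair
    with np.errstate(divide="ignore", invalid="ignore"):
        # Empirical probability that i is preferred over j;
        # pairs never compared default to 0.5 (no preference).
        P = np.where(total > 0, C / total, 0.5)
    # Clamp probabilities away from 0 and 1 so z-scores stay finite
    # (the 0.01/0.99 bounds are an assumption of this sketch).
    P = np.clip(P, 0.01, 0.99)
    Z = norm.ppf(P)          # z-score of each pairwise preference
    scores = Z.mean(axis=1)  # Case V solution: row means of z-scores
    return scores - scores.min()  # anchor the scale at zero
```

Given comparison counts for conditions drawn from different datasets, the returned scores place all conditions on one interval scale (in z-score, i.e. just-objectionable-difference-like, units), which is what allows quality scores from separate experiments to be merged.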