In this paper, we present CLCC, a novel contrastive learning framework for color constancy. Contrastive learning has been applied to learn high-quality visual representations for image classification. One key aspect of yielding useful representations for image classification is designing illuminant-invariant augmentations. However, the illuminant-invariance assumption conflicts with the nature of the color constancy task, which aims to estimate the illuminant given a raw image. Therefore, we construct effective contrastive pairs for learning better illuminant-dependent features via a novel raw-domain color augmentation. On the NUS-8 dataset, our method provides a $17.5\%$ relative improvement over a strong baseline, reaching state-of-the-art performance without increasing model complexity. Furthermore, our method achieves competitive performance on the Gehler dataset with $3\times$ fewer parameters than top-ranking deep learning methods. More importantly, we show that our model is more robust to different scenes under close proximity of illuminants, reducing worst-case error by $28.7\%$ in data-sparse regions.