We present "Cross-Camera Convolutional Color Constancy" (C5), a learning-based method, trained on images from multiple cameras, that accurately estimates a scene's illuminant color from raw images captured by a new camera previously unseen during training. C5 is a hypernetwork-like extension of the convolutional color constancy (CCC) approach: C5 learns to generate the weights of a CCC model that is then evaluated on the input image, with the CCC weights dynamically adapted to different input content. Unlike prior cross-camera color constancy models, which are usually designed to be agnostic to the spectral properties of test-set images from unobserved cameras, C5 approaches this problem through the lens of transductive inference: additional unlabeled images are provided as input to the model at test time, which allows the model to calibrate itself to the spectral properties of the test-set camera during inference. C5 achieves state-of-the-art accuracy for cross-camera color constancy on several datasets, is fast to evaluate (~7 and ~90 ms per image on a GPU or CPU, respectively), and requires little memory (~2 MB), and thus is a practical solution to the problem of calibration-free automatic white balance for mobile photography.
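To make the pipeline concrete: a CCC-style model scores a 2D log-chroma histogram of the input image by convolving it with a learned filter, and reads the illuminant estimate off the peak of the response; C5's contribution is that the filter is *generated per image* by a hypernetwork that also sees additional unlabeled images from the test camera. The sketch below illustrates only that evaluation structure. All names (`log_chroma_histogram`, `c5_generate_filter`, `ccc_estimate`) are illustrative; the filter generator is a fixed stand-in for the trained hypernetwork, and a hard argmax replaces the differentiable soft-argmax used in practice.

```python
import numpy as np

def log_chroma_histogram(rgb, bins=64, lo=-2.0, hi=2.0):
    """2D log-chroma histogram, the input representation of CCC-style models.
    Convention assumed here: u = log(g/r), v = log(g/b)."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    u = np.log(g / r).ravel()
    v = np.log(g / b).ravel()
    hist, _, _ = np.histogram2d(u, v, bins=bins, range=[[lo, hi], [lo, hi]])
    return hist / max(hist.sum(), eps)

def conv2d_same(x, k):
    """Naive 'same'-size 2D cross-correlation (keeps the sketch dependency-free;
    with a symmetric kernel this equals convolution)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def c5_generate_filter(unlabeled_hists, size=5):
    """Stand-in for the C5 hypernetwork. In the real method, a trained network
    maps the input image plus additional unlabeled test-camera images to the
    weights of a per-image CCC filter; here we return a fixed Gaussian kernel
    so the pipeline runs end to end."""
    g1 = np.exp(-0.5 * ((np.arange(size) - size // 2) / 1.0) ** 2)
    k = np.outer(g1, g1)
    return k / k.sum()

def ccc_estimate(hist, filt, lo=-2.0, hi=2.0):
    """Evaluate a CCC model: score the histogram with the filter and take the
    peak location as the illuminant's (u, v) log-chroma estimate."""
    score = conv2d_same(hist, filt)
    i, j = np.unravel_index(np.argmax(score), score.shape)
    centers = np.linspace(lo, hi, hist.shape[0])
    return centers[i], centers[j]
```

For a constant-color image with (r, g, b) = (0.25, 0.5, 0.5), the histogram is a single spike and the estimate recovers (u, v) ≈ (log 2, 0), i.e. the scene's illuminant chroma up to histogram-bin resolution.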