There currently exist two main approaches to reproducing visual appearance using Machine Learning (ML): The first is training models that generalize over different instances of a problem, e.g., different images of a dataset. As one-shot approaches, these offer fast inference, but often fall short in quality. The second approach does not train models that generalize across tasks, but rather over-fits a single instance of a problem, e.g., a flash image of a material. These methods offer high quality, but take a long time to train. We suggest combining both techniques end-to-end using meta-learning: we over-fit onto a single problem instance in an inner loop, while also learning how to do so efficiently in an outer loop across many exemplars. To this end, we derive the required formalism that allows applying meta-learning to a wide range of visual appearance reproduction problems: textures, BRDFs, svBRDFs, illumination, or the entire light transport of a scene. The effects of meta-learning parameters on several different aspects of visual appearance are analyzed in our framework, and specific guidance for different tasks is provided. Metappearance enables visual quality that is similar to over-fit approaches in only a fraction of their runtime while keeping the adaptivity of general models.
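The inner/outer-loop structure described above can be sketched with a first-order meta-learning update (Reptile-style). This is an illustrative toy only, not the paper's actual method: a scalar parameter stands in for an appearance model, each "problem instance" is a scalar target `t`, and over-fitting means minimizing a quadratic loss. The function names (`inner_adapt`, `loss`) and all hyperparameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each "appearance instance" is a scalar target t, and
# over-fitting that instance means minimizing L_t(theta) = (theta - t)^2.
def loss(theta, t):
    return (theta - t) ** 2

def grad(theta, t):
    return 2.0 * (theta - t)

def inner_adapt(theta0, t, steps=3, lr=0.25):
    """Inner loop: over-fit a single instance t, starting from theta0."""
    theta = theta0
    for _ in range(steps):
        theta -= lr * grad(theta, t)
    return theta

# Outer loop (first-order, Reptile-style): nudge the shared initialization
# toward each instance's adapted parameters, so that a few inner steps
# suffice on unseen instances.
theta0 = 10.0                                # arbitrary initial meta-parameters
targets = rng.uniform(-1.0, 1.0, size=200)   # "training exemplars"
meta_lr = 0.5
for t in targets:
    theta_adapted = inner_adapt(theta0, t)
    theta0 += meta_lr * (theta_adapted - theta0)

# After meta-training, theta0 sits near the task distribution, so adapting
# to a new instance reaches low loss in only a few inner steps.
new_t = 0.7
final_loss = loss(inner_adapt(theta0, new_t), new_t)
```

The full method replaces the scalar with a network's weights and back-propagates through the inner loop; the sketch uses the first-order shortcut purely to keep the example short.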