Recent research on adversarially robust classifiers suggests that their representations tend to be aligned with human perception, which makes them attractive for image synthesis and restoration applications. Despite favorable empirical results on a few downstream tasks, their advantages are limited to slow and sensitive optimization-based techniques. Moreover, their use in generative models remains unexplored. This work proposes the use of robust representations as a perceptual primitive for feature inversion models and shows their benefits with respect to standard, non-robust image features. We empirically show that adopting robust representations as an image prior significantly improves the reconstruction accuracy of CNN-based feature inversion models. Furthermore, it allows reconstructing images at multiple scales out of the box. Following these findings, we propose an encoder-decoder network based on robust representations and show its advantages for applications such as anomaly detection, style transfer, and image denoising.
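To make the feature inversion setup concrete, the sketch below shows one plausible way to train a CNN decoder that reconstructs images from the features of a frozen, adversarially robust classifier. It is a minimal illustration, not the paper's actual architecture: the robust backbone `robust_encoder` is assumed to be loaded elsewhere (e.g., a robustly trained ResNet-50 truncated to produce spatial feature maps), and all class and function names are hypothetical.

```python
# Minimal sketch (illustrative, not the authors' code): train a CNN decoder to
# invert features produced by a frozen, adversarially robust encoder.
import torch
import torch.nn as nn

class FeatureInversionDecoder(nn.Module):
    """Upsampling CNN mapping a spatial feature map back to RGB pixels."""
    def __init__(self, in_channels=2048, base=256):
        super().__init__()
        layers = []
        ch = in_channels
        # Five 2x upsampling stages: e.g., a 7x7 feature map -> 224x224 image.
        for _ in range(5):
            layers += [
                nn.ConvTranspose2d(ch, base, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(base),
                nn.ReLU(inplace=True),
            ]
            ch, base = base, max(base // 2, 32)
        layers += [nn.Conv2d(ch, 3, kernel_size=3, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, feats):
        return self.net(feats)

def train_step(decoder, robust_encoder, images, optimizer):
    """One reconstruction step: invert frozen robust features with an L1 pixel loss."""
    with torch.no_grad():                # the robust encoder stays fixed
        feats = robust_encoder(images)   # assumed shape, e.g. (B, 2048, 7, 7)
    recon = decoder(feats)
    loss = nn.functional.l1_loss(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the decoder is trained once against fixed features, inference is a single forward pass, in contrast to the slow optimization-based inversion techniques mentioned above.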