Advances in computer vision are pushing the limits of image manipulation, with generative models sampling detailed images on various tasks. However, a specialized model is often developed and trained for each specific task, even though many image editing tasks share similarities. In denoising, inpainting, or image compositing, one always aims at generating a realistic image from a low-quality one. In this paper, we make a step towards a unified approach for image editing. To do so, we propose EdiBERT, a bidirectional transformer trained in the discrete latent space built by a vector-quantized auto-encoder. We argue that such a bidirectional model is suited for image manipulation, since any patch can be re-sampled conditionally on the whole image. Using this unique and straightforward training objective, we show that the resulting model matches state-of-the-art performance on a wide variety of tasks: image denoising, image completion, and image composition.
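The core idea above, re-sampling any latent token conditioned on the whole image, can be illustrated with a toy sketch. The codebook size, sequence length, and the stand-in "model" below are all hypothetical placeholders, not the actual EdiBERT architecture: a real implementation would use a transformer over the VQ latent grid.

```python
import random

random.seed(0)
VOCAB = 16    # size of the VQ codebook (hypothetical)
MASK = -1     # sentinel for a masked latent token

def toy_bidirectional_model(tokens, pos):
    # Stand-in for a bidirectional transformer: returns a distribution
    # over codebook indices for position `pos`, conditioned on ALL
    # other tokens (left and right context), unlike an autoregressive
    # model, which would only see the left context.
    context = [t for i, t in enumerate(tokens) if i != pos and t != MASK]
    counts = [1.0] * VOCAB          # smoothed counts as a toy distribution
    for t in context:
        counts[t] += 1.0            # bias toward tokens seen in context
    total = sum(counts)
    return [c / total for c in counts]

def resample_region(tokens, region):
    """Mask the edited region, then re-sample each masked token
    conditioned on the full sequence (BERT-style editing)."""
    tokens = list(tokens)
    for p in region:
        tokens[p] = MASK
    for p in region:
        probs = toy_bidirectional_model(tokens, p)
        tokens[p] = random.choices(range(VOCAB), weights=probs)[0]
    return tokens

# Flattened latent grid of an image; positions 2 and 4 are "edited"
# (e.g. an inpainting hole) and get refilled from the codebook.
seq = [3, 3, 7, 3, 9, 3, 3, 1]
edited = resample_region(seq, region=[2, 4])
print(edited)
```

Because the model conditions on the entire sequence, the same masking-and-resampling loop covers denoising, completion, and compositing: only the choice of which positions to mask changes between tasks.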