A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images. But existing techniques for identifying these transformations rely on either a fixed vocabulary of pre-specified visual concepts, or on unsupervised disentanglement techniques whose alignment with human judgments about perceptual salience is unknown. This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space. Our approach is built from three components: (1) automatic identification of perceptually salient directions based on their layer selectivity; (2) human annotation of these directions with free-form, compositional natural language descriptions; and (3) decomposition of these annotations into a visual concept vocabulary, consisting of distilled directions labeled with single words. Experiments show that concepts learned with our approach are reliable and composable -- generalizing across classes, contexts, and observers, and enabling fine-grained manipulation of image style and content.
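To make the pipeline concrete, here is a minimal sketch (not the authors' released code) of the two automatable stages: scoring candidate latent directions by layer selectivity, and distilling word-level directions from free-form annotations. The `effect_fn` interface, the selectivity threshold, and the minimum-annotation count are illustrative assumptions, not the paper's exact choices.

```python
# Hypothetical sketch of the concept-vocabulary pipeline. The effect measure
# (e.g., mean perceptual distance when a direction is applied at one layer),
# the selectivity threshold, and the min-annotation cutoff are assumptions.
import numpy as np
from collections import defaultdict

def layer_selectivity(effects):
    """How much a direction's visual effect concentrates in a single layer.
    effects[l] = image-space change when the direction is applied only at layer l."""
    effects = np.asarray(effects, dtype=float)
    return effects.max() / (effects.mean() + 1e-8)

def salient_directions(candidates, effect_fn, num_layers, threshold=2.0):
    """Keep candidate directions whose effect is layer-selective.
    effect_fn(d, l) measures the image change from applying d at layer l."""
    kept = []
    for d in candidates:
        effects = [effect_fn(d, l) for l in range(num_layers)]
        if layer_selectivity(effects) >= threshold:
            kept.append(d / np.linalg.norm(d))
    return kept

def distill_vocabulary(annotated, min_count=3):
    """Decompose free-form annotations into single-word concepts by averaging
    the normalized directions whose annotations mention each word.
    annotated: list of (direction, annotation_string) pairs."""
    buckets = defaultdict(list)
    for d, text in annotated:
        for word in set(text.lower().split()):
            buckets[word].append(d / np.linalg.norm(d))
    return {w: np.mean(ds, axis=0)
            for w, ds in buckets.items() if len(ds) >= min_count}
```

Under this reading, a distilled direction for a word like "grassy" is simply the mean of all annotated directions whose descriptions contain it, which is what allows the resulting concepts to compose additively in latent space.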