Acquiring the human-like ability to abstract visual concepts from concrete pixels has long been a fundamental goal in machine learning research fields such as disentangled representation learning and scene decomposition. Towards this goal, we propose an unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, that perceives an image as a set of disentangled visual concept tokens, with each concept token corresponding to one type of independent visual concept. In particular, to obtain these concept tokens, we use only cross-attention to extract visual information from the image tokens layer by layer, without self-attention between concept tokens, preventing information leakage across concept tokens. We further propose a Concept Disentangling Loss that encourages different concept tokens to represent independent visual concepts. The cross-attention and the disentangling loss play the roles of induction and mutual exclusion for the concept tokens, respectively. Extensive experiments on several popular datasets verify the effectiveness of VCT on the tasks of disentangled representation learning and scene decomposition. VCT achieves state-of-the-art results by a large margin.
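To make the key structural choice concrete, below is a minimal sketch of a cross-attention-only concept tokenizer as described above: learnable concept tokens query the image tokens layer by layer, with no self-attention among the concept tokens. This is not the authors' implementation; the class name `ConceptTokenizer` and hyperparameters such as `num_concepts`, `dim`, and `depth` are illustrative assumptions, and the Concept Disentangling Loss is omitted since its exact form is not specified here.

```python
# Minimal sketch of a cross-attention-only concept tokenizer (illustrative, not
# the authors' code). Concept tokens act as queries over image tokens; the
# absence of self-attention among concept tokens prevents information from
# leaking across concepts.
import torch
import torch.nn as nn

class ConceptTokenizer(nn.Module):
    def __init__(self, num_concepts=20, dim=256, depth=4, num_heads=4):
        super().__init__()
        # Learnable queries, one per candidate visual concept (assumed setup).
        self.concept_tokens = nn.Parameter(torch.randn(num_concepts, dim))
        # Cross-attention layers only: concept tokens attend to image tokens.
        self.attn_layers = nn.ModuleList([
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(depth)
        ])
        self.ffns = nn.ModuleList([
            nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim * 4),
                          nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(depth)
        ])

    def forward(self, image_tokens):
        # image_tokens: (batch, num_patches, dim), e.g. from an image encoder.
        b = image_tokens.shape[0]
        concepts = self.concept_tokens.unsqueeze(0).expand(b, -1, -1)
        for attn, ffn in zip(self.attn_layers, self.ffns):
            # Queries are concept tokens; keys/values are image tokens only,
            # so concept tokens never exchange information with each other.
            out, _ = attn(concepts, image_tokens, image_tokens)
            concepts = concepts + out
            concepts = concepts + ffn(concepts)
        return concepts  # (batch, num_concepts, dim) disentangled tokens
```

In this sketch, each layer refines every concept token independently against the image evidence, which is one plausible reading of cross-attention serving as the "induction" mechanism; a disentangling loss would then supply the complementary "mutual exclusion" pressure during training.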