Despite remarkable recent advances, making object-centric learning work for complex natural scenes remains the main challenge. The recent success of adopting transformer-based image generative models in object-centric learning suggests that a highly expressive image generator is crucial for dealing with complex scenes. Inspired by this observation, we aim to answer the following question: can object-centric learning benefit from the other pillar of modern deep generative models, i.e., diffusion models, and what are the pros and cons of such a model? To this end, we propose a new object-centric learning model, Latent Slot Diffusion (LSD). LSD can be seen from two perspectives. From the perspective of object-centric learning, it replaces the conventional slot decoders with a latent diffusion model conditioned on the object slots. Conversely, from the perspective of diffusion models, it is the first unsupervised compositional conditional diffusion model which, unlike traditional diffusion models, does not require supervised annotations such as text descriptions to learn to compose. In experiments on various object-centric tasks, including the FFHQ dataset for the first time in this line of research, we demonstrate that LSD significantly outperforms the state-of-the-art transformer-based decoder, particularly when the scene is more complex. We also demonstrate superior quality in unsupervised compositional generation.
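To make the two components concrete, the following is a minimal numpy sketch of the pipeline the abstract describes: slots obtained by a slot-attention-style competitive pooling over image features, and a denoising objective for a diffusion decoder conditioned on those slots. This is an illustrative toy only, not the paper's implementation; the actual LSD model uses learned slot attention with GRU updates and a U-Net denoiser that conditions on slots via cross-attention, and the function names here (`slot_attention`, `diffusion_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(feats, num_slots=4, iters=3):
    """Toy slot attention: slots compete for encoder features.

    feats: (N, D) array of image features; returns (num_slots, D) slots.
    """
    slots = rng.normal(size=(num_slots, feats.shape[1]))
    for _ in range(iters):
        # Softmax over the slot axis induces competition among slots.
        attn = softmax(feats @ slots.T, axis=1)          # (N, num_slots)
        attn = attn / attn.sum(axis=0, keepdims=True)    # normalize per slot
        slots = attn.T @ feats                            # weighted feature means
    return slots

def diffusion_loss(z0, slots, denoiser, sigma=1.0):
    """Toy conditional denoising objective on a latent z0.

    The denoiser sees the noised latent plus a slot summary and must
    predict the injected noise (epsilon-prediction objective).
    """
    eps = rng.normal(size=z0.shape)
    zt = z0 + sigma * eps
    cond = slots.mean(axis=0)  # stand-in for cross-attention conditioning
    eps_hat = denoiser(zt, cond)
    return float(np.mean((eps_hat - eps) ** 2))
```

In the real model the decoder's reconstruction gradient is what shapes the slots into object-level representations; here the same coupling is visible in that `diffusion_loss` receives the slots as conditioning input.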