Recent large-scale generative models have attained unprecedented performance, especially in producing high-fidelity images driven by text prompts. Textual inversion (TI), built on top of text-to-image model backbones, has been proposed as an effective technique for personalizing generation when the prompts contain user-defined, unseen, or long-tail concept tokens. Despite this, we find and show that deploying TI remains full of "dark magic" -- to name a few issues, the harsh requirement for additional datasets, arduous human effort in the loop, and a lack of robustness. In this work, we propose a much-enhanced version of TI, dubbed Controllable Textual Inversion (COTI), which resolves all the aforementioned problems and in turn delivers a robust, data-efficient, and easy-to-use framework. The core of COTI is a theoretically guided loss objective, instantiated with a comprehensive and novel weighted scoring mechanism and encapsulated in an active-learning paradigm. Extensive results show that COTI significantly outperforms prior TI-related approaches, with a 26.05 decrease in FID score and a 23.00% boost in R-precision.