Recent text-to-image diffusion models have significantly improved visual quality and text alignment. However, generating a sequence of images that preserves a consistent character identity across diverse scene descriptions remains challenging. Existing methods typically face a trade-off between maintaining identity consistency and preserving per-image prompt alignment. In this paper, we introduce a novel framework, ASemConsist, that addresses this challenge through selective text embedding modification, enabling explicit semantic control over character identity without sacrificing prompt alignment. Based on our analysis of padding embeddings in FLUX, we propose a semantic control strategy that repurposes padding embeddings as semantic containers. We further introduce an adaptive feature-sharing strategy that automatically assesses textual ambiguity and applies constraints only to ambiguous identity prompts. Finally, we propose a unified evaluation protocol, the Consistency Quality Score (CQS), which integrates identity preservation and per-image text alignment into a single comprehensive metric that explicitly captures performance imbalances between the two criteria. Our framework achieves state-of-the-art performance, effectively overcoming the trade-off that limits prior methods. Project page: https://minjung-s.github.io/asemconsist
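As a rough illustration of what an imbalance-aware aggregate like CQS might look like (the abstract does not give the formula; this sketch assumes a harmonic-mean combination, and the function name and inputs are hypothetical, not the paper's actual definition):

```python
import numpy as np

def consistency_quality_score(identity_scores, alignment_scores):
    """Hypothetical CQS-style aggregate: combines mean identity
    preservation and mean per-image text alignment via a harmonic
    mean, which penalizes imbalance between the two components.
    (The paper's actual CQS definition may differ.)"""
    ident = float(np.mean(identity_scores))   # e.g., pairwise identity similarity across the set
    align = float(np.mean(alignment_scores))  # e.g., per-image text-image similarity
    if ident + align == 0:
        return 0.0
    return 2 * ident * align / (ident + align)

# A balanced method (0.7, 0.7) outscores an imbalanced one (0.9, 0.5)
# even though both average 0.7, capturing the trade-off explicitly.
print(consistency_quality_score([0.7], [0.7]))  # 0.700
print(consistency_quality_score([0.9], [0.5]))  # ~0.643
```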