We propose PARASOL, a multi-modal synthesis model that enables disentangled, parametric control over an image's visual style by jointly conditioning synthesis on both content and a fine-grained visual style embedding. We train a latent diffusion model (LDM) with modality-specific losses and adapt classifier-free guidance to encourage disentangled control over the independent content and style modalities at inference time. To supervise the LDM, we leverage auxiliary semantic and style-based search to create training triplets, ensuring the complementarity of content and style cues. PARASOL shows promise for nuanced control over visual style in diffusion-based image creation and stylization, as well as in generative search, where text-based search results can be adapted to more closely match user intent by interpolating both content and style descriptors.
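The two-modality guidance described above can be sketched as follows. This is a minimal illustration of classifier-free guidance extended to two conditioning signals, not PARASOL's exact formulation: the function names (`guided_noise`, `toy_eps`), the guidance scales, and the use of `None` as a stand-in for a learned null embedding are all assumptions for the example. The key idea is that separate scales `w_content` and `w_style` let the two modalities be steered independently at inference time.

```python
import numpy as np

def guided_noise(eps_fn, x, c_content, c_style, w_content=3.0, w_style=2.0):
    """Two-modality classifier-free guidance (illustrative sketch).

    eps_fn(x, c, s) is a noise-prediction network that accepts a content
    condition c and a style condition s; None stands in for the learned
    null (unconditional) embedding.
    """
    null = None
    e_uncond = eps_fn(x, null, null)        # fully unconditional prediction
    e_content = eps_fn(x, c_content, null)  # content-conditioned only
    e_full = eps_fn(x, c_content, c_style)  # content + style conditioned
    # Each difference isolates the effect of one modality, so each scale
    # controls that modality's strength independently.
    return (e_uncond
            + w_content * (e_content - e_uncond)
            + w_style * (e_full - e_content))

# Toy denoiser for illustration: adds a fixed offset per active condition.
def toy_eps(x, c, s):
    out = np.zeros_like(x)
    if c is not None:
        out += 1.0
    if s is not None:
        out += 0.5
    return out

x = np.zeros(4)
result = guided_noise(toy_eps, x, c_content="a photo of a cat",
                      c_style=np.ones(8))
print(result)  # each guidance term contributes its scaled offset
```

Setting `w_style = 0` recovers content-only guidance, which is one way such a scheme allows style strength to be dialed up or down without altering the content condition.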