Well-designed prompts can guide text-to-image models to generate amazing images. However, such performant prompts are often model-specific and misaligned with user input. Instead of relying on laborious human engineering, we propose prompt adaptation, a general framework that automatically adapts original user input to model-preferred prompts. Specifically, we first perform supervised fine-tuning with a pretrained language model on a small collection of manually engineered prompts. Then we use reinforcement learning to explore better prompts. We define a reward function that encourages the policy to generate more aesthetically pleasing images while preserving the original user intentions. Experimental results on Stable Diffusion show that our method outperforms manual prompt engineering in terms of both automatic metrics and human preference ratings. Moreover, reinforcement learning further boosts performance, especially on out-of-domain prompts. The pretrained checkpoints are available at https://aka.ms/promptist, and a demo can be found at https://aka.ms/promptist-demo.
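To make the reward signal concrete, below is a minimal Python sketch of the kind of reward the abstract describes: an aesthetic-improvement term plus a relevance term that anchors generations to the original user intent. This is not the authors' implementation; `generate_images`, `clip_similarity`, and `aesthetic_score` are hypothetical callables standing in for Stable Diffusion, a CLIP relevance scorer, and an aesthetic predictor, and the unweighted sum is an assumption rather than the paper's exact formula.

```python
from typing import Callable, List


def prompt_reward(
    user_prompt: str,
    optimized_prompt: str,
    generate_images: Callable[[str, int], List[object]],  # (prompt, n) -> n images
    clip_similarity: Callable[[str, object], float],      # text-image relevance
    aesthetic_score: Callable[[object], float],           # predicted aesthetics
    n_samples: int = 3,
) -> float:
    """Reward = relevance to the original prompt + aesthetic gain.

    A sketch under the assumptions above: the exact scorers, sampling
    scheme, and term weighting in the paper may differ.
    """
    opt_images = generate_images(optimized_prompt, n_samples)
    base_images = generate_images(user_prompt, n_samples)

    # Relevance: images generated from the optimized prompt are scored
    # against the *original* user prompt, so the policy is rewarded for
    # preserving user intent rather than drifting toward generic prompts.
    relevance = sum(clip_similarity(user_prompt, img) for img in opt_images) / n_samples

    # Aesthetic gain: how much more aesthetically pleasing the
    # optimized-prompt images are than images from the raw user prompt.
    aes_gain = (
        sum(aesthetic_score(img) for img in opt_images) / n_samples
        - sum(aesthetic_score(img) for img in base_images) / n_samples
    )

    return relevance + aes_gain
```

In this framing, the policy (the fine-tuned language model that rewrites prompts) is then optimized against `prompt_reward` with a standard RL algorithm, which matches the abstract's two-stage recipe of supervised fine-tuning followed by reinforcement learning.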