This paper concerns the control of text-guided generative models, where a user provides a natural language prompt and the model generates samples based on this input. Prompting is intuitive, general, and flexible. However, it has significant limitations: prompting can fail in surprising ways, and it is often unclear how to find a prompt that will elicit a desired target behavior. A core difficulty in developing methods to overcome these issues is that failures are know-it-when-you-see-it -- it's hard to fix bugs if you can't state precisely what the model should have done! In this paper, we introduce a formalization of "what the user intended" in terms of latent concepts implicit in the data-generating process that the model was trained on. This formalization allows us to identify some fundamental limitations of prompting. We then use the formalism to develop concept algebra, which overcomes these limitations. Concept algebra is a way of directly manipulating the concepts expressed in the output through algebraic operations on a suitably defined representation of input prompts. We give examples of using concept algebra to overcome limitations of prompting, including concept transfer through arithmetic and concept nullification through projection. Code available at https://github.com/zihao12/concept-algebra.
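The two algebraic operations named above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes prompt representations are plain vectors and that a concept is identified with a linear subspace spanned by some direction vectors; `nullify` and `transfer` are hypothetical helper names chosen for this example.

```python
import numpy as np

def concept_projector(concept_vectors):
    """Orthogonal projector onto the subspace spanned by the rows
    of `concept_vectors` (assumed linearly independent)."""
    Q, _ = np.linalg.qr(concept_vectors.T)  # orthonormal basis as columns
    return Q @ Q.T

def nullify(z, concept_vectors):
    """Concept nullification via projection: remove the component of the
    representation z lying in the concept subspace."""
    P = concept_projector(concept_vectors)
    return z - P @ z

def transfer(z, z_source, concept_vectors):
    """Concept transfer via arithmetic: replace the concept component of z
    with the concept component taken from another representation z_source."""
    P = concept_projector(concept_vectors)
    return z - P @ z + P @ z_source
```

By construction the projector is idempotent, so the nullified representation has no remaining component in the concept subspace, and the transferred representation carries exactly the source's concept component.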