Users interact with text, image, code, or other editors on a daily basis. However, machine learning models are rarely trained in settings that reflect this interactivity between users and their editors. This is understandable, as training AI models with real users is not only slow and costly, but what these models learn may also be specific to particular user interface design choices. Unfortunately, this means most research on text, code, and image generation has focused on non-interactive settings, in which the model is expected to get everything right without accounting for any input from a user who may be willing to help. We introduce a new Interactive Text Generation task that enables training generation models interactively without the cost of involving real users, by using user simulators that provide edits guiding the model towards a given target text. We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts, even when all models are given the same budget of user inputs or edits.
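To make the described setup concrete, the following is a minimal sketch (not the paper's implementation) of the interactive loop outlined above: a generator proposes a draft, a simulated user compares it against the hidden target and returns a small guiding edit, and the generator revises, all within a fixed edit budget. The model and the edit oracle here are toy placeholders introduced purely for illustration.

```python
from difflib import SequenceMatcher
from typing import Optional


def user_simulator_edit(draft: str, target: str) -> Optional[str]:
    """Return one small edit hint: the first target span the draft is missing."""
    matcher = SequenceMatcher(a=draft.split(), b=target.split())
    for tag, _, _, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "insert"):
            return " ".join(target.split()[j1:j2])
    return None  # draft already matches the target


def toy_model(prompt: str, edits: list) -> str:
    """Placeholder generator: simply appends the accumulated user edits."""
    return " ".join([prompt] + edits)


def interactive_generation(prompt: str, target: str, edit_budget: int = 3) -> str:
    edits = []
    draft = toy_model(prompt, edits)
    for _ in range(edit_budget):
        hint = user_simulator_edit(draft, target)
        if hint is None:
            break                         # target reached, stop spending budget
        edits.append(hint)                # the simulated user supplies an edit
        draft = toy_model(prompt, edits)  # model revises conditioned on edits
    return draft


if __name__ == "__main__":
    print(interactive_generation("The cat", "The cat sat on the mat", edit_budget=3))
```

In the paper's setting, the placeholder generator would be a trained sequence model and the simulated edits would supervise it via Imitation Learning; the sketch only illustrates how a user simulator can drive generation toward a target under a fixed edit budget.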