Recent advances in deep learning research, such as the transformer, have bolstered the ability of automated agents to generate creative texts similar to those a human would write. By default, a transformer decoder can only generate new text conditioned on previously generated text: the output distribution over candidate tokens at any position depends on previously selected tokens through a self-attention mechanism that enforces the autoregressive property. This is inherently limiting for tasks such as controllable story generation, where it may be necessary to condition on future plot events when writing a story. In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning. Transformer decoders are typically pretrained to complete a context, one token at a time, by means of self-attention. Future Sight additionally enables the decoder to attend to an encoded future plot event, which motivates it to expand on the context in a way that logically concludes with the provided future. During inference, the future plot event can be written by a human author to steer the generated narrative in a desired direction. We evaluate the efficacy of our approach on a story generation task with human evaluators.
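To make the conditioning mechanism concrete, the following is a minimal PyTorch sketch of one decoder block in the spirit described above: causal self-attention over previously generated tokens, augmented with cross-attention over an encoded future plot event. All class and variable names (FutureConditionedBlock, future_event) are illustrative assumptions for this sketch, not the authors' implementation.

    # Minimal sketch: a decoder block that conditions on an encoded future
    # plot event via cross-attention, in addition to causal self-attention.
    # Names and dimensions are assumptions, not the paper's actual code.
    import torch
    import torch.nn as nn

    class FutureConditionedBlock(nn.Module):
        def __init__(self, d_model: int = 512, n_heads: int = 8):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ff = nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
            )
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)
            self.norm3 = nn.LayerNorm(d_model)

        def forward(self, x: torch.Tensor, future: torch.Tensor) -> torch.Tensor:
            # Causal mask: each position attends only to earlier tokens,
            # the standard autoregressive property of a transformer decoder.
            seq_len = x.size(1)
            causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
            h, _ = self.self_attn(x, x, x, attn_mask=causal)
            x = self.norm1(x + h)
            # Cross-attention over the encoded future plot event, so the
            # next-token distribution is additionally conditioned on where
            # the story should end up.
            h, _ = self.cross_attn(x, future, future)
            x = self.norm2(x + h)
            return self.norm3(x + self.ff(x))

    # Usage: a batch of 2 stories, 16 tokens each, conditioned on a
    # 4-vector encoding of a future plot event.
    block = FutureConditionedBlock()
    tokens = torch.randn(2, 16, 512)
    future_event = torch.randn(2, 4, 512)
    out = block(tokens, future_event)
    print(out.shape)  # torch.Size([2, 16, 512])

In this sketch the future event is assumed to have already been embedded by some encoder; during inference, a human author would write the event text, encode it, and pass it as the cross-attention memory to steer generation.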