Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an "ensemble" of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subsets, we achieve state-of-the-art results with only minor modifications to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online at https://github.com/dennlinger/TSAR-2022-Shared-Task.
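The template-ensemble idea can be sketched as follows. This is a minimal illustration, not the paper's exact prompts or aggregation scheme: the template strings are invented placeholders, and the combination rule shown here (summed inverse-rank voting over the ranked candidate lists returned per prompt) is one plausible way to merge outputs from several prompt variants.

```python
from collections import Counter

# Illustrative prompt templates with varying context levels
# (hypothetical wording, not the submission's actual prompts).
TEMPLATES = [
    "Give a simpler word for '{word}'.",
    "Sentence: {sentence}\nGive a simpler word for '{word}' in this sentence.",
]


def build_prompts(sentence: str, word: str) -> list[str]:
    """Instantiate every template for one (sentence, complex word) pair."""
    return [t.format(sentence=sentence, word=word) for t in TEMPLATES]


def ensemble_rank(candidate_lists: list[list[str]]) -> list[str]:
    """Merge ranked candidate lists (one per prompt) into a single ranking.

    Each candidate is scored by its summed inverse rank across prompts,
    so substitutions proposed early by many templates rise to the top.
    """
    scores: Counter[str] = Counter()
    for ranking in candidate_lists:
        for rank, cand in enumerate(ranking):
            scores[cand] += 1.0 / (rank + 1)
    return [cand for cand, _ in scores.most_common()]
```

In practice each instantiated prompt would be sent to the language model and its response parsed into a ranked candidate list before aggregation; the offline merge step above keeps the example self-contained.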