Annotated data has become the most important bottleneck in training accurate machine learning models, especially for areas that require domain expertise. A recent approach to address this issue proposes using natural language explanations instead of labeling individual data points, thereby substantially increasing human annotators' efficiency and decreasing costs. This paper focuses on the task of turning such natural language descriptions into Python labeling functions, following a novel approach to semantic parsing with pre-trained text-to-text Transformers. In a series of experiments, our approach achieves a new state of the art on the semantic parsing benchmark CoNaLa, surpassing the previous best approach by 3.7 BLEU points. Furthermore, on a manually constructed dataset of pairs of natural language descriptions and labeling functions, we achieve a BLEU of 0.39. Our approach can be regarded as a stepping stone towards models that are taught how to label in natural language, instead of being provided specific labeled samples. Our code, constructed dataset and models are available at https://github.com/ypapanik/t5-for-code-generation.
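To make the target of the task concrete, the following is a minimal hypothetical sketch (not taken from the paper or its dataset): a natural language description such as "label a review as NEGATIVE if it mentions the word 'refund', otherwise abstain" could be parsed into a Python labeling function in the weak-supervision style. The label constants and the function name are illustrative assumptions.

```python
# Hypothetical label values used by this sketch.
NEGATIVE = 0
ABSTAIN = -1

def lf_mentions_refund(text: str) -> int:
    """Labeling function generated from the description:
    'label a review as NEGATIVE if it mentions the word refund,
    otherwise abstain'."""
    return NEGATIVE if "refund" in text.lower() else ABSTAIN
```

A semantic parser as described in the abstract would take the natural language sentence as input and emit source code of this form, which can then be applied programmatically to unlabeled data.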