Current approaches for fixing systematic problems in NLP models (e.g. regex patches, finetuning on more data) are either brittle or labor-intensive and liable to shortcuts. In contrast, humans often provide corrections to each other through natural language. Taking inspiration from this, we explore natural language patches -- declarative statements that allow developers to provide corrective feedback at the right level of abstraction, either overriding the model (``if a review gives 2 stars, the sentiment is negative'') or providing additional information the model may lack (``if something is described as the bomb, then it is good''). We model the task of determining whether a patch applies separately from the task of integrating patch information, and show that with a small amount of synthetic data, we can teach models to effectively use real patches on real data -- 1 to 7 patches improve accuracy by ~1-4 accuracy points on different slices of a sentiment analysis dataset, and F1 by 7 points on a relation extraction dataset. Finally, we show that finetuning on as many as 100 labeled examples may be needed to match the performance of a small set of language patches.
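To make the two-stage idea concrete, here is a minimal sketch of how a patch could be applied at inference time, assuming a soft-gating combination of a "does this patch apply?" score with a patch-conditioned prediction. The functions `gate_score` and `predict_probs` are hypothetical stand-ins for learned models (in practice, finetuned language models), not the paper's actual implementation.

```python
from typing import Optional
import numpy as np

def gate_score(text: str, patch_condition: str) -> float:
    """Return P(patch applies | text) in [0, 1]. Stub for illustration."""
    return 1.0 if "the bomb" in text.lower() else 0.0

def predict_probs(text: str, patch_consequent: Optional[str] = None) -> np.ndarray:
    """Return class probabilities [p_negative, p_positive].
    Stub: the base model misreads slang; the patch-conditioned call does not."""
    if patch_consequent == "it is good":
        return np.array([0.1, 0.9])
    return np.array([0.7, 0.3])

def apply_patch(text: str, condition: str, consequent: str) -> np.ndarray:
    """Soft-gate between the base and patch-conditioned predictions."""
    g = gate_score(text, condition)
    base = predict_probs(text)
    patched = predict_probs(text, patch_consequent=consequent)
    return g * patched + (1.0 - g) * base

probs = apply_patch("This taco is the bomb!",
                    condition="something is described as the bomb",
                    consequent="it is good")
print(probs)  # gated prediction now leans positive
```

The key design point reflected in the sketch is the separation of concerns: the gating component only decides applicability, while the interpreting component only decides what the patch implies for the prediction, so neither has to learn both skills at once.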