Scaling semantic parsing models for task-oriented dialog systems to new languages is often expensive and time-consuming due to the lack of available datasets. Existing datasets suffer from several shortcomings: (a) they cover few languages, (b) they contain small amounts of labeled data per language, and (c) they are based on the simple intent and slot detection paradigm for non-compositional queries. In this paper, we present a new multilingual dataset, called MTOP, comprising 100k annotated utterances in 6 languages across 11 domains. We use this dataset and other publicly available datasets to conduct a comprehensive benchmarking study on using various state-of-the-art multilingual pre-trained models for task-oriented semantic parsing. We achieve an average improvement of +6.3 points in Slot F1 on the two existing multilingual datasets, over the best results reported in their experiments. Furthermore, we demonstrate strong zero-shot performance using pre-trained models combined with automatic translation and alignment, and a proposed distant supervision method that reduces noise in slot label projection.