In this work, we investigate the problem of semantic parsing in a few-shot learning setting, where we are provided with k utterance-logical form pairs per new predicate. State-of-the-art neural semantic parsers achieve less than 25% accuracy on benchmark datasets when k = 1. To tackle this problem, we propose to i) apply a designated meta-learning method to train the model; ii) regularize attention scores with alignment statistics; and iii) apply a smoothing technique during pre-training. As a result, our method consistently outperforms all the baselines in both one- and two-shot settings.
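The attention regularizer mentioned in ii) can be illustrated with a minimal sketch. Assume (hypothetically; the paper's exact formulation may differ) that it penalizes the squared deviation between the parser's attention weights and empirical alignment statistics collected from the training pairs:

```python
import numpy as np

def attention_alignment_penalty(attn, align_stats):
    """MSE penalty pulling attention weights toward alignment statistics.

    attn:        (tgt_len, src_len) softmax attention scores, rows sum to 1
    align_stats: (tgt_len, src_len) empirical alignment distribution

    Hypothetical formulation used for illustration only.
    """
    return float(np.mean((attn - align_stats) ** 2))

# Toy example: attention close to the alignment prior yields a small penalty
attn = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
stats = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
penalty = attention_alignment_penalty(attn, stats)  # 0.025
```

In training, such a penalty would be added to the parsing loss with a weighting coefficient, nudging the model's attention toward token alignments that are reliable even when only one or two examples per predicate are available.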