In this paper we exploit cross-lingual models to enable dialogue act recognition for specific tasks that have only a small number of annotated examples. We design a transfer learning approach for dialogue act recognition and validate it on two different target languages and domains. We compute dialogue-turn embeddings with both a CNN and a multi-head self-attention model, and show that the best results are obtained by combining all sources of transferred information. We further demonstrate that the proposed methods significantly outperform related cross-lingual DA recognition approaches.
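As a rough illustration of the turn-encoding step described above, the following PyTorch sketch (the framework, class, and parameter names are illustrative assumptions, not the paper's actual implementation) derives a single turn embedding from a turn's token embeddings by combining a CNN view and a multi-head self-attention view:

```python
# Minimal sketch, assuming PyTorch and pretrained cross-lingual token embeddings.
# Names (TurnEncoder, emb_dim, n_filters, ...) are hypothetical illustrations.
import torch
import torch.nn as nn

class TurnEncoder(nn.Module):
    """Encodes one dialogue turn from its token embeddings."""
    def __init__(self, emb_dim=300, n_filters=100, kernel_size=3, n_heads=4):
        super().__init__()
        # CNN branch: 1-D convolution over the token sequence
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
        # Self-attention branch: multi-head attention over the tokens
        self.attn = nn.MultiheadAttention(emb_dim, n_heads, batch_first=True)

    def forward(self, x):                              # x: (batch, n_tokens, emb_dim)
        # CNN turn embedding: convolve, then max-pool over tokens
        c = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, n_filters, n_tokens)
        c = c.max(dim=2).values                        # (batch, n_filters)
        # Self-attention turn embedding: attend, then mean-pool over tokens
        a, _ = self.attn(x, x, x)                      # (batch, n_tokens, emb_dim)
        a = a.mean(dim=1)                              # (batch, emb_dim)
        # Combine the two views into one turn embedding
        return torch.cat([c, a], dim=-1)               # (batch, n_filters + emb_dim)

# Usage: a batch of 8 turns, 20 tokens each, 300-dim embeddings
enc = TurnEncoder()
turn_emb = enc(torch.randn(8, 20, 300))
print(turn_emb.shape)                                  # torch.Size([8, 400])
```

The resulting turn embedding can then be fed to a DA classifier in the target language; the combination of both views reflects the paper's finding that pooling all sources of transferred information works best.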