Spoken language understanding (SLU) has typically been addressed as a supervised learning problem, where a separate set of training data is available for each domain. However, annotating data for every domain is financially costly and does not scale, so information should be fully utilized across all domains. One existing approach addresses this by multi-domain learning, using shared parameters for joint training across domains. We propose to improve the parameterization of this method by introducing domain-specific and task-specific model parameters, strengthening both knowledge learning and knowledge transfer. Experiments on 5 domains show that our model is more effective for multi-domain SLU, obtaining the best results. In addition, we demonstrate its transferability by outperforming the prior best model by 12.4\% when adapting to a new domain with little training data.
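As a rough illustration of the parameterization described above (a minimal sketch, not the paper's actual architecture), the PyTorch snippet below combines a shared encoder with per-domain and per-task parameters; all module names, layer choices, and sizes are hypothetical assumptions.

```python
import torch
import torch.nn as nn

class MultiDomainSLU(nn.Module):
    """Sketch: shared parameters trained jointly across domains, plus
    domain-specific projections and task-specific output heads."""

    def __init__(self, vocab_size, hidden, domains, tasks, num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Shared parameters: one encoder jointly trained on all domains.
        self.shared_enc = nn.LSTM(hidden, hidden,
                                  batch_first=True, bidirectional=True)
        # Domain-specific parameters: one projection per domain.
        self.domain_proj = nn.ModuleDict(
            {d: nn.Linear(2 * hidden, hidden) for d in domains})
        # Task-specific parameters: one head per task (e.g. intent, slots).
        self.task_head = nn.ModuleDict(
            {t: nn.Linear(hidden, num_labels[t]) for t in tasks})

    def forward(self, tokens, domain, task):
        h, _ = self.shared_enc(self.embed(tokens))   # shared representation
        h = torch.tanh(self.domain_proj[domain](h))  # adapt to the domain
        # Per-token logits; intent classification would pool over positions.
        return self.task_head[task](h)

# Hypothetical usage: two domains, two SLU tasks.
model = MultiDomainSLU(vocab_size=10000, hidden=128,
                       domains=["weather", "music"],
                       tasks=["intent", "slot"],
                       num_labels={"intent": 12, "slot": 40})
logits = model(torch.randint(0, 10000, (2, 16)),
               domain="weather", task="slot")
```

Under this factorization, adapting to a new domain only requires adding and fitting one small domain-specific projection while reusing the shared and task-specific parameters, which is one plausible reading of the low-data transfer setting reported in the abstract.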