Natural language understanding (NLU) and natural language generation (NLG) hold a strong dual relationship: NLU aims at predicting semantic labels from natural language utterances, while NLG does the opposite. Prior work mainly focused on exploiting this duality during model training in order to obtain models with better performance. However, given the fast-growing scale of models in current NLP, retraining entire NLU and NLG models can be difficult. To better address this issue, this paper proposes to leverage the duality at the inference stage without the need for retraining. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method on both NLU and NLG, showing great potential for practical use.
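As a rough illustration of what duality at inference (rather than at training) can look like, the sketch below reranks the candidates of one frozen model using the backward score of the other. This is a minimal, hypothetical example, not the paper's exact method: the function names `nlg_log_prob` and `nlu_log_prob` and the interpolation weight `alpha` are placeholders assumed for illustration.

```python
from typing import Callable, List, Tuple

def rerank_nlg_candidates(
    semantics: str,
    candidates: List[str],
    nlg_log_prob: Callable[[str, str], float],  # log p(utterance | semantics), frozen NLG model
    nlu_log_prob: Callable[[str, str], float],  # log p(semantics | utterance), frozen NLU model
    alpha: float = 0.5,
) -> List[Tuple[str, float]]:
    """Score each candidate utterance with a weighted sum of the forward (NLG)
    and backward (NLU) log-probabilities, then sort in descending order.
    No parameters are updated, so no retraining is required."""
    scored = []
    for utterance in candidates:
        score = (1 - alpha) * nlg_log_prob(utterance, semantics) \
                + alpha * nlu_log_prob(semantics, utterance)
        scored.append((utterance, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The same idea applies symmetrically in the NLU direction, where candidate semantic labels would be reranked by the NLG model's likelihood of regenerating the input utterance.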