As a promising way to seek specific information through dialog with a bot, question answering dialog systems have attracted increasing research interest in recent years. Designing interactive QA systems has long been a challenging task in natural language processing and serves as a benchmark for evaluating a machine's natural language understanding ability. However, such systems often struggle when users ask questions over multiple turns, seeking further information based on what they have already learned; this gives rise to a more complex setting known as Conversational Question Answering (CQA). CQA systems are often criticized for failing to understand or utilize the previous context of the conversation when answering questions. To address this research gap, in this paper we explore how to integrate conversational history into a neural machine comprehension system. On one hand, we introduce a framework based on the publicly available pre-trained language model BERT for incorporating history turns into the system. On the other hand, we propose a history selection mechanism that selects the turns that are relevant and contribute the most to answering the current question. Experimental results show that our framework is comparable in performance to the state-of-the-art models on the QuAC leaderboard. We also conduct a number of experiments showing the side effects of using the entire conversational context, which introduces unnecessary information and noise and leads to a decline in the model's performance.
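To make the idea of "incorporating history turns" concrete, the sketch below shows one plausible way to prepend selected history turns to the current question before BERT encoding. It is a minimal illustration, not the paper's exact method: it assumes the HuggingFace `transformers` BertTokenizer, and the trivial "keep the k most recent turns" rule stands in for the learned history selection mechanism described above.

```python
# Minimal sketch (assumptions labeled): building a BERT QA input from
# selected conversation history, the current question, and the passage.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def select_history(history, k=2):
    """Placeholder selection rule: keep the k most recent (question, answer)
    turns. The paper's mechanism instead scores turns by their relevance
    to the current question."""
    return history[-k:]

def build_input(passage, question, history, max_len=384):
    """Concatenate selected history turns with the current question as
    segment A and the passage as segment B, in the style of BERT-based
    extractive QA."""
    turns = select_history(history)
    history_text = " ".join(q + " " + a for q, a in turns)
    query = (history_text + " " + question).strip()
    # Truncate only the passage side so the question/history are preserved.
    return tokenizer(query, passage, truncation="only_second",
                     max_length=max_len, return_tensors="pt")

# Toy usage with a two-turn history.
history = [("Who wrote Hamlet?", "William Shakespeare"),
           ("When was it written?", "Around 1600")]
enc = build_input("Hamlet is a tragedy written by William Shakespeare ...",
                  "Where was it first performed?", history)
print(enc["input_ids"].shape)
```

Restricting the input to a few selected turns, rather than the full conversation, reflects the abstract's observation that feeding the entire context adds noise and can hurt performance.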