Conversational Question Answering (ConvQA) models aim to answer a question given its relevant paragraph and the question-answer pairs that occurred earlier in the conversation. To apply such models to real-world scenarios, some existing work uses predicted answers, instead of ground-truth answers that are unavailable at inference time, as the conversation history. However, since these models often predict incorrect answers, using all the predictions without filtering significantly hampers model performance. To address this problem, we propose to filter out inaccurate answers in the conversation history based on their confidences and uncertainties estimated by the ConvQA model, without making any architectural changes. Moreover, to make the confidence and uncertainty values more reliable, we propose to further calibrate them, thereby smoothing the model predictions. We validate our models, Answer Selection-based realistic Conversational Question Answering (AS-ConvQA), on two standard ConvQA datasets, and the results show that our models significantly outperform relevant baselines. Code is available at: https://github.com/starsuzi/AS-ConvQA.
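The filtering idea described above can be illustrated with a minimal sketch: score each predicted answer with a calibrated softmax confidence (temperature scaling is one common calibration technique) and keep only history entries above a threshold. All function names, the temperature value, and the threshold here are illustrative assumptions, not taken from the AS-ConvQA code.

```python
import math

def calibrated_confidence(logits, temperature=1.5):
    """Softmax confidence after temperature scaling, a common calibration method.

    Dividing logits by a temperature > 1 softens overconfident predictions;
    the returned value is the probability of the top-scoring answer.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    return max(exps) / sum(exps)

def filter_history(qa_predictions, threshold=0.5):
    """Keep only predicted answers whose calibrated confidence passes the threshold.

    qa_predictions: list of (question, predicted_answer, answer_logits) triples.
    Returns the (question, answer) pairs retained as conversation history.
    """
    kept = []
    for question, answer, logits in qa_predictions:
        if calibrated_confidence(logits) >= threshold:
            kept.append((question, answer))
    return kept
```

With uniform logits the confidence collapses to 1/n, so such low-certainty predictions would be dropped from the history, while sharply peaked predictions are retained.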