This paper discusses the importance of uncovering uncertainty in end-to-end dialog tasks and presents our experimental results on uncertainty classification using the Ubuntu Dialog Corpus. We show that, instead of retraining a model for this specific purpose, the original retrieval model's underlying confidence in its best prediction can be captured with negligible additional computation.
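As a minimal sketch of how such a confidence signal might be read off an existing retrieval model, the snippet below derives a few common uncertainty indicators (max softmax probability, margin to the runner-up, and entropy) from the candidate scores the model already produces. These particular signals and the function names are illustrative assumptions, not necessarily the exact formulation used in our experiments.

```python
import numpy as np

def softmax(scores):
    """Convert raw candidate scores into a probability distribution."""
    z = scores - np.max(scores)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def confidence_signals(candidate_scores):
    """Derive simple confidence signals for the top-ranked candidate
    from scores a retrieval model already computes (hypothetical helper)."""
    probs = softmax(np.asarray(candidate_scores, dtype=np.float64))
    sorted_probs = np.sort(probs)[::-1]
    return {
        "top_index": int(np.argmax(probs)),
        "top_prob": float(sorted_probs[0]),                  # max softmax probability
        "margin": float(sorted_probs[0] - sorted_probs[1]),  # gap to runner-up
        "entropy": float(-(probs * np.log(probs + 1e-12)).sum()),
    }

# Example: scores for five candidate responses from a retrieval model
signals = confidence_signals([2.3, 1.9, 0.4, -0.7, -1.2])
print(signals)  # a small margin / high entropy would flag the prediction as uncertain
```

Because these quantities reuse the scores the retrieval model outputs for ranking anyway, the only extra work is a softmax and a few reductions per query, which is consistent with the "negligible additional computation" claim above.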