When humans solve complex problems, they typically create a sequence of ideas (involving an intuitive decision, reflection, error correction, etc.) in order to reach a conclusive decision. In contrast, today's models are mostly trained to map an input to a single, fixed output. In this paper, we investigate how we can give models the opportunity of a second, third, and $k$-th thought. Taking inspiration from Hegel's dialectics, we propose the concept of a thought flow, which creates a sequence of predictions. We present a self-correction mechanism that is trained to estimate the model's correctness and performs iterative prediction updates based on the gradient of this correctness prediction. We introduce our method using question answering as an example and conduct extensive experiments that demonstrate (i) our method's ability to correct its own predictions and (ii) its potential to notably improve model performance. In addition, we conduct a qualitative analysis of thought flow correction patterns and explore how thought flow predictions affect human users within a crowdsourcing study. We find that (iii) thought flows enable improved user performance and are perceived as more natural, correct, and intelligent than single and top-3 predictions.
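To illustrate the self-correction idea summarized above, the following sketch shows one gradient-based prediction update: a correctness head scores the current prediction, and the prediction logits are nudged along the gradient of that score. This is our own minimal illustration, not the paper's implementation; names such as `correctness_head`, `hidden`, and `step_size` are hypothetical.

```python
import torch

def thought_flow_step(logits, hidden, correctness_head, step_size=0.1):
    """One illustrative self-correction step: move the prediction logits in the
    direction that increases the estimated correctness score (the "next thought")."""
    logits = logits.detach().requires_grad_(True)
    # Hypothetical correctness head: estimates how likely the current prediction is correct,
    # given the encoder's hidden representation and the current output distribution.
    score = correctness_head(hidden, torch.softmax(logits, dim=-1))
    # Gradient of the correctness estimate with respect to the prediction itself.
    grad = torch.autograd.grad(score.sum(), logits)[0]
    # Update the prediction along the correctness gradient; iterate to obtain a thought flow.
    return (logits + step_size * grad).detach()
```

Iterating this step yields a sequence of predictions rather than a single fixed output, in the spirit of the thought flow described above.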