Neural reasoning accuracy improves when models generate intermediate reasoning steps. However, the source of this improvement remains unclear. Here, we investigate and factorize the benefit of generating intermediate steps for symbolic reasoning. Specifically, we decompose the reasoning strategy with respect to step granularity and chaining strategy. Using a purely symbolic numerical reasoning dataset (e.g., A=1, B=3, C=A+3, C?), we find that the choice of reasoning strategy significantly affects performance, with the gap growing larger as the extrapolation length increases. Surprisingly, we also find that certain configurations achieve nearly perfect performance, even under length extrapolation. Our results highlight the importance of further exploring effective strategies for neural reasoning models.
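The dataset format described above (chains of symbolic assignments such as A=1, B=3, C=A+3, with a query like C?) can be illustrated with a minimal sketch. The generator below is our own illustration of the format, not the paper's actual data-generation code; the sampling choices (constant range, 50% chance of referencing an earlier variable) are assumptions:

```python
import random
import string

def make_problem(n_vars=3, seed=None):
    """Generate a symbolic chain like "A=1, B=3, C=A+3, C?" and its answer."""
    rng = random.Random(seed)
    names = list(string.ascii_uppercase[:n_vars])
    values = {}
    parts = []
    for i, name in enumerate(names):
        if i == 0 or rng.random() < 0.5:
            # Direct assignment to a small constant.
            v = rng.randint(1, 9)
            parts.append(f"{name}={v}")
            values[name] = v
        else:
            # Assignment referencing an earlier variable plus a constant,
            # which is what makes multi-step chaining necessary.
            ref = rng.choice(names[:i])
            c = rng.randint(1, 9)
            parts.append(f"{name}={ref}+{c}")
            values[name] = values[ref] + c
    query = names[-1]
    return ", ".join(parts) + f", {query}?", values[query]

problem, answer = make_problem(n_vars=3, seed=0)
print(problem, "->", answer)
```

Increasing `n_vars` at test time relative to training time is one way to probe the length-extrapolation setting the abstract refers to.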