Evaluating generative models, such as large language models (LLMs), commonly involves question-answering tasks in which the final answer is selected based on the probabilities assigned to the answer choices. For models that produce intermediate reasoning, however, the method of answer extraction plays a critical role. Our research reveals that the performance of reasoning models and their final answer distributions are highly sensitive to the answer extraction algorithm employed. To mitigate this, we propose a simple framework: Answer Regeneration. The method performs an additional model inference, providing the prior input and output together with the prompt "Answer:". The final answer is then selected or extracted from the regenerated output. We show that this extraction-rule-agnostic approach yields improved performance and enhanced robustness. Furthermore, we apply the framework to general math problems and open-ended question-answering tasks. Our analysis and this framework could offer more reliable results for model evaluation.
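To make the described procedure concrete, below is a minimal sketch of the Answer Regeneration step. It assumes a hypothetical `generate(prompt)` wrapper around any LLM inference backend and, as one plausible reading of the abstract, that the "Answer:" prompt is appended after the prior input and output; the trailing extraction step is only illustrative, since the point of the method is that the regenerated output is short and largely insensitive to the extraction rule.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a single LLM inference call
    (e.g., an API request or a local model forward pass)."""
    raise NotImplementedError


def answer_regeneration(question: str, reasoning_output: str) -> str:
    # Second inference: feed the original input and the model's full
    # reasoning output, then prompt the model to restate its final answer.
    regeneration_prompt = f"{question}\n{reasoning_output}\nAnswer:"
    regenerated = generate(regeneration_prompt)

    # Select/extract the final answer from the short regenerated output;
    # here we simply take the first non-empty line as an illustration.
    for line in regenerated.splitlines():
        if line.strip():
            return line.strip()
    return regenerated.strip()
```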