Recent work has explored batch prompting as a strategy to amortize inference cost in large language models (LLMs). In this paper, we show that batching offers an additional, underappreciated benefit: it regularizes the multi-step reasoning behavior of large reasoning models (LRMs). We conduct a comprehensive study across 13 diverse benchmarks and observe that batching improves accuracy while substantially reducing reasoning token usage, often by 3x-5x. Through detailed behavioral analysis, we find that batching suppresses overthinking, reduces hedging language (e.g., repetitive self-corrections), and encourages more decisive answers. Surprisingly, we also observe emergent collective effects in batched inference: models often generalize patterns from earlier examples in a batch to solve harder ones in the same batch. These findings position batching not just as a throughput optimization, but as a powerful inference-time regularizer for more efficient and reliable LLM reasoning.
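To make the batched-inference setting concrete, the following is a minimal sketch of batch prompting: several questions are packed into a single prompt and the model answers all of them in one generation pass, rather than one call per question. The prompt template, the `Q1:`/`A1:` labeling scheme, and the `llm` callable are illustrative assumptions for this sketch, not the exact setup used in the paper.

```python
# Minimal batch-prompting sketch (assumed template, not the paper's exact setup).
import re
from typing import Callable, List


def build_batch_prompt(questions: List[str]) -> str:
    """Pack several questions into one prompt with numbered slots Q1..Qn."""
    lines = ["Answer every question below. Prefix each answer with its label, e.g. 'A1:'.", ""]
    for i, q in enumerate(questions, start=1):
        lines.append(f"Q{i}: {q}")
    return "\n".join(lines)


def parse_batch_answers(response: str, n: int) -> List[str]:
    """Extract answers A1..An from the model's single batched response."""
    answers = []
    for i in range(1, n + 1):
        # Capture everything after 'Ai:' up to the next label or end of text.
        match = re.search(rf"A{i}:\s*(.*?)(?=\nA{i + 1}:|\Z)", response, flags=re.S)
        answers.append(match.group(1).strip() if match else "")
    return answers


def batched_inference(questions: List[str], llm: Callable[[str], str]) -> List[str]:
    """One model call for the whole batch instead of one call per question."""
    prompt = build_batch_prompt(questions)
    return parse_batch_answers(llm(prompt), len(questions))
```

In this setup, the per-call overhead (system prompt, instructions, and any chain-of-thought preamble) is paid once for the whole batch, which is the cost-amortization motivation; the paper's further claim is that sharing one context across examples also changes the reasoning behavior itself.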