We present Voice Evaluation of Reasoning Ability (VERA), a benchmark for evaluating reasoning ability in voice-interactive systems under real-time conversational constraints. VERA comprises 2,931 voice-native episodes derived from established text benchmarks and organized into five tracks (Math, Web, Science, Long-Context, Factual). Each item is adapted for speech interaction while preserving reasoning difficulty. VERA enables direct text-voice comparison within model families and supports analysis of how architectural choices affect reliability. We assess 12 contemporary voice systems alongside strong text baselines and observe large, consistent modality gaps: on competition mathematics a leading text model attains 74.8% accuracy while its voice counterpart reaches 6.1%; macro-averaged across tracks the best text models achieve 54.0% versus 11.3% for voice. Latency-accuracy analyses reveal a low-latency plateau, where fast voice systems cluster around ~10% accuracy, while approaching text performance requires sacrificing real-time interaction. Diagnostic experiments indicate that common mitigations are insufficient. Increasing "thinking time" yields negligible gains; a decoupled cascade that separates reasoning from narration improves accuracy but still falls well short of text and introduces characteristic grounding/consistency errors. Failure analyses further show distinct error signatures across native streaming, end-to-end, and cascade designs. VERA provides a reproducible testbed and targeted diagnostics for architectures that decouple thinking from speaking, offering a principled way to measure progress toward real-time voice assistants that are both fluent and reliably reasoned.