We present M3-SLU, a new multimodal large language model (MLLM) benchmark for evaluating multi-speaker, multi-turn spoken language understanding. While recent models show strong performance in speech and text comprehension, they still struggle with speaker-attributed reasoning: the ability to understand who said what, and when, in natural conversations. M3-SLU is built from four open corpora (CHiME-6, MELD, MultiDialog, and AMI) and comprises over 12,000 validated instances with paired audio, transcripts, and metadata. It includes two tasks: (1) Speaker-Attributed Question Answering and (2) Speaker Attribution via Utterance Matching. We provide baseline results for both cascaded pipelines and end-to-end MLLMs, evaluated with an LLM-as-Judge and accuracy metrics. Results show that while models can capture what was said, they often fail to identify who said it, revealing a key gap in speaker-aware dialogue understanding. M3-SLU serves as a challenging benchmark to advance research in speaker-aware multimodal understanding.