Omnimodal large language models have made significant strides in unifying audio and visual modalities; however, they often lack fine-grained cross-modal understanding and struggle with multimodal alignment. To address these limitations, we introduce OmniAgent, a fully audio-guided active perception agent that dynamically orchestrates specialized tools to achieve finer-grained audio-visual reasoning. Unlike previous works that rely on rigid, static workflows and dense frame captioning, this paper demonstrates a paradigm shift from passive response generation to active multimodal inquiry. OmniAgent employs dynamic planning to autonomously invoke tools on demand, strategically concentrating perceptual attention on task-relevant cues. Central to our approach is a novel coarse-to-fine audio-guided perception paradigm, which leverages audio cues to localize temporal events and guide subsequent reasoning. Extensive empirical evaluations on three audio-video understanding benchmarks demonstrate that OmniAgent achieves state-of-the-art performance, surpassing leading open-source and proprietary models by substantial margins of 10% to 20% in accuracy.
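To make the coarse-to-fine audio-guided perception idea concrete, the sketch below illustrates one plausible reading of the loop: audio cues first narrow the temporal search space, and visual tools are then invoked only on the localized segments. This is a minimal illustrative sketch under our own assumptions, not the paper's implementation; every function and type name (locate_audio_events, vision_tool, reason, AudioEvent) is a hypothetical placeholder.

```python
# Hypothetical sketch of a coarse-to-fine audio-guided perception loop.
# None of these names come from the paper; they are placeholders for
# whatever audio-localization, visual-analysis, and reasoning tools an
# agent might orchestrate.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AudioEvent:
    """A temporally localized audio event (times in seconds)."""
    start: float
    end: float
    label: str


def coarse_to_fine_answer(
    question: str,
    locate_audio_events: Callable[[str], List[AudioEvent]],  # coarse stage: audio event localization
    vision_tool: Callable[[float, float, str], str],         # fine stage: segment-level visual analysis
    reason: Callable[[str, List[str]], str],                 # final reasoning over gathered evidence
) -> str:
    # Coarse stage: use audio to find candidate temporal segments
    # relevant to the question, instead of captioning every frame.
    events = locate_audio_events(question)

    # Fine stage: invoke visual tools on demand, only inside the
    # audio-localized segments.
    evidence = [vision_tool(ev.start, ev.end, question) for ev in events]

    # Aggregate the gathered evidence into a final answer.
    return reason(question, evidence)
```

The design choice the sketch highlights is that tool calls are driven by the question and the audio-localized events, rather than by a fixed dense-captioning pipeline.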