Text-to-Audio-Video (T2AV) generation aims to synthesize temporally coherent video and semantically synchronized audio from natural language, yet its evaluation remains fragmented, often relying on unimodal metrics or narrowly scoped benchmarks that fail to capture cross-modal alignment, instruction following, and perceptual realism under complex prompts. To address this limitation, we present T2AV-Compass, a unified benchmark for comprehensive evaluation of T2AV systems, consisting of 500 diverse and complex prompts constructed via a taxonomy-driven pipeline to ensure semantic richness and physical plausibility. In addition, T2AV-Compass introduces a dual-level evaluation framework that combines objective signal-level metrics for video quality, audio quality, and cross-modal alignment with a subjective MLLM-as-a-Judge protocol for assessing instruction following and realism. Extensive evaluation of 11 representative T2AV systems reveals that even the strongest models fall substantially short of human-level realism and cross-modal consistency, with persistent failures in audio realism, fine-grained synchronization, and instruction following. These results indicate substantial room for improvement in future models and highlight the value of T2AV-Compass as a challenging and diagnostic testbed for advancing text-to-audio-video generation.