With the rapid development of mobile intelligent assistant technologies, multi-modal AI assistants have become essential interfaces for daily user interactions. However, current evaluation methods face challenges including high manual costs, inconsistent standards, and subjective bias. This paper proposes an automated multi-modal evaluation framework based on large language models and multi-agent collaboration. The framework employs a three-tier agent architecture consisting of interaction evaluation agents, semantic verification agents, and experience decision agents. Through supervised fine-tuning of the Qwen3-8B model, the framework achieves high agreement with human expert evaluations. Experimental results on eight major intelligent assistants demonstrate the framework's effectiveness in predicting user satisfaction and identifying generation defects.
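The three-tier architecture can be pictured as a pipeline in which each tier passes its signal downstream. The sketch below is a minimal illustration of that flow; the class names, heuristic scores, and aggregation weights are assumptions for demonstration only, not the paper's actual LLM-backed implementation.

```python
# Hypothetical sketch of the three-tier agent pipeline from the abstract.
# Scoring heuristics here are illustrative placeholders; the paper uses a
# fine-tuned Qwen3-8B model for these judgments.
from dataclasses import dataclass


@dataclass
class Interaction:
    query: str
    response: str


class InteractionEvaluationAgent:
    """Tier 1: scores surface-level interaction quality."""

    def evaluate(self, item: Interaction) -> float:
        # Placeholder heuristic: a non-empty response passes.
        return 1.0 if item.response.strip() else 0.0


class SemanticVerificationAgent:
    """Tier 2: checks that the response is grounded in the query."""

    def verify(self, item: Interaction) -> float:
        # Placeholder heuristic: word overlap between query and response.
        q = set(item.query.lower().split())
        r = set(item.response.lower().split())
        return len(q & r) / max(len(q), 1)


class ExperienceDecisionAgent:
    """Tier 3: aggregates tier-1/tier-2 signals into a final verdict."""

    def decide(self, interaction_score: float, semantic_score: float) -> str:
        combined = 0.5 * interaction_score + 0.5 * semantic_score
        return "satisfied" if combined >= 0.5 else "defective"


def run_pipeline(item: Interaction) -> str:
    tier1 = InteractionEvaluationAgent().evaluate(item)
    tier2 = SemanticVerificationAgent().verify(item)
    return ExperienceDecisionAgent().decide(tier1, tier2)
```

Chaining the tiers this way mirrors the framework's division of labor: low-level interaction checks, semantic grounding, and a final experience-level decision that predicts user satisfaction or flags a generation defect.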