Navigating mobile user interface (UI) applications with large language and vision models, guided by high-level goal instructions, is emerging as an important research field with significant practical implications, such as digital assistants and automated UI testing. Evaluating the effectiveness of existing models in mobile UI navigation requires benchmarks, which are widely used in the literature. Although multiple benchmarks have recently been established to evaluate functional correctness, judged as a binary pass or fail, they do not address the need for multi-dimensional evaluation of the entire UI navigation process. Furthermore, other existing related datasets lack an automated and robust benchmarking suite, making the evaluation process labor-intensive and error-prone. To address these issues, in this paper, we propose a new benchmark named Sphinx for multi-dimensional evaluation of existing models in practical UI navigation. Sphinx provides a fully automated benchmarking suite that enables reproducibility across real-world mobile apps and employs reliable evaluators to assess model progress. In addition to functional correctness, Sphinx includes comprehensive toolkits for multi-dimensional evaluation, such as invariant-based verification, knowledge probing, and knowledge-augmented generation, to assess model capabilities including goal understanding, knowledge and planning, grounding, and instruction following, ensuring a thorough assessment of each sub-process in mobile UI navigation. We benchmark 8 large language and multi-modal models with 13 different configurations on Sphinx. Evaluation results show that all of these models struggle on Sphinx and fail on all test generation tasks. Our further analysis of the multi-dimensional evaluation results underscores current progress and highlights future research directions for improving model effectiveness in mobile UI navigation.
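To make the invariant-based verification idea concrete, the following is a minimal sketch rather than Sphinx's actual toolkit: all names here (`UIState`, `Invariant`, `check_invariants`, and the example widget identifiers) are hypothetical. It illustrates the general pattern of checking each intermediate UI state against declarative invariants, yielding a per-step diagnostic trace instead of a single end-to-end pass/fail verdict.

```python
# Hypothetical sketch of invariant-based verification for one UI-navigation
# step. A real harness would derive the state from an accessibility tree or
# view hierarchy; here a UI state is simply the current screen name plus the
# set of visible widget identifiers.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UIState:
    screen: str
    visible_widgets: set[str] = field(default_factory=set)

@dataclass
class Invariant:
    name: str
    predicate: Callable[[UIState], bool]  # returns True if the invariant holds

def check_invariants(state: UIState, invariants: list[Invariant]) -> dict[str, bool]:
    """Evaluate every invariant against the post-step UI state."""
    return {inv.name: inv.predicate(state) for inv in invariants}

if __name__ == "__main__":
    # Example: after tapping "Add to cart", the cart badge must be visible
    # and the app must still be on the product-detail screen.
    invariants = [
        Invariant("cart_badge_visible", lambda s: "cart_badge" in s.visible_widgets),
        Invariant("still_on_product_page", lambda s: s.screen == "product_detail"),
    ]
    observed = UIState("product_detail", {"cart_badge", "buy_button"})
    print(check_invariants(observed, invariants))
    # {'cart_badge_visible': True, 'still_on_product_page': True}
```

Because each invariant is checked independently, a failed step pinpoints which sub-capability broke (e.g., grounding on the wrong widget versus navigating to the wrong screen), which is the kind of multi-dimensional signal a pass/fail benchmark cannot provide.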