Multimodal Large Language Models (MLLMs) show strong potential for interpreting and interacting with complex, pixel-rich Graphical User Interface (GUI) environments. However, building agents that are both efficient for high-level tasks and precise for fine-grained interactions remains challenging. GUI agents must perform routine actions efficiently while also handling tasks that demand exact visual grounding, yet existing approaches struggle when accuracy depends on identifying specific interface elements. Moreover, existing MLLM-based agents remain large and cannot adapt their reasoning depth to the task at hand. In this work, we introduce iSHIFT: Implicit Slow-fast Hybrid Inference with Flexible Tokens, a lightweight agent that integrates latent thinking (implicit chain-of-thought) with a perception control module. iSHIFT enables an MLLM to switch between a slow mode, which leverages detailed visual grounding for high precision, and a fast mode, which uses global cues for efficiency. Special perception tokens guide attention to relevant screen regions, allowing the model to decide both how to reason and where to focus. Despite its compact 2.5B-parameter size, iSHIFT matches state-of-the-art performance on multiple benchmark datasets.
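As a rough illustration of the slow/fast switching idea sketched above, the snippet below shows one way a routing head could pick a reasoning mode and how learned perception tokens could be conditioned on a grounded screen region. This is a minimal PyTorch sketch under assumed interfaces; the class names (ModeRouter, PerceptionTokens), the two-way SLOW/FAST decision, and the function plan_step are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of slow/fast mode selection with perception tokens.
# All names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn

SLOW, FAST = 0, 1


class ModeRouter(nn.Module):
    """Predicts whether the current step needs slow (grounded) or fast reasoning."""

    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, 2)

    def forward(self, pooled_state: torch.Tensor) -> torch.Tensor:
        # pooled_state: (batch, d_model) summary of instruction + screenshot.
        return self.proj(pooled_state).argmax(dim=-1)  # (batch,) values in {SLOW, FAST}


class PerceptionTokens(nn.Module):
    """Learned tokens that attend over a grounded screen region (assumed form)."""

    def __init__(self, d_model: int, n_tokens: int = 4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (batch, n_patches, d_model) features of the grounded region.
        # Cross-attend the tokens onto the region so they carry "where to look".
        attn = torch.softmax(self.tokens @ region_feats.transpose(1, 2), dim=-1)
        return attn @ region_feats  # (batch, n_tokens, d_model)


def plan_step(pooled_state, region_feats, router, perception):
    """One decision step: pick a mode, then build the extra context for decoding."""
    mode = router(pooled_state)
    if (mode == SLOW).all():
        # Slow path: produce perception tokens tied to the grounded region.
        return perception(region_feats)
    # Fast path: skip fine-grained grounding and rely on the global summary only.
    return pooled_state.unsqueeze(1)


if __name__ == "__main__":
    d_model = 64
    router = ModeRouter(d_model)
    perception = PerceptionTokens(d_model)
    pooled = torch.randn(2, d_model)
    region = torch.randn(2, 16, d_model)
    out = plan_step(pooled, region, router, perception)
    print(out.shape)  # (2, 4, 64) on the slow path, (2, 1, 64) on the fast path
```

In this sketch the fast path reuses only the pooled global state, while the slow path spends extra computation to attend over region-level features, mirroring the efficiency/precision trade-off the abstract describes.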