Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope of bypassing the challenging task of visual modeling from screen pixels. However, view hierarchies are not always available, and they are often corrupted with missing object descriptions or misaligned structure information. As a result, although using view hierarchies can offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that only takes the screenshot of the UI and a region of interest on the screen -- the focus -- as the input. This general architecture of Spotlight is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore the multi-task learning and few-shot prompting capabilities of the proposed model, demonstrating promising results in the multi-task learning direction.
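To make the vision-only input interface concrete, below is a minimal, hypothetical sketch of how a screenshot and a focus region might be packaged for such a model. The names (`SpotlightInput`, `encode_focus_region`) and the normalization scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the vision-only input interface: a UI screenshot plus
# a region of interest (the "focus"), with the region normalized to [0, 1]
# coordinates before being fed to a vision-language model.
from dataclasses import dataclass
import numpy as np

@dataclass
class SpotlightInput:
    screenshot: np.ndarray                      # H x W x 3 RGB pixels of the UI screen
    focus_region: tuple[int, int, int, int]     # (left, top, right, bottom) in pixels

def encode_focus_region(inp: SpotlightInput) -> np.ndarray:
    """Normalize the focus region to [0, 1] screen coordinates."""
    h, w, _ = inp.screenshot.shape
    left, top, right, bottom = inp.focus_region
    return np.array([left / w, top / h, right / w, bottom / h], dtype=np.float32)

# Example: a 1920x1080 screenshot with a focus region around a button.
example = SpotlightInput(
    screenshot=np.zeros((1080, 1920, 3), dtype=np.uint8),
    focus_region=(100, 200, 300, 260),
)
print(encode_focus_region(example))  # approx. [0.052, 0.185, 0.156, 0.241]
```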