Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, in the hope of bypassing the challenging task of visual modeling from screen pixels. However, a view hierarchy is not always available, and is often corrupted by missing object descriptions or misaligned bounding box positions. As a result, although using the view hierarchy offers some short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that takes only the screenshot of the UI and a region of interest on the screen -- the focus -- as the input. This general architecture is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model obtains SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as input. Furthermore, we explore the multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction.
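To make the vision-only input contract concrete, the minimal Python sketch below illustrates the kind of interface the abstract describes: the model receives only screen pixels and a focus region, never a view hierarchy, and the task is selected by a text prompt rather than a task-specific head. All names here (SpotlightInput, run_task, DummyModel) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SpotlightInput:
    # Raw screen pixels; in practice an image tensor, kept abstract here.
    screenshot: bytes
    # The "focus": a region of interest given as normalized
    # (left, top, right, bottom) coordinates on the screen.
    focus: Tuple[float, float, float, float]
    # The task is chosen by a text prompt, e.g. "widget captioning"
    # or "screen summarization", so one model covers many UI tasks.
    task_prompt: str

def run_task(model, inp: SpotlightInput) -> str:
    # One vision-only model serves every task: it conditions on the
    # screenshot, the focus region, and the prompt, and decodes text.
    return model.generate(inp.screenshot, inp.focus, inp.task_prompt)

# Toy stand-in so the sketch runs end to end.
class DummyModel:
    def generate(self, screenshot, focus, task_prompt):
        return f"[{task_prompt}] output for region {focus}"

if __name__ == "__main__":
    inp = SpotlightInput(b"", (0.12, 0.30, 0.48, 0.36), "widget captioning")
    print(run_task(DummyModel(), inp))
```

Because the task is carried entirely by the prompt, extending such an interface to a new UI task requires no architectural change, which is what the abstract means by the architecture being easily scalable.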