Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope of bypassing the challenging task of visual modeling from screen pixels. However, view hierarchies are not always available, and are often corrupted with missing object descriptions or misaligned structure information. As a result, although using view hierarchies can offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose \textit{Spotlight}, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that takes only the screenshot of the UI and a region of interest on the screen -- the focus -- as the input. This general architecture is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes state-of-the-art (SoTA) results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore the multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction.
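To make the vision-only input interface concrete, the following is a minimal sketch (not the authors' implementation) of what the model consumes: raw screen pixels plus a focus region, with no view-hierarchy input at all. The names \texttt{SpotlightInput} and \texttt{encode\_focus}, and the grid-mask encoding of the region, are illustrative assumptions rather than the paper's actual region-encoding scheme.

\begin{verbatim}
# A minimal sketch of a vision-only UI input: screenshot pixels plus a
# focus region of interest. Names and the grid-mask encoding below are
# hypothetical, not the paper's actual architecture.
from dataclasses import dataclass
import numpy as np

@dataclass
class SpotlightInput:
    screenshot: np.ndarray  # (H, W, 3) raw screen pixels; no view hierarchy
    focus: tuple            # normalized (x0, y0, x1, y1) region of interest

def encode_focus(inp: SpotlightInput, grid: int = 32) -> np.ndarray:
    """Rasterize the focus box onto a coarse grid so it can be fed to a
    vision encoder alongside the screenshot (one plausible encoding)."""
    mask = np.zeros((grid, grid), dtype=np.float32)
    x0, y0, x1, y1 = inp.focus
    mask[int(y0 * grid):int(np.ceil(y1 * grid)),
         int(x0 * grid):int(np.ceil(x1 * grid))] = 1.0
    return mask

# Usage: a 1080x720 screen with a focus box in its upper-left quadrant.
inp = SpotlightInput(np.zeros((1080, 720, 3), np.uint8),
                     (0.1, 0.2, 0.5, 0.3))
print(encode_focus(inp).sum())  # number of grid cells covered by the focus
\end{verbatim}

Because every task is expressed over the same (screenshot, focus) input and a text output, one architecture can serve widget captioning, screen summarization, command grounding, and similar UI tasks without task-specific heads.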