Humans can learn to operate the user interface (UI) of an application by reading an instruction manual or how-to guide. Along with text, these resources include visual content such as UI screenshots and images of application icons referenced in the text. We explore how to leverage this data to learn generic visio-linguistic representations of UI screens and their components. These representations are useful in many real applications, such as accessibility, voice navigation, and task automation. Prior UI representation models rely on UI metadata (UI trees and accessibility labels), which is often missing, incompletely defined, or not accessible. We avoid such a dependency and propose Lexi, a pre-trained vision and language model designed to handle the unique features of UI screens, including their text richness and context sensitivity. To train Lexi, we curate the UICaption dataset, consisting of 114k UI images paired with descriptions of their functionality. We evaluate Lexi on four tasks: UI action entailment, instruction-based UI image retrieval, grounding referring expressions, and UI entity recognition.