Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge across this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open-source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and subjected to several quality-control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer lessons learned and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
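As a minimal illustrative sketch (not the paper's evaluation code), the snippet below shows how predicted GUI descriptions could be scored against human-written references with a standard machine translation metric such as corpus-level BLEU via NLTK; the example descriptions are hypothetical placeholders.

```python
# Hedged sketch: corpus-level BLEU between model-generated GUI descriptions
# and human reference descriptions. Example strings are hypothetical.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Each screenshot has several human reference descriptions and one prediction.
references = [
    [
        "this screen lets the user sign in with an email and password".split(),
        "a login screen with fields for email and password".split(),
    ],
    [
        "this screen shows a list of nearby restaurants".split(),
        "a list of restaurants with ratings and distances".split(),
    ],
]
predictions = [
    "a screen where the user signs in with email and password".split(),
    "a screen listing nearby restaurants".split(),
]

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
bleu = corpus_bleu(references, predictions,
                   smoothing_function=SmoothingFunction().method1)
print(f"Corpus BLEU: {bleu:.3f}")
```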