Screen recordings are becoming increasingly important as rich software artifacts that inform mobile application development processes. However, the amount of manual effort required to extract information from these graphical artifacts can hinder resource-constrained mobile developers. This paper presents Video2Scenario (V2S), an automated tool that processes video recordings of Android app usages, applies neural object detection and image classification techniques to classify the depicted user actions, and translates these actions into a replayable scenario. We conducted a comprehensive evaluation to demonstrate V2S's ability to reproduce recorded scenarios across a range of devices and a diverse set of usage scenarios and applications. The results indicate that, based on its performance on 175 videos depicting 3,534 GUI-based actions, V2S is able to accurately reproduce $\approx$89\% of the actions from the collected videos.