Screen recordings of mobile applications are easy to capture and contain a wealth of information, making them a popular mechanism for users to show developers the problems they encountered in bug reports. However, watching bug recordings and understanding the semantics of user actions can be time-consuming and tedious for developers. Inspired by the concept of video subtitles in the movie industry, we present CAPdroid, a lightweight approach that captions bug recordings automatically. CAPdroid is a purely image-based, non-intrusive approach that uses image processing and convolutional deep learning models to segment bug recordings, infer user action attributes, and generate subtitle descriptions. Automated experiments demonstrate the good performance of CAPdroid in inferring user actions from recordings, and a user study confirms the usefulness of the generated step descriptions in assisting developers with bug replay.
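The abstract outlines a three-stage pipeline: segment the recording into per-action clips, infer the attributes of each user action, and render a subtitle-style description. The following Python sketch illustrates only that structure under stated assumptions; every name here (segment_recording, infer_action, describe_action, UserAction) is a hypothetical placeholder and the stage bodies are stubbed, so this is not CAPdroid's actual implementation or API.

```python
"""Minimal structural sketch of a segment -> infer -> describe captioning
pipeline. All names are hypothetical placeholders, not CAPdroid's API."""

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class UserAction:
    kind: str                  # e.g. "tap", "scroll", "input"
    target: str                # GUI element the action operates on
    frames: Tuple[int, int]    # start/end frame of the action clip


def segment_recording(video_path: str) -> List[Tuple[int, int]]:
    # Placeholder: split the recording into per-action clips
    # (e.g. via frame-difference analysis in the real system).
    return [(0, 30), (31, 75)]


def infer_action(video_path: str, clip: Tuple[int, int]) -> UserAction:
    # Placeholder: a CNN-based model would classify the action type and
    # locate the touched GUI element; here we return a dummy tap action.
    return UserAction(kind="tap", target="Login button", frames=clip)


def describe_action(action: UserAction) -> str:
    # Render a subtitle-style natural-language description of one action.
    start, end = action.frames
    return f"Frames {start}-{end}: {action.kind} on {action.target}"


def caption_recording(video_path: str) -> List[str]:
    # Full pipeline: segment the video, infer each action, describe it.
    return [describe_action(infer_action(video_path, clip))
            for clip in segment_recording(video_path)]


if __name__ == "__main__":
    for subtitle in caption_recording("bug_recording.mp4"):
        print(subtitle)
```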