Mobile applications have become an essential part of daily life, making quality assurance for them an important activity. Graphical User Interface (GUI) testing is a quality-assurance method that is frequently applied to mobile apps. When conducting GUI testing, it is important to generate effective text inputs for text-input components. Some GUIs require valid text inputs before the app can move from one page to the next, which poses a challenge for achieving complete UI exploration. Recently, Large Language Models (LLMs) have demonstrated excellent text-generation capabilities. To the best of our knowledge, no empirical study has yet evaluated the effectiveness of different pre-trained LLMs at generating text inputs for mobile GUI testing. This paper reports on a large-scale empirical study that extensively investigates the effectiveness of nine state-of-the-art LLMs in Android text-input generation for UI pages. We collected 114 UI pages from 62 open-source Android apps and extracted contextual information from these pages to construct prompts for the LLMs to generate text inputs. The experimental results show that some LLMs generate more effective and higher-quality text inputs than others, achieving page-pass-through rates (PPTRs) of 50.58% to 66.67%. We also found that providing more complete UI contextual information increases the PPTRs of LLMs for text-input generation. In addition, we conducted an experiment, based on 37 real-world text-input-related bugs, to evaluate the bug-detection capabilities of LLMs when asked to generate invalid text inputs directly. The results show that directly generating invalid text inputs with LLMs is insufficient for bug detection: the bug-detection rates of all nine LLMs are below 23%. Finally, we describe six insights gained regarding the use of LLMs for Android testing, which will benefit the Android testing community.
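To make the prompt-construction step concrete, the sketch below shows one way contextual information extracted from a UI page (app name, activity, widget hint, nearby labels) could be assembled into a prompt for an LLM. The field names, the example app, and the template wording are illustrative assumptions, not the paper's actual extraction pipeline or prompt template.

```python
# Hypothetical sketch of prompt construction from UI contextual information.
# Field names and the prompt template are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TextInputContext:
    """Contextual information extracted from one text-input widget on a UI page."""
    app_name: str
    activity_name: str
    resource_id: str
    hint_text: str
    nearby_labels: List[str] = field(default_factory=list)


def build_prompt(ctx: TextInputContext) -> str:
    """Compose a natural-language prompt asking an LLM for a valid text input."""
    labels = ", ".join(ctx.nearby_labels) or "none"
    return (
        f"The app '{ctx.app_name}' shows the page '{ctx.activity_name}'. "
        f"It contains a text field (id: {ctx.resource_id}) with the hint "
        f"'{ctx.hint_text}'. Nearby labels: {labels}. "
        "Reply with only a valid input string for this field."
    )


if __name__ == "__main__":
    # Example values are made up; any open-source Android app would do.
    ctx = TextInputContext(
        app_name="ExampleNotesApp",
        activity_name="CreateNoteActivity",
        resource_id="note_title_input",
        hint_text="Note title",
        nearby_labels=["Save note"],
    )
    print(build_prompt(ctx))
    # The resulting prompt would then be sent to the LLM under test, and the
    # generated string filled into the widget to check whether the page
    # transition succeeds (the basis of the PPTR metric).
```

Such a prompt would be issued once per text-input widget on a page; whether the page transition succeeds after filling in the generated strings is what the PPTR measures.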