Automated model-based test generation presents a viable alternative to the costly manual test creation currently employed for regression testing of web apps. However, existing model inference techniques rely on threshold-based whole-page comparison to establish state equivalence, which cannot reliably identify near-duplicate web pages in modern web apps. Consequently, existing techniques produce inadequate models of dynamic web apps and fragile test oracles, rendering the generated regression test suites ineffective. We propose a model-based test generation technique, FRAGGEN, that eliminates the need for thresholds by employing a novel state abstraction based on page fragmentation to establish state equivalence. FRAGGEN also uses fine-grained page fragment analysis to diversify state exploration and generate reliable test oracles. Our evaluation shows that FRAGGEN outperforms existing whole-page techniques by detecting more near-duplicates, inferring better web app models, and generating test suites that are better suited for regression testing. On a dataset of 86,165 state pairs, FRAGGEN detected, on average, 123% more near-duplicates than whole-page techniques. The crawl models inferred by FRAGGEN have, on average, 62% higher precision and 70% higher recall. FRAGGEN also generates reliable regression test suites whose test actions achieve a nearly 100% success rate on the same version of the web app, even when the execution environment varies. The test oracles generated by FRAGGEN detect 98.7% of the visible changes in web pages while being highly robust, making them suitable for regression testing.
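To make the idea of fragment-based state equivalence concrete, the following is a minimal, hypothetical sketch (not the authors' FRAGGEN implementation): each page is assumed to be split into fragments, each normalized fragment is hashed, and two states are treated as near-duplicates when their fragment sets differ in at most one fragment per page rather than by comparing whole pages against a similarity threshold. The helper names (`fragment_hashes`, `are_near_duplicates`), the normalization, and the near-duplicate rule are illustrative assumptions only.

```python
# Illustrative sketch of fragment-based state equivalence; the fragmentation,
# normalization, and near-duplicate rule here are assumptions for exposition
# and do not reproduce FRAGGEN's actual algorithm.
import hashlib
import re


def fragment_hashes(fragments):
    """Hash each fragment after collapsing whitespace and case, so cosmetic
    differences do not create spurious new states."""
    hashes = set()
    for frag in fragments:
        normalized = re.sub(r"\s+", " ", frag).strip().lower()
        hashes.add(hashlib.sha1(normalized.encode("utf-8")).hexdigest())
    return hashes


def are_near_duplicates(fragments_a, fragments_b):
    """Treat two states as near-duplicates if each page has at most one
    fragment that does not appear in the other (a rule chosen purely for
    illustration)."""
    ha, hb = fragment_hashes(fragments_a), fragment_hashes(fragments_b)
    return len(ha - hb) <= 1 and len(hb - ha) <= 1


# Example: two product pages that differ only in a recommendation widget.
page_a = ["<header>Shop</header>", "<div id='item'>Blue shirt</div>",
          "<div id='recs'>You may like: socks</div>"]
page_b = ["<header>Shop</header>", "<div id='item'>Blue shirt</div>",
          "<div id='recs'>You may like: hats</div>"]
print(are_near_duplicates(page_a, page_b))  # True
```

A whole-page comparison of the same two pages would have to pick a similarity threshold that tolerates the changed widget yet still separates genuinely distinct states, which is exactly the brittleness the fragment-level view avoids.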