Automated testing tools typically create test cases that differ from those that human testers create. This often makes the tools less effective and the generated tests harder to understand, and thus results in the tools providing less support to human testers. Here, we propose a framework, grounded in cognitive science and in particular in an analysis of approaches to problem-solving, for identifying the cognitive processes of testers. The framework helps map the test design steps and criteria used in human test activities, and thus leads to a better understanding of how effective human testers perform their tasks. Ultimately, our goal is to mimic how humans create test cases and thus to design more human-like automated test generation systems. We posit that such systems can better augment and support testers in a way that is meaningful to them.