Agent-based models play an important role in simulating complex emergent phenomena and supporting critical decisions. In this context, a software fault may result in poorly informed decisions that lead to disastrous consequences. The ability to rigorously test these models is therefore essential. In this systematic literature review, we answer five research questions related to the key aspects of test case generation in agent-based models: What are the information artifacts used to generate tests? How are these tests generated? How is a verdict assigned to a generated test? How is the adequacy of a generated test suite measured? What level of abstraction of an agent-based model is targeted by a generated test? Our results show that whilst the majority of techniques are effective for testing functional requirements at the agent and integration levels of abstraction, there are comparatively few techniques capable of testing society-level behaviour. Additionally, we identify a need for more thorough evaluation using realistic case studies that feature challenging properties associated with a typical agent-based model.