The increasing use of machine learning models raises the question of their reliability. The current practice of testing with limited data is often insufficient. In this paper, we present a framework for automated test-data synthesis to test black-box ML/DL models. We address the key challenge of generating realistic, user-controllable data with model-agnostic coverage criteria to test a varied set of properties and, ultimately, to increase trust in machine learning models. We experimentally demonstrate the effectiveness of our technique.