This paper critically assesses the adequacy and representativeness of physical-domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as "real world." Despite this framing, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical-domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.