Deep learning models deployed in safety-critical applications such as autonomous driving are tested for robustness against adversarial attacks in simulation, where realistic conditions can be reproduced. However, these simulations are non-differentiable, forcing researchers to craft attacks that cannot incorporate simulated environmental factors, which reduces attack success. To address this limitation, we introduce UNDREAM, the first software framework that bridges photorealistic simulators and differentiable renderers to enable end-to-end optimization of adversarial perturbations on any 3D object. UNDREAM offers complete control over weather, lighting, backgrounds, camera angles, trajectories, and realistic human and object movements, allowing the creation of diverse scenes. We showcase a wide array of distinct, physically plausible adversarial objects that UNDREAM lets researchers explore swiftly across different configurable environments. This combination of photorealistic simulation and differentiable optimization opens new avenues for research on physical adversarial attacks.
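To make the core idea concrete, the following is a minimal, hypothetical sketch of end-to-end optimization through a differentiable rendering pipeline; it does not use UNDREAM's actual API. A toy "renderer" applies a simulator-supplied lighting gain to an object texture, a linear "classifier" scores the rendered image, and the analytic gradient is used to perturb the texture within a small physical budget:

```python
import numpy as np

# Illustrative assumption: rendered pixel = lighting gain * texture pixel,
# and a linear classifier scores the rendered image. Because the whole
# pipeline is differentiable, gradients flow from the classifier loss
# back to the object texture, the idea behind coupling a simulator
# with a differentiable renderer.
rng = np.random.default_rng(0)
H, W = 8, 8
texture = rng.uniform(0.4, 0.6, size=(H, W))  # benign object texture
lighting = 0.8                                # environment factor from the "simulator"
w = rng.normal(size=(H * W,))                 # toy linear classifier weights

def render(tex, light):
    """Differentiable toy render: per-pixel lighting scaling."""
    return light * tex

def score(tex, light):
    """Classifier confidence for the true class (to be driven down)."""
    return float(w @ render(tex, light).ravel())

# Analytic gradient of the score w.r.t. the texture: d(score)/d(tex) = light * w
grad = (lighting * w).reshape(H, W)

eps, steps, lr = 0.05, 20, 0.01
adv = texture.copy()
s0 = score(texture, lighting)
for _ in range(steps):
    adv = adv - lr * np.sign(grad)                     # signed gradient descent on the score
    adv = np.clip(adv, texture - eps, texture + eps)   # keep the perturbation small/physical
    adv = np.clip(adv, 0.0, 1.0)                       # valid texture range
s1 = score(adv, lighting)
print(s0, s1)  # adversarial score is lower than the benign score
```

In a real pipeline the linear classifier would be a deep network, the lighting gain would be replaced by the full differentiable renderer, and environment parameters such as weather and camera pose would come from the photorealistic simulator.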