Server breaches are an unfortunate reality of today's Internet. In the context of deep neural network (DNN) models, they are particularly harmful, because a leaked model gives an attacker "white-box" access for generating adversarial examples, a threat model under which no practical robust defenses exist. For practitioners who have invested years and millions into proprietary DNNs, e.g., for medical imaging, this seems like an inevitable disaster looming on the horizon. In this paper, we consider the problem of post-breach recovery for DNN models. We propose Neo, a new system that creates new versions of leaked models, alongside an inference-time filter that detects and removes adversarial examples generated on previously leaked models. The classification surfaces of different model versions are slightly offset (by introducing hidden distributions), and Neo detects the overfitting of attacks to the leaked model used in their generation. We show that, across a variety of tasks and attack methods, Neo filters out attacks from leaked models with very high accuracy and provides strong protection (7--10 recoveries) against attackers who repeatedly breach the server. Neo also performs well against a variety of strong adaptive attacks, with only a slight drop in the number of recoverable breaches, and demonstrates its potential as a complement to DNN defenses in the wild.
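To make the filtering intuition concrete, below is a minimal PyTorch sketch of one way an inference-time cross-version filter could operate. This is not Neo's actual implementation: the function `filter_adversarial`, the use of KL divergence as the overfitting signal, and the `divergence_threshold` value are all illustrative assumptions. The idea follows the abstract directly: because each version's classification surface is slightly offset, a benign input yields consistent predictions across versions, while an attack optimized against a leaked version behaves inconsistently on the current one.

```python
import torch
import torch.nn.functional as F

def filter_adversarial(x, current_model, leaked_versions,
                       divergence_threshold=0.5):
    """Flag inputs whose predictions diverge sharply between the current
    model version and previously leaked versions.

    An adversarial example optimized on a leaked model overfits to that
    model's (slightly offset) classification surface, so its effect fails
    to carry over to the new version, producing a large divergence.
    All names and the threshold here are hypothetical, for illustration.
    """
    with torch.no_grad():
        p_current = F.softmax(current_model(x), dim=-1)
        for leaked in leaked_versions:
            p_leaked = F.softmax(leaked(x), dim=-1)
            # KL(p_leaked || p_current) as a cross-version overfitting signal
            kl = F.kl_div(p_current.log(), p_leaked, reduction="batchmean")
            if kl.item() > divergence_threshold:
                return True   # reject: likely crafted on a leaked version
    return False              # accept: consistent across versions
```

A real deployment would calibrate the threshold on held-out benign data to bound the false-positive rate before rejecting any queries.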