Open software supply chain attacks, once successful, can exact heavy costs in mission-critical applications. As open-source ecosystems for deep learning flourish and become increasingly universal, they present attackers with previously unexplored avenues for code-injecting malicious backdoors into deep neural network models. This paper proposes Flareon, a small, stealthy, seemingly harmless code modification that specifically targets the data augmentation pipeline with motion-based triggers. Flareon neither alters ground-truth labels, nor modifies the training loss objective, nor does it assume prior knowledge of the victim model architecture, training data, or training hyperparameters. Yet, it has a surprisingly large ramification on training: models trained under Flareon learn powerful target-conditional (or "any2any") backdoors. The resulting models can exhibit high attack success rates for any target choice, and better clean accuracies than backdoor attacks that not only seize greater control, but also assume more restrictive attack capabilities. We also demonstrate the effectiveness of Flareon against recent defenses. Flareon is fully open-source and available online to the deep learning community: https://github.com/lafeat/flareon.
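To make the attack surface concrete, the following is a minimal, hypothetical sketch of a motion-based trigger hidden inside a data augmentation step. It is not the paper's implementation: the function name, the nearest-neighbour resampling, and the fixed offset field are illustrative assumptions. The key property it illustrates is that the warp is applied to the image alone, so labels and the loss objective remain untouched, exactly as an innocuous augmentation would.

```python
import numpy as np

def motion_trigger_augment(image: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Warp `image` (H, W, C) by a per-pixel integer motion field `offsets` (H, W, 2).

    Hypothetical stand-in for a motion-based trigger embedded in the
    augmentation pipeline: the image is warped, the label is left unchanged,
    so the step is indistinguishable in form from a benign augmentation.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift sampling coordinates by the trigger's motion field, clipping so
    # we always resample inside the image (nearest-neighbour sampling).
    src_y = np.clip(ys + offsets[..., 0], 0, h - 1)
    src_x = np.clip(xs + offsets[..., 1], 0, w - 1)
    return image[src_y, src_x]
```

In a compromised training loop, such a transform would be applied to a fraction of each batch with no change to targets, which is why the modification can pass casual code review while still planting a conditional backdoor.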