Logic locking has been proposed to safeguard intellectual property (IP) during chip fabrication. Logic locking techniques protect hardware IP by making a subset of combinational modules in a design dependent on a secret key that is withheld from untrusted parties. If an incorrect secret key is used, the locked modules produce a set of deterministic errors, restricting unauthorized use. Neural accelerators are a common target for logic locking, especially as machine-learning-as-a-service becomes more prevalent. In this work, we explore how logic locking can be used to compromise the security of the very neural accelerator it protects. Specifically, we show how the deterministic errors caused by incorrect keys can be harnessed to produce neural-trojan-style backdoors. To do so, we first outline a motivational attack scenario in which a carefully chosen incorrect key, which we call a trojan key, produces misclassifications for an attacker-specified input class in a locked accelerator. We then develop a theoretically robust attack methodology to automatically identify trojan keys. To evaluate this attack, we launch it on several locked accelerators. In our largest benchmark accelerator, the attack identified a trojan key that caused a 74\% decrease in classification accuracy for attacker-specified trigger inputs, while degrading accuracy by only 1.7\% on average for other inputs.
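As a rough formalization of the search sketched above (the notation here is ours and not drawn from the paper), a trojan key can be viewed as an incorrect key that maximizes error on the attacker-specified trigger class while keeping error on all other inputs low:
\[
k^{\ast} \;=\; \operatorname*{arg\,max}_{k \in \mathcal{K}\setminus\{k_{\mathrm{c}}\}}
\;\Pr_{x \sim \mathcal{D}_{\mathrm{trig}}}\!\bigl[f_{k}(x) \neq y(x)\bigr]
\;-\; \lambda \,\Pr_{x \sim \mathcal{D}_{\mathrm{other}}}\!\bigl[f_{k}(x) \neq y(x)\bigr],
\]
where $f_{k}$ denotes the locked accelerator evaluated under key $k$, $k_{\mathrm{c}}$ is the correct key, $\mathcal{D}_{\mathrm{trig}}$ and $\mathcal{D}_{\mathrm{other}}$ are the trigger and non-trigger input distributions, $y(x)$ is the correct label, and $\lambda$ trades attack strength against stealth. This is only an illustrative objective consistent with the reported results (high trigger-class degradation, low collateral degradation), not the paper's stated formulation.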