Deep neural network (DNN) models are deemed confidential due to their unique value: the expensive training effort, privacy-sensitive training data, and proprietary network characteristics they embody. This value, in turn, creates incentives for adversaries to steal models for profit, most notably via model extraction attacks. Emerging attacks can leverage timing-sensitive architecture-level events (i.e., Arch-hints) disclosed by hardware platforms to accurately extract a DNN model's layer information. In this paper, we take the first step toward uncovering the root cause of such Arch-hints and summarize the principles for identifying them. We then apply these principles to the emerging Unified Memory (UM) management system and identify three new Arch-hints caused by UM's unique data movement patterns. Building on these, we develop a new extraction attack, UMProbe. We also create the first DNN benchmark suite in UM and use it to evaluate UMProbe. Our evaluation shows that UMProbe extracts the layer sequence with 95% accuracy for almost all victim test models, which calls for more attention to DNN security in UM systems.