Neural network (NN) algorithms have become the dominant tool in visual object recognition, natural language processing, and robotics. To enhance the computational efficiency of these algorithms beyond that of traditional von Neumann computing architectures, researchers have been focusing on memristor computing systems. A major drawback of memristor computing systems today is that, in the artificial intelligence (AI) era, well-trained NN models are valuable intellectual property and, once loaded into a memristor computing system, face theft threats, especially when running on edge devices. An adversary may steal a well-trained NN model through advanced attacks such as learning attacks and side-channel analysis. In this paper, we review different security techniques for protecting memristor computing systems. Two threat models are described based on their assumptions regarding the adversary's capabilities: a black-box (BB) model and a white-box (WB) model. We categorize the existing security techniques into five classes in the context of these threat models: thwarting learning attacks (BB), thwarting side-channel attacks (BB), NN model encryption (WB), NN weight transformation (WB), and fingerprint embedding (WB). We also present a cross-comparison of the limitations of the security techniques. This paper can serve as an aid when designing secure memristor computing systems.