To promote secure and private artificial intelligence (SPAI), we review studies on the model security and data privacy of deep neural networks (DNNs). Model security allows a system to behave as intended without being affected by malicious external influences that can compromise its integrity and efficiency. Security attacks can be categorized by when they occur: an attack mounted during training is known as a poisoning attack, whereas one mounted during inference (after training) is termed an evasion attack. Poisoning attacks compromise the training process by corrupting the training data with malicious examples, while evasion attacks use adversarial examples to disrupt the classification process. Defenses proposed against such attacks include techniques that recognize and remove malicious data, train the model to be insensitive to such data, and mask the model's structure and parameters to make attacks more difficult to mount. Furthermore, the privacy of the data involved in model training is threatened by attacks such as the model-inversion attack, as well as by dishonest service providers of AI applications. To preserve data privacy, several solutions that combine existing data-privacy techniques have been proposed, including differential privacy and modern cryptographic techniques. In this paper, we describe the notions underlying some of these methods, e.g., homomorphic encryption, and review their advantages and the challenges of implementing them in deep-learning models.
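To make the notion of an evasion attack concrete, the sketch below crafts an adversarial example with the fast gradient sign method (FGSM), a canonical way to perturb an input so a classifier misreads it. This is an illustrative example rather than a method taken from the surveyed works, and the toy model, inputs, and `epsilon` value are hypothetical stand-ins.

```python
# Minimal FGSM sketch (an evasion attack): perturb an input in the
# direction that most increases the classifier's loss. The model and
# data below are hypothetical stand-ins, not from the paper.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x under an L-infinity budget epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step maximally increases the loss per unit of perturbation.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Toy usage: a linear classifier on a random "image" standing in for MNIST.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # stand-in input
y = torch.tensor([3])          # stand-in label
x_adv = fgsm_adversarial_example(model, x, y)
```

The perturbation is typically imperceptible to a human, which is what makes evasion attacks disruptive in practice.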
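On the privacy side, differential privacy can be illustrated with the Laplace mechanism, which releases a noisy query answer whose noise scale is calibrated to the query's sensitivity. The sketch below is a textbook instance, not a method from the surveyed works; the dataset, sensitivity bound, and `epsilon` are illustrative assumptions.

```python
# Minimal Laplace-mechanism sketch: adding Laplace noise with scale
# sensitivity/epsilon makes a single numeric query epsilon-differentially
# private. The data and sensitivity bound below are illustrative assumptions.
import numpy as np

def laplace_mechanism(query_result, sensitivity, epsilon, rng=None):
    """Release query_result with noise calibrated for epsilon-DP."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return query_result + noise

# Toy usage: privately release the mean of a small (hypothetical) age dataset.
ages = np.array([34, 29, 41, 52, 38])
# If ages are assumed bounded in [0, 100], one person changes the mean by at most 100/n.
sensitivity = 100 / len(ages)
private_mean = laplace_mechanism(ages.mean(), sensitivity, epsilon=1.0)
```

A smaller `epsilon` gives a stronger privacy guarantee at the cost of noisier answers, which is the central accuracy/privacy trade-off the differential-privacy literature studies.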