While artificial intelligence (AI) is widely applied in various areas, it is also being used maliciously. Studying and anticipating AI-powered attacks is necessary to defend against them in advance. Turning neural network models into stegomalware is one such malicious use of AI: it exploits the characteristics of neural network models to hide malware while preserving the models' performance. However, existing methods suffer from a low malware embedding rate and a high impact on model performance, making them impractical. Therefore, by analyzing the composition of neural network models, this paper proposes new methods to embed malware in models with high capacity and no degradation in service quality. We used 19 malware samples and 10 mainstream models to build 550 malware-embedded models and analyzed their performance on the ImageNet dataset. A new evaluation method that combines the embedding rate, the impact on model performance, and the embedding effort is proposed to assess the existing methods. This paper also designs a trigger and proposes an application scenario for attack tasks that combines EvilModel with WannaCry. It further studies the relationship between a neural network model's embedding capacity and its structure, layers, and size. With the widespread application of artificial intelligence, utilizing neural networks for attacks is becoming an emerging trend. We hope this work can provide a reference scenario for defending against neural network-assisted attacks.
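To make the "embedding rate" metric mentioned above concrete, the sketch below computes the ratio of a hidden payload's size to the total byte size of a model's float32 parameters. This is a minimal illustration only, assuming PyTorch/torchvision are available; the function name `embedding_rate` and the example payload size are hypothetical and not the paper's exact formulation or results.

```python
import torch
import torchvision.models as models


def embedding_rate(model: torch.nn.Module, embedded_bytes: int) -> float:
    """Hypothetical embedding-rate metric: payload size divided by the
    total size (in bytes) of the model's float32 parameters."""
    total_params = sum(p.numel() for p in model.parameters())
    model_bytes = total_params * 4  # each float32 parameter occupies 4 bytes
    return embedded_bytes / model_bytes


# Example: capacity check for a mainstream ImageNet model (AlexNet).
# `weights=None` requires torchvision >= 0.13; older versions use pretrained=False.
net = models.alexnet(weights=None)
print(f"embedding rate: {embedding_rate(net, embedded_bytes=38_000_000):.2%}")
```

A full evaluation in the spirit of the abstract would weigh this rate against the change in test accuracy on ImageNet and the effort required to embed and extract the payload.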