Cyber-defense systems are being developed to automatically ingest Cyber Threat Intelligence (CTI), which contains semi-structured data and/or text, to populate knowledge graphs. A potential risk is that fake CTI can be generated and spread through Open-Source Intelligence (OSINT) communities or on the Web to effect a data poisoning attack on these systems. Adversaries can use fake CTI examples as training input to subvert cyber-defense systems, forcing the model to learn incorrect inputs that serve their malicious needs. In this paper, we automatically generate fake CTI text descriptions using transformers. We show that, given an initial prompt sentence, a public language model like GPT-2 with fine-tuning can generate plausible CTI text capable of corrupting cyber-defense systems. We use the generated fake CTI text to perform a data poisoning attack on a Cybersecurity Knowledge Graph (CKG) and a cybersecurity corpus. The poisoning attack introduced adverse impacts such as incorrect reasoning outputs, representation poisoning, and corruption of other dependent AI-based cyber-defense systems. We evaluate the generated text with traditional approaches and conduct a human evaluation study with cybersecurity professionals and threat hunters. Based on the study, professional threat hunters were as likely to consider our generated fake CTI to be true as genuine CTI.