Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by their autonomous decision-making and adaptive behaviors. Agentic AI systems are increasingly deployed across industries, organizations, and critical sectors such as cybersecurity, finance, and healthcare. Their autonomy, however, introduces unique security challenges, including unauthorized actions, adversarial manipulation, and dynamic environmental interactions. Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems using the Design Science Research (DSR) methodology. The paper introduces MAAIS, an agentic security framework, together with the agentic AI CIAA (Confidentiality, Integrity, Availability, and Accountability) concept. MAAIS integrates multiple defense layers to maintain CIAA across the AI lifecycle. The framework is validated by mapping it to the established MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) tactics. The study contributes a structured, standardized, framework-based approach to the secure deployment and governance of agentic AI in enterprise environments. Intended for enterprise CISOs and security, AI platform, and engineering teams, the framework offers a detailed, step-by-step approach to securing agentic AI workloads.