This paper introduces the Agentic AI Governance Assurance & Trust Engine (AAGATE), a Kubernetes-native control plane designed to address the unique security and governance challenges posed by autonomous, language-model-driven agents in production. Recognizing that traditional Application Security (AppSec) tooling cannot keep pace with improvisational, machine-speed systems, AAGATE operationalizes the NIST AI Risk Management Framework (AI RMF). It pairs the RMF's Map, Measure, and Manage functions with specialized security frameworks: the MAESTRO agentic-AI threat-modeling framework for Map, a hybrid of OWASP's AIVSS and SEI's SSVC for Measure, and the Cloud Security Alliance's Agentic AI Red Teaming Guide for Manage. By incorporating a zero-trust service mesh, an explainable policy engine, behavioral analytics, and decentralized accountability hooks, AAGATE provides continuous, verifiable governance for agentic AI, enabling safe, accountable, and scalable deployment. The framework is further extended with DIRF for digital identity rights, LPCI defenses against logic-layer injection, and QSAF monitors for cognitive degradation, ensuring that governance spans systemic, adversarial, and ethical risks.