AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security. The non-deterministic, sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks. Existing frameworks only partially address these challenges, as they either capture only specific vulnerabilities or require modeling of complete agents. To address these limitations, we introduce threat snapshots: a framework that isolates specific states in an agent's execution flow where LLM vulnerabilities manifest, enabling the systematic identification and categorization of security risks that propagate from the LLM to the agent level. We apply this framework to construct the $\operatorname{b}^3$ benchmark, a security benchmark based on 194,331 unique crowdsourced adversarial attacks. We then evaluate 31 popular LLMs with it, revealing, among other insights, that enhanced reasoning capabilities improve security, while model size does not correlate with security. We release our benchmark, dataset, and evaluation code to facilitate widespread adoption by LLM providers and practitioners, offering guidance for agent developers and incentivizing model developers to prioritize backbone security improvements.