Future AI agents may run autonomously with elevated privileges. If these agents are misaligned, they might abuse those privileges to cause serious damage. The field of AI control develops techniques that make it harder for misaligned AIs to cause such damage while preserving their usefulness. We introduce BashArena, a setting for studying AI control techniques in security-critical environments. BashArena contains 637 Linux system administration and infrastructure engineering tasks in complex, realistic environments, along with four sabotage objectives (executing malware, exfiltrating secrets, escalating privileges, and disabling a firewall) for a red team to target. We evaluate multiple frontier LLMs on their ability to complete tasks, perform sabotage undetected, and detect sabotage attempts. Claude Sonnet 4.5 successfully executes sabotage while evading monitoring by GPT-4.1 mini 26% of the time, at a 4% trajectory-wise false positive rate. Our findings provide a baseline for designing more effective control protocols in BashArena. We release the dataset as a ControlArena setting and share our task generation pipeline.