Artificial Intelligence (AI) is rapidly being integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations must be set for what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment; it will thrive, benefiting from the inevitable shocks and volatility of war.