Potential advancements in artificial intelligence (AI) could have profound implications for how countries research and develop weapons systems, and how militaries deploy those systems on the battlefield. The prospect of AI-enabled military systems has motivated some activists to call for restrictions or bans on certain weapons systems, while others have argued that AI may be too diffuse to control. This paper argues that while a ban on all military applications of AI is likely infeasible, there may be specific cases where arms control is possible. Throughout history, the international community has attempted to ban or regulate weapons or military systems for a variety of reasons. This paper analyzes both successes and failures and offers several criteria that seem to influence why arms control works in some cases and not others. We argue that success or failure depends on the desirability of arms control (i.e., a weapon's military value versus its perceived horribleness) and its feasibility (i.e., sociopolitical factors that influence whether control can succeed). Based on these criteria, and the historical record of past attempts at arms control, we analyze the potential for AI arms control in the future and offer recommendations for what policymakers can do today.