Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already being used to automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can be traced back to their users, and the resources needed to develop them. We also contend that some restrictions on the non-AI capabilities needed to cause harm will be required. Though capability restrictions risk reducing beneficial use more than misuse (an unfavorable Misuse-Use Tradeoff), we argue that interventions on capabilities are warranted when other interventions are insufficient, the potential harm from misuse is high, and there are targeted ways to intervene. We provide a taxonomy of interventions for reducing AI misuse, organized around the specific steps required for a misuse to cause harm (the Misuse Chain), and a framework for determining whether an intervention is warranted. We apply this reasoning to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.