Although AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society, significant attendant risks have been identified. These risks have led to proposed regulations, litigation, and general societal concerns. As with any promising technology, organizations want to benefit from the positive capabilities of AI while reducing the risks. The best way to reduce risks is to implement comprehensive AI lifecycle governance, in which policies and procedures are described and enforced during the design, development, deployment, and monitoring of an AI system. While support for comprehensive governance is beginning to emerge, organizations often need to assess the risks of deploying an already-built model without knowledge of how it was constructed or access to its original developers. Such an assessment would quantify the risks of an existing model in a manner analogous to how a home inspector might assess the energy efficiency of an already-built home, or a physician might assess overall patient health based on a battery of tests. This paper explores the concept of a quantitative AI Risk Assessment, examining the opportunities, challenges, and potential impacts of such an approach, and discussing how it might improve AI regulations.