Artificial intelligence is already being applied in, and already affecting, many important sectors of society, including healthcare, finance, and policing. These applications will expand as AI capabilities continue to progress, with the potential to be highly beneficial for society or to cause serious harm. The role of AI governance is ultimately to take practical steps to mitigate this risk of harm while enabling the benefits of innovation in AI. This requires answering challenging empirical questions about the current and potential risks and benefits of AI: assessing impacts that are often widely distributed and indirect, and making predictions about a highly uncertain future. It also requires thinking through the equally challenging normative question of what beneficial use of AI in society looks like. Though different groups may agree on high-level principles that uses of AI should respect (e.g., privacy, fairness, and autonomy), challenges arise when putting these principles into practice. For example, it is straightforward to say that AI systems must protect individual privacy, but there is presumably some amount or type of privacy that most people would be willing to give up to help develop life-saving medical treatments. Despite these challenges, research can make, and has made, progress on these questions. The aim of this chapter is to give readers an understanding of this progress, and of the challenges that remain.