In the current era, people and society have grown increasingly reliant on artificial intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes, but it also carries substantial risks of oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years and in many quarters, including industry, academia, health care, and public services. Technologists and AI researchers have a responsibility to develop trustworthy AI systems, and they have responded with great effort to design more responsible AI algorithms. However, existing technical solutions are narrow in scope, directed primarily at algorithms for scoring or classification tasks, with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and connect the major aspects of AI that can give rise to AI's indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that examines the subjects of AI indifference and the need for socially responsible AI algorithms, defines the objectives, and introduces the means by which these objectives may be achieved. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation.