In the current era, people and society have grown increasingly reliant on Artificial Intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks of oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years and in many quarters, including industry, academia, healthcare, services, and so on. Technologists and AI researchers have a responsibility to develop trustworthy AI systems, and they have responded with great efforts to design more responsible AI algorithms. However, existing technical solutions are narrow in scope and have been primarily directed towards algorithms for scoring or classification tasks, with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and connect the major aspects of AI that potentially cause AI's indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that aims to examine the subjects of AI indifference and the need for socially responsible AI algorithms, define the objectives, and introduce the means by which we may achieve these objectives. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation.