Several high-profile events, such as the use of biased recidivism prediction systems and the mass testing of emotion recognition systems on vulnerable sub-populations, have highlighted how technology often leads to more adverse outcomes for those who are already marginalized. In this paper, I make the case for thinking about ethical considerations not just at the level of individual models and datasets, but also at the level of AI tasks. I present a new form of such an effort, Ethics Sheets for AI Tasks, dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed and in the choices we make regarding data, method, and evaluation. Finally, I provide an example ethics sheet for automatic emotion recognition. Together with Datasheets for datasets and Model Cards for AI systems, Ethics Sheets aid in the development and deployment of responsible AI systems.