Recent innovations such as Datasheets for Datasets and Model Cards for Model Reporting have made useful contributions to furthering ethical research. Yet, several high-profile events, such as the mass testing of emotion recognition systems on vulnerable sub-populations, have highlighted how technology often leads to more adverse outcomes for those who are already marginalized. In this paper, I will make a case for thinking about ethical considerations not just at the level of individual models and datasets, but also at the level of AI tasks. I will present a new form of such an effort, Ethics Sheets for AI Tasks, dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed and in the choices we make regarding the data, method, and evaluation. Finally, I will provide an example ethics sheet for automatic emotion recognition. Ethics sheets are a mechanism to document ethical considerations \textit{before} building datasets and systems. Such pre-production activities (e.g., ethics analyses) and associated artifacts (e.g., accessible documentation) are crucial for responsible AI: for communicating risks to all stakeholders, for informing decision and policy making, and for developing more effective post-production documents such as Datasheets and Model Cards.