We consider a series of legal provocations emerging from the proposed European Union AI Act 2021 (AIA) and how they open up new possibilities for HCI in the design and development of trustworthy autonomous systems. The AIA continues the 'by design' trend seen in recent EU regulation of emerging technologies. It targets AI developments that pose risks to society and to citizens' fundamental rights, introducing mandatory design and development requirements for high-risk AI systems (HRAIS). These requirements regulate different stages of the AI development cycle, including ensuring data quality and governance strategies, mandating the testing of systems, ensuring appropriate risk management, designing for human oversight, and creating technical documentation. They open up new opportunities for HCI that reach beyond established concerns with the ethics and explainability of AI, situating AI development within human-centered design processes and methods that enable compliance with regulation and foster societal trust in AI.