Artificial Intelligence (AI) is increasingly used in critical applications; thus, the need for dependable AI systems is growing rapidly. In 2018, the European Commission appointed experts to the High-Level Expert Group on AI (AI-HLEG). AI-HLEG defined Trustworthy AI as 1) lawful, 2) ethical, and 3) robust, and specified seven corresponding key requirements. To help development organizations, AI-HLEG recently published the Assessment List for Trustworthy AI (ALTAI). We present an illustrative case study of applying ALTAI to an ongoing development project for an Advanced Driver-Assistance System (ADAS) that relies on Machine Learning (ML). Our experience shows that ALTAI is largely applicable to ADAS development, but specific parts related to human agency and transparency can be disregarded. Moreover, broader questions related to societal and environmental impact cannot be tackled by an ADAS supplier in isolation. We describe how we plan to develop the ADAS to ensure ALTAI compliance. Finally, we provide three recommendations for the next revision of ALTAI, i.e., life-cycle variants, domain-specific adaptations, and removal of redundancy.