Accountability is widely understood as a goal for well-governed computer systems, and is a sought-after value in many governance contexts. But how can it be achieved? Recent work on standards for governable artificial intelligence systems offers a related principle: traceability. Traceability requires establishing not only how a system worked but how it was created and for what purpose, in a way that explains why a system has particular dynamics or behaviors. It connects records of how the system was constructed and what the system did mechanically to the broader goals of governance, in a way that highlights human understanding of that mechanical operation and the decision processes underlying it. We examine the various ways in which the principle of traceability has been articulated in AI principles and other policy documents from around the world, distill from these a set of requirements on software systems driven by the principle, and systematize the technologies available to meet those requirements. From our map of requirements to supporting tools, techniques, and procedures, we identify the gaps and needs separating what traceability requires from the toolbox available to practitioners. This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals and thereby improve the normative fidelity of systems and their development processes.