Endless Tuning is a design method for the reliable deployment of artificial intelligence based on a double mirroring process, which pursues the twin goals of avoiding human replacement and filling the so-called responsibility gap (Matthias 2004). Originally outlined in (Fabris et al. 2024) and following the relational approach urged therein, it was subsequently actualized in a protocol, implemented in three prototypical applications concerning decision-making processes (loan granting, pneumonia diagnosis, and art style recognition, respectively), and tested with as many domain experts. Illustrating the protocol step by step, and offering insights that concretely show a different voice (Gilligan 1993) in the ethics of artificial intelligence, the present study provides a philosophical account of technical choices (e.g., a reversed, hermeneutic deployment of XAI algorithms) together with the results of the experiments, focusing on user experience rather than statistical accuracy. Although deep learning models were thoroughly employed, interviewees perceived full control in the decision-making setting, and it appeared that a bridge can be built between accountability and liability in case of damage.