Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.