Many sets of ethics principles for responsible AI have been proposed to allay concerns about the misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on supporting evidence from a diverse body of literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.