Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should trustworthy AI be sought through explainability or through accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing protections for users, or toward catalyzing innovation? The specific answers are less significant than the larger demonstration that AI ethics is currently unbalanced toward theoretical principles and would benefit from increased exposure to grounded practices and dilemmas.