Responsible AI must make decisions that consider human values and can be justified by human morals. Operationalising normative ethical principles drawn from philosophy supports such responsible reasoning. We survey the computer science literature and develop a taxonomy of 23 normative ethical principles that can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies for incorporating normative ethical principles in responsible AI systems.