Algorithms and other formal models that incorporate human values such as fairness have grown increasingly popular in computer science. In response to the sociotechnical challenges raised by the use of these models, designers and researchers have taken widely divergent positions on formal models of human values: encouraging their use, moving away from them, or ignoring their normative consequences altogether. In this paper, we seek to resolve these divergent positions by identifying the main conceptual limits of formal modeling, and we develop four reflexive values (value fidelity, accuracy, value legibility, and value contestation) that are vital for incorporating human values into formal models. We then provide a methodology for reflexively designing formal models that incorporate human values without ignoring their societal implications.