This paper examines two prominent formal trade-offs in artificial intelligence (AI) -- between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevailing interpretation by introducing a sociotechnical approach to examining the value implications of trade-offs. Specifically, I identify three key considerations -- validity and instrumental relevance, compositionality, and dynamics -- for contextualizing and characterizing these implications. These considerations reveal that the relationship between model trade-offs and the corresponding values depends on critical choices and assumptions. Crucially, judicious sacrifices of one model property for another can, in fact, promote both sets of corresponding values. The proposed sociotechnical perspective thus shows that we can and should aspire to higher epistemic and ethical possibilities than the prevailing interpretation suggests, while offering practical guidance for achieving those outcomes. Finally, I draw out the broader implications of this perspective for AI design and governance, highlighting the need to broaden normative engagement across the AI lifecycle, develop legal and auditing tools sensitive to sociotechnical considerations, and rethink the vital role and appropriate structure of interdisciplinary collaboration in fostering a responsible AI workforce.