Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research. Consequently, stakeholders often talk past one another: policymakers issue vague demands, and practitioners devise solutions that may not address the underlying concerns. Part of the problem is that a clear ideal of AI transparency goes unstated in this body of work. We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest. We conduct a broad literature survey, identifying clusters of similar conceptions of transparency and tying each back to our north star with an analysis of how it furthers or hinders ideal AI transparency goals. We conclude with a discussion of common threads across the clusters, providing a clearer shared vocabulary with which policymakers, stakeholders, and practitioners can communicate concrete demands and deliver appropriate solutions. We hope future work on AI transparency will further advance confident, user-beneficial goals and provide clarity to regulators and developers alike.