Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI), but the extant literature is frustratingly vague regarding this 'trust'. The lack of exposition on trust itself suggests that trust is commonly understood, uncomplicated, or even uninteresting. But is it? Our analysis of TAI publications reveals numerous orientations that differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), toward what end (objective), and why (impact). We develop an ontology that encapsulates these key axes of difference to (a) illuminate seeming inconsistencies across the literature and (b) more effectively manage a dizzying number of TAI considerations. We then reflect this ontology through a corpus of publications exploring fairness, accountability, and transparency to examine the variety of ways that TAI is considered within and between these approaches to promoting trust.