Trust is an important aspect of human life. It provides instrumental value, allowing us to collaborate with others and defer actions to them, and intrinsic value in our intimate relationships with romantic partners, family, and friends. In this paper I examine the nature of trust from a philosophical perspective. Specifically, I propose to view trust as a context-sensitive state, in a manner that will be made precise. The contribution of this paper is threefold. First, I make the simple observation that an individual's trust is typically both action- and context-sensitive. Action-sensitivity means that trust may obtain between a given truster and trustee for only certain actions. Context-sensitivity means that trust may obtain between a given truster and trustee, regarding the same action, in some conditions and not others. I also discuss what kinds of entities may play the roles of truster, trustee, and action. Second, I advance a theory of the nature of contextual trust. I propose that the answer to "What does it mean for $A$ to trust $B$ to do $X$ in context $C$?" involves two conditions. First, $A$ must take $B$'s doing $X$ as a means towards one of $A$'s ends. Second, $A$ must adopt an unquestioning attitude concerning $B$'s doing $X$ in context $C$. This unquestioning attitude is similar to the attitude introduced in Nguyen (2021). Finally, I explore how contextual trust can help us make sense of trust in non-interpersonal settings more generally, notably that of artificial intelligence (AI) systems. The field of Explainable Artificial Intelligence (XAI) assigns paramount importance to the problem of user trust in opaque computational models, yet offers few diagnostic or even conceptual criteria for trust. I propose that contextual trust is a natural fit for this task by illustrating that model transparency and explainability map naturally onto the construction of the contexts $C$.