Intelligent information systems that contain emergent elements often encounter trust problems because their results are not sufficiently explained and the procedure itself cannot be fully retraced. This is caused by a control flow that depends either on stochastic elements or on the structure and relevance of the input data. Trust in such algorithms can be established by letting users interact with the system so that they can explore results and find patterns that can be compared with their expected solution. Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns and may increase the trust that a user places in the solution. If expectations are not met, close inspection can be used to decide whether a solution conforms to the expectations or goes beyond them. By either accepting or rejecting a solution, the user's set of expectations evolves, and a learning process for the user is established. In this paper we present a conceptual framework that reflects and supports this process. The framework is the result of an analysis of two exemplary case studies from different disciplines, each involving an information system that assists experts in a complex task.