The rise of AI in human contexts places new demands on systems to be transparent and explainable. We examine some anthropomorphic ideas and principles relevant to such accountability to develop a theoretical framework for thinking about digital systems in complex human contexts and the problem of explaining their behaviour. Structurally, complex systems are built from modular and hierarchical components, which we model abstractly using a new notion of modes and mode transitions. A mode is an independent component of the system with its own objectives, monitoring data, and algorithms. The behaviour of a mode, including its transitions to other modes, is determined by belief functions that interpret the mode's monitoring data in the light of its objectives and algorithms. We show how these belief functions can help explain system behaviour by visualising their evaluation in higher-dimensional geometric spaces. These ideas are formalised using abstract and concrete simplicial complexes.
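To fix intuitions, the following is a minimal sketch of the mode/mode-transition idea in Python. It is an illustration under our own assumptions, not the paper's formalism: the `Mode` class, the `step` function, the [0, 1]-valued belief functions, and the cruise-control scenario are all hypothetical, and the actual framework is developed via simplicial complexes rather than this data structure.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical sketch: a "mode" as an independent component with its own
# objective and a belief function that interprets monitoring data; guarded
# transitions take the system from one mode to another.

@dataclass
class Mode:
    name: str
    objective: str
    # belief: maps current monitoring data to a degree of belief in [0, 1]
    # that this mode's objective is being met.
    belief: Callable[[Dict[str, float]], float]
    # transitions: target mode name -> guard predicate on monitoring data.
    transitions: Dict[str, Callable[[Dict[str, float]], bool]] = field(default_factory=dict)

def step(current: Mode, modes: Dict[str, Mode], data: Dict[str, float]) -> Mode:
    """Evaluate transition guards; move to the first mode whose guard fires."""
    for target, guard in current.transitions.items():
        if guard(data):
            return modes[target]
    return current

# Illustrative scenario (our invention): a cruise-control system with a
# 'cruise' mode that holds a speed setpoint and a 'brake' mode that takes
# over when an obstacle is detected close ahead.
cruise = Mode("cruise", "hold speed",
              belief=lambda d: max(0.0, 1.0 - abs(d["speed"] - d["setpoint"]) / 10.0))
brake = Mode("brake", "reduce speed",
             belief=lambda d: 1.0 if d["obstacle_dist"] < 50 else 0.0)
cruise.transitions["brake"] = lambda d: d["obstacle_dist"] < 50
brake.transitions["cruise"] = lambda d: d["obstacle_dist"] >= 50

modes = {"cruise": cruise, "brake": brake}
data = {"speed": 62.0, "setpoint": 60.0, "obstacle_dist": 30.0}
current = step(cruise, modes, data)
print(current.name, current.belief(data))  # -> brake 1.0
```

In this toy reading, explaining the system's behaviour amounts to reporting which guard fired and how the belief functions evaluated the monitoring data at that point.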