Autonomous systems control many tasks in our daily lives. To increase trust in these systems and the safety of interactions between humans and autonomous systems, the system behaviour and the reasons for autonomous decisions should be explained to users, experts and public authorities. One way to provide such explanations is to use behavioural models to generate context- and user-specific explanations at run-time. However, this comes at the cost of higher modelling effort, as additional models need to be constructed. In this paper, we propose a high-level process to extract such explanation models from system models and to subsequently refine them towards specific users, explanation purposes and situations. This enables the reuse of specification models for integrating self-explanation capabilities into systems. We showcase our approach using a running example from the autonomous driving domain.