Digital Twins (DT) are essentially dynamic data-driven models that serve as real-time symbiotic "virtual replicas" of real-world systems. A DT can leverage the fundamentals of Dynamic Data-Driven Applications Systems (DDDAS), in particular bidirectional symbiotic sensing feedback loops, for its continuous updates. These sensing loops can steer measurement, analysis and reconfiguration aimed at more accurate modelling and analysis in the DT. The reconfiguration decisions can be autonomous or interactive, keeping the human in the loop. The trustworthiness of these decisions can be hindered by inadequate explainability of their rationale and of the utility gained by implementing the chosen decision, for the given situation, over its alternatives. Additionally, different decision-making algorithms and models vary in complexity and quality, and can yield different utility for the model. Inadequate explainability limits the extent to which humans can evaluate the decisions, often leading to updates that are unfit for the given situation or erroneous, compromising the overall accuracy of the model. The novel contribution of this paper is an approach to harnessing explainability in human-in-the-loop DDDAS and DT systems, leveraging bidirectional symbiotic sensing feedback. The approach uses interpretable machine learning and goal modelling to provide explainability, and considers trade-off analysis of the utility gained. We use examples from smart warehousing to demonstrate the approach.
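To make the combination of interpretable machine learning and utility trade-off analysis mentioned above concrete, the sketch below shows one possible (hypothetical) realisation, not the paper's implementation: a shallow decision tree acts as an auditable surrogate for the DT's reconfiguration recommendation, and a simple utility-minus-cost ranking of candidate reconfigurations is produced for human-in-the-loop review. All feature names, candidate decisions and utility figures are illustrative smart-warehousing placeholders.

```python
# Illustrative sketch (hypothetical): interpretable surrogate model plus
# utility trade-off ranking for DT reconfiguration decisions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical sensed features from the warehouse DT:
# [queue_length, avg_pick_time_s, congestion_index]
X = rng.uniform([0, 20, 0.0], [50, 120, 1.0], size=(500, 3))
# Synthetic labels standing in for past reconfiguration outcomes:
# 1 = "rerouting AGVs helped", 0 = "keep current routing".
y = ((X[:, 0] > 25) & (X[:, 2] > 0.5)).astype(int)

# A shallow tree trades some accuracy for rules a human can audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
feature_names = ["queue_length", "avg_pick_time_s", "congestion_index"]
print(export_text(tree, feature_names=feature_names))

# Trade-off analysis over candidate reconfigurations (utility - cost),
# with placeholder estimates that a goal model would normally supply.
candidates = {
    "reroute_agvs":     {"expected_utility": 0.72, "cost": 0.30},
    "reslot_inventory": {"expected_utility": 0.65, "cost": 0.45},
    "no_change":        {"expected_utility": 0.40, "cost": 0.00},
}
ranked = sorted(candidates.items(),
                key=lambda kv: kv[1]["expected_utility"] - kv[1]["cost"],
                reverse=True)
for name, v in ranked:
    print(f"{name}: net utility = {v['expected_utility'] - v['cost']:+.2f}")
```

In this sketch the printed tree rules serve as the explanation shown to the human operator, while the ranked list exposes the utility trade-off among alternatives before a reconfiguration is applied to the DT.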