The steadily increasing level of automation in human-centred systems demands rigorous design methods for analysing and controlling interactions between humans and automated components, especially in safety-critical applications. The variability of human behaviour poses particular challenges for formal verification and synthesis. We present a model-based framework that enables design-time exploration of safe shared-control strategies in human-automation systems. The approach combines active automata learning -- to derive coarse, finite-state abstractions of human behaviour from simulations -- with game-theoretic reactive synthesis to determine whether a controller can guarantee safety when interacting with these models. If no such strategy exists, the framework supports iterative refinement of the human model or adjustment of the automation's controllable actions. A case study, integrating automata learning with reactive synthesis in UPPAAL, illustrates the applicability of the framework on a simplified driving scenario and its potential for analysing shared-control strategies in human-centred cyber-physical systems.
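The synthesis step described above can be sketched in miniature: given a finite-state abstraction of the human, reactive synthesis for a safety objective reduces to computing the controller's winning region of a two-player game by a greatest-fixpoint iteration. The following Python sketch illustrates this under purely illustrative assumptions -- the toy "gap" state space, the `brake`/`coast` and `accel`/`hold` actions, and the transition function are invented for exposition and are not the paper's actual UPPAAL model.

```python
# Hedged sketch of the game-theoretic synthesis step: compute the states
# from which a controller can guarantee safety against every move of a
# (learned) human model, via a greatest-fixpoint iteration.
# The toy driving model in demo() is an illustrative assumption.

def safe_controller_states(states, ctrl_actions, human_moves, step, safe):
    """States where the controller has some action such that, for every
    human response, play stays inside the candidate winning set."""
    win = {s for s in states if safe(s)}
    while True:
        new_win = {
            s for s in win
            if any(all(step(s, a, h) in win for h in human_moves(s, a))
                   for a in ctrl_actions(s))
        }
        if new_win == win:          # fixpoint reached: winning region
            return win
        win = new_win

def demo():
    # Toy shared-control scenario: 'gap' is the distance to the car
    # ahead, clamped to 0..3; gap == 0 is the unsafe collision state.
    def step(gap, ctrl, human):
        g = gap + (1 if ctrl == "brake" else 0) \
                - (1 if human == "accel" else 0)
        return max(0, min(3, g))
    return safe_controller_states(
        states={0, 1, 2, 3},
        ctrl_actions=lambda s: ["brake", "coast"],
        human_moves=lambda s, a: ["accel", "hold"],
        step=step,
        safe=lambda s: s > 0,
    )
```

If the winning region excludes the initial state (or is empty), no safe strategy exists against this human model, which is exactly the situation in which the framework prescribes refining the human abstraction or adjusting the automation's controllable actions.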