While most current synthesis algorithms focus only on correctness-by-construction, ensuring robustness has remained a challenge. Hence, in this paper, we address the robust-by-construction synthesis problem, taking the specifications to be expressed in a robust version of Linear Temporal Logic (LTL) called robust LTL (rLTL). rLTL has a many-valued semantics that captures different degrees of satisfaction of a specification; that is, satisfaction is a quantitative notion. We argue that the current algorithms for rLTL synthesis do not compute optimal strategies in a non-antagonistic setting. A natural question, then, is whether the specification can be satisfied "better" when the environment is indeed not antagonistic. We address this question by developing two new notions of strategies. The first notion is that of adaptive strategies, which, in response to the opponent's non-antagonistic moves, maximize the degree of satisfaction. The idea is to monitor non-optimal moves of the opponent at runtime using multiple parity automata and to adaptively change the system strategy to ensure optimality. The second notion is that of strongly adaptive strategies, a further refinement of the first: these strategies additionally maximize the opportunities for the opponent to make non-optimal moves. We show that computing such strategies for rLTL specifications is no harder than the standard synthesis problem, e.g., computing strategies for LTL specifications, and takes doubly-exponential time.
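The adaptive-strategy idea above can be sketched informally. In the paper, achievability of each satisfaction degree is tracked with a parity automaton; in this minimal, hypothetical sketch we abstract each such monitor to a predicate over the history of environment moves, and the controller simply plays the strategy for the best degree still achievable. All names, the toy game, and the two-degree setup are illustrative assumptions, not the paper's actual construction.

```python
# Hypothetical sketch of an adaptive strategy. The paper realizes each
# monitor as a parity automaton; here a monitor is just a predicate on
# the history of environment moves. All names are illustrative.

class DegreeMonitor:
    """Tracks whether a given satisfaction degree is still achievable."""
    def __init__(self, degree, still_achievable):
        self.degree = degree                      # higher = better satisfaction
        self.still_achievable = still_achievable  # history -> bool

def adaptive_move(monitors, history, strategies):
    """Play the strategy for the best degree still achievable after `history`."""
    best = max(
        (m.degree for m in monitors if m.still_achievable(history)),
        default=0,
    )
    return best, strategies[best](history)

# Toy example: the top degree (2) stays achievable only while the
# environment has never played the "bad" move; degree 1 always remains.
monitors = [
    DegreeMonitor(1, lambda h: True),
    DegreeMonitor(2, lambda h: "bad" not in h),
]
strategies = {1: lambda h: "fallback", 2: lambda h: "optimal"}

print(adaptive_move(monitors, [], strategies))       # environment cooperative so far
print(adaptive_move(monitors, ["bad"], strategies))  # adapt after a non-optimal move
```

The point of the sketch is the runtime switch: as long as the environment cooperates, the system commits to the strategy guaranteeing the highest degree, and it falls back only when a monitor reports that degree is no longer achievable.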