Trusting software systems, particularly autonomous ones, is challenging. To address this, formal verification techniques can ensure these systems behave as expected. Runtime Verification (RV) is a leading, lightweight method for verifying system behaviour during execution. However, traditional RV assumes perfect information, meaning the monitoring component perceives everything accurately. This assumption often fails, especially when autonomous systems operate in real-world environments where sensors may be faulty. Moreover, traditional RV treats the monitor as a passive component: it cannot interpret the system's information and therefore cannot compensate for incomplete data. In this work, we extend standard RV of Linear Temporal Logic properties to accommodate scenarios where the monitor has imperfect information and behaves rationally. We outline the engineering steps needed to update the verification pipeline and demonstrate our implementation in a case study involving robotic systems.
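To give a flavour of the problem, the sketch below is a minimal, illustrative RV monitor, not the paper's implementation. It checks the LTL safety property G(!error) over a stream of observations in which each reading of the atomic proposition `error` may be lost (imperfect information). The four-valued verdict lattice and all names here are assumptions chosen for illustration; a standard perfect-information monitor would only need the first three verdicts.

```python
# Minimal sketch (assumed design, not the paper's method) of runtime
# verification under imperfect information. We monitor G(!error):
# each observation is True (error seen), False (no error seen), or
# None (the sensor failed, so the monitor could not perceive it).

from enum import Enum

class Verdict(Enum):
    TRUE = "satisfied"                    # violation impossible on this prefix
    FALSE = "violated"                    # violation is certain
    UNKNOWN = "inconclusive"              # undecided, all observations seen
    POSSIBLY_FALSE = "possibly violated"  # violated only if a lost event was an error

def monitor_g_not_error(trace):
    """Monitor G(!error) over a finite trace with possibly-missing observations."""
    verdict = Verdict.UNKNOWN
    for obs in trace:
        if obs is True:
            return Verdict.FALSE              # definite violation: stop monitoring
        if obs is None:
            verdict = Verdict.POSSIBLY_FALSE  # a hidden violation may have occurred
    return verdict

if __name__ == "__main__":
    print(monitor_g_not_error([False, False, False]))  # UNKNOWN
    print(monitor_g_not_error([False, None, False]))   # POSSIBLY_FALSE
    print(monitor_g_not_error([False, True]))          # FALSE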