The growing complexity of Cyber-Physical Systems (CPS) and the challenges of ensuring their safety and security have led to the increasing use of deep learning methods for accurate and scalable anomaly detection. However, machine learning (ML) models often suffer from low performance on unexpected data and are vulnerable to accidental or malicious perturbations. Although robustness testing of deep learning models has been extensively explored in applications such as image classification and speech recognition, less attention has been paid to ML-driven safety monitoring in CPS. This paper presents preliminary results on evaluating the robustness of ML-based anomaly detection methods in safety-critical CPS against two types of accidental and malicious input perturbations, generated using a Gaussian-based noise model and the Fast Gradient Sign Method (FGSM). We test the hypothesis that integrating domain knowledge (e.g., on unsafe system behavior) with ML models can improve the robustness of anomaly detection without sacrificing accuracy or transparency. Experimental results from two case studies of Artificial Pancreas Systems (APS) for diabetes management show that ML-based safety monitors trained with domain knowledge can reduce the robustness error by up to 54.2% on average and keep the average F1 scores high while also improving transparency.
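For concreteness, below is a minimal sketch of the two perturbation types named above (additive Gaussian noise for accidental perturbations and FGSM for malicious ones), written against a PyTorch model. The monitor architecture, feature dimensions, and the `sigma` and `epsilon` values are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of the two input perturbations evaluated in the paper.
# Model, sigma, and epsilon below are illustrative placeholders.
import torch
import torch.nn as nn

def gaussian_perturb(x, sigma=0.05):
    """Accidental perturbation: additive zero-mean Gaussian noise."""
    return x + sigma * torch.randn_like(x)

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.05):
    """Malicious perturbation: Fast Gradient Sign Method (FGSM).
    Shifts each input feature by epsilon in the direction that
    increases the monitor's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Usage with a toy safety monitor (hypothetical architecture):
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(4, 8)            # batch of sensor feature vectors
y = torch.randint(0, 2, (4,))    # safe / unsafe labels
x_noisy = gaussian_perturb(x)
x_adv = fgsm_perturb(model, x, y, nn.CrossEntropyLoss())
```

Robustness error can then be measured by comparing the monitor's predictions on `x` against those on `x_noisy` and `x_adv`.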