The use of AI technologies is percolating into the secure development of software-based systems, with an increasing trend of composing AI-based subsystems (with uncertain levels of performance) into automated pipelines. This presents a fundamental research challenge and poses a serious threat to safety-critical domains (e.g., aviation). Despite existing knowledge about uncertainty in risk analysis, no previous work has estimated the uncertainty of AI-augmented systems given the propagation of errors through the pipeline. We provide the formal underpinnings for capturing uncertainty propagation, develop a simulator to quantify uncertainty, and evaluate the simulation of error propagation in two case studies. We discuss the generalizability of our approach and present policy implications and recommendations for aviation. Future work includes extending the approach and investigating the metrics required for validation in the aviation domain.
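To make the notion of error propagation concrete, below is a minimal Monte Carlo sketch, not the paper's simulator: the per-stage error rates, the independence assumption, and the "pipeline fails if any stage fails" composition rule are all illustrative assumptions introduced here for exposition.

```python
import random

# Hypothetical per-stage error rates for a pipeline of AI-based subsystems
# (e.g., detector -> classifier -> planner). Illustrative values only.
STAGE_ERROR_RATES = [0.02, 0.05, 0.01]

def run_pipeline(error_rates, rng):
    """Simulate one end-to-end pass: the pipeline output is treated as wrong
    if any stage introduces an error (a simplifying worst-case rule)."""
    return any(rng.random() < p for p in error_rates)

def estimate_end_to_end_error(error_rates, trials=100_000, seed=0):
    """Monte Carlo estimate of the end-to-end error probability."""
    rng = random.Random(seed)
    failures = sum(run_pipeline(error_rates, rng) for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    est = estimate_end_to_end_error(STAGE_ERROR_RATES)
    # Under the independence assumption, the analytic value is 1 - prod(1 - p_i).
    analytic = 1.0
    for p in STAGE_ERROR_RATES:
        analytic *= 1.0 - p
    analytic = 1.0 - analytic
    print(f"Monte Carlo estimate: {est:.4f}  analytic (independent stages): {analytic:.4f}")
```

Even this toy composition rule shows how individually small error rates compound across a pipeline, which is the effect the paper's formalization and simulator are designed to quantify rigorously.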