Out-of-distribution detection is one of the most critical issues in the deployment of machine learning. The data analyst must ensure that the data seen in operation are consistent with those of the training phase, and must recognize when the environment has changed to the point that autonomous decisions are no longer safe. The method presented in this paper is based on eXplainable Artificial Intelligence (XAI); it considers different metrics to identify any resemblance between in-distribution and out-of-distribution data, as seen by the XAI model. The approach is non-parametric and free of distributional assumptions. Validation over complex scenarios (predictive maintenance, vehicle platooning, covert channels in cybersecurity) corroborates both the precision of detection and the evaluation of the proximity between training and operational conditions. Results are available as open source and open data at the following link: https://github.com/giacomo97cnr/Rule-based-ODD.
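The abstract describes a non-parametric, assumption-free comparison of explanation-derived metrics between training and operational data. The following is a minimal illustrative sketch of that general idea, not the paper's actual procedure: it uses a shallow decision tree as a stand-in rule-based XAI model, treats its leaves as conjunctive rules, and compares the rule-coverage profiles of training versus operational data with a simple non-parametric distance. All names, data, and thresholds are assumptions for demonstration only.

```python
# Illustrative sketch (not the paper's exact method): detect distribution shift
# by comparing how a surrogate rule-based model's rules (tree leaves) are covered
# by training data vs. operational data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier


def leaf_coverage(tree, X):
    """Empirical distribution of samples over the tree's leaves.

    Each leaf corresponds to a conjunctive rule on the input features,
    so this is a coarse 'rule coverage' profile of the data set X.
    """
    leaves = tree.apply(X)                         # leaf index per sample
    counts = np.bincount(leaves, minlength=tree.tree_.node_count)
    return counts / counts.sum()


def coverage_shift(p, q):
    """Total variation distance between two coverage profiles (0 = identical)."""
    return 0.5 * np.abs(p - q).sum()


# In-distribution data and a surrogate rule-based model.
X_train, y_train = make_classification(n_samples=2000, n_features=10, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
p_train = leaf_coverage(tree, X_train)

# Operational data: first nearly unchanged, then shifted to mimic a changed environment.
rng = np.random.default_rng(0)
X_ok = X_train + rng.normal(scale=0.01, size=X_train.shape)
X_shifted = X_train + 3.0                          # gross covariate shift

for name, X_op in [("in-distribution", X_ok), ("shifted", X_shifted)]:
    d = coverage_shift(p_train, leaf_coverage(tree, X_op))
    flag = "OOD suspected" if d > 0.2 else "looks consistent"   # illustrative threshold
    print(f"{name}: coverage shift = {d:.3f} -> {flag}")
```

In this sketch the shifted data concentrate in a few leaves that the training data rarely reach, so the coverage distance grows and the check flags a possible out-of-distribution condition; the actual metrics, rule model, and decision criterion used by the paper are documented in the linked repository.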