Prediction of a machine's Remaining Useful Life (RUL) is one of the key tasks in predictive maintenance. The task is typically treated as a regression problem in which Machine Learning (ML) algorithms are used to predict the RUL of machine components. These ML algorithms are generally used as black boxes, with the focus placed entirely on performance and without identifying the causes behind the algorithms' decisions or their working mechanisms. We believe that performance alone (in terms of Mean Squared Error (MSE), etc.) is not enough to build stakeholders' trust in ML predictions; more insight into the causes behind the predictions is needed. To this end, in this paper we explore the potential of Explainable AI (XAI) techniques by proposing an explainable regression framework for the prediction of machines' RUL. We also evaluate several ML algorithms for the task, including classical and Neural Network (NN)-based solutions. For the explanations, we rely on two model-agnostic XAI methods, namely Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). We believe this work will provide a baseline for future research in the domain.
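To make the role of the XAI methods concrete, the sketch below shows one possible way to attach SHAP explanations to a trained RUL regression model. The synthetic sensor data, the RandomForestRegressor choice, and the feature layout are illustrative assumptions, not the exact pipeline evaluated in this paper.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline):
# explaining an RUL regression model with SHAP feature attributions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import shap

# Hypothetical sensor features and RUL targets (synthetic placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g. temperature, vibration, pressure, speed
y = 100 - 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=2, size=500)  # toy RUL signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train one candidate ML regressor to predict RUL.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: per-instance contribution of each sensor feature to the predicted RUL.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:5])
print(shap_values.shape)  # (5, 4): one attribution per feature per test instance
```

A LIME explanation would play the same role locally, fitting an interpretable surrogate around each prediction instead of computing Shapley-value attributions.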