In recent years, explainability in machine learning has gained importance. In this context, counterfactual explanation (CE), an example-based explanation method, has attracted attention. However, it has been pointed out that CEs are not robust when multiple machine-learning models achieve similar accuracy. This lack of robustness matters when machine learning is used to make safe decisions. In this paper, we propose robust CEs based on a new viewpoint, Pareto improvement, together with a method that uses multi-objective optimization to generate them. To evaluate the proposed method, we conducted experiments using both simulated and real data. The results demonstrate that the proposed method is both robust and practical. This study highlights the potential of ensuring robustness in decision-making by applying the concept of social welfare. We believe that this research can serve as a valuable foundation for various fields, including explainability in machine learning, decision-making, and action planning based on machine learning.