As machine learning (ML) systems take on a more prominent and central role in life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called ``Explainable AI (XAI)'' or ``Explainable ML.'' The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed with contributions from both academia and industry. However, several existing challenges have not garnered enough attention and serve as roadblocks to the widespread adoption of explainable ML. In this short paper, we enumerate challenges in explainable ML from an industry perspective. We hope these challenges will serve as promising future research directions and will contribute to democratizing explainable ML.