Research in machine learning has successfully developed algorithms for building accurate classification models. However, in many real-world applications, such as healthcare, customer satisfaction, and environmental protection, we also want to use these models to decide what actions to take. We investigate the concept of actionability in the context of Support Vector Machines (SVMs). Actionability is as important as the interpretability or explainability of machine learning models, both ongoing and important research topics: it is the task of determining how to act upon a model and its predictions. This paper presents a solution to the actionability problem for both linear and non-linear SVM models. In addition, we introduce weighted actions, which allow some features to be changed more than others. We propose a gradient descent solution for the linear, RBF, and polynomial kernels, and we evaluate its effectiveness on both synthetic and real datasets. We also explore model interpretability through the lens of actionability.
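To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of gradient-based actionability for an RBF-kernel SVM: starting from a negatively classified point, weighted gradient ascent on the decision function finds a change of features that crosses the decision boundary, with per-feature weights making some features cheaper to act on than others. The use of scikit-learn's SVC and names such as `feature_weights`, `lr`, and `steps` are illustrative assumptions.

```python
# Sketch of weighted gradient-based actionability for an RBF-kernel SVM.
# Assumes a scikit-learn SVC; parameter names are illustrative.
import numpy as np
from sklearn.svm import SVC

GAMMA = 0.5  # RBF kernel width, fixed so the gradient below matches the model

def rbf_decision_and_grad(model, x, gamma=GAMMA):
    """Decision value f(x) and its gradient d f / d x for an RBF-kernel SVC."""
    sv = model.support_vectors_                  # support vectors, shape (n_sv, d)
    dual = model.dual_coef_.ravel()              # alpha_i * y_i
    diff = sv - x                                # (n_sv, d)
    k = np.exp(-gamma * np.sum(diff**2, axis=1)) # kernel values K(sv_i, x)
    f = np.dot(dual, k) + model.intercept_[0]
    grad = 2.0 * gamma * np.dot(dual * k, diff)  # gradient of f with respect to x
    return f, grad

def suggest_action(model, x, feature_weights, lr=0.05, steps=500):
    """Move x toward the positive class with weighted gradient ascent;
    a larger weight means the feature is cheaper to change."""
    x = x.astype(float).copy()
    for _ in range(steps):
        f, grad = rbf_decision_and_grad(model, x)
        if f > 0:                                # decision boundary crossed: stop
            break
        x += lr * feature_weights * grad         # weighted step on the decision value
    return x

# Toy usage: two features, where the second is easier to act on.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = SVC(kernel="rbf", gamma=GAMMA).fit(X, y)
x0 = np.array([-1.0, -1.0])
x_new = suggest_action(clf, x0, feature_weights=np.array([0.2, 1.0]))
```

The same scheme applies to the linear and polynomial kernels by substituting the corresponding kernel gradient; the weighted step is what encodes differing costs of change across features.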