Recent developments in the field of explainable artificial intelligence (XAI) have helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens the door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) can expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective backdoor triggers against malware classifiers. To address this trade-off, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and from that we establish a defense, called XRand, against such attacks. We show that our mechanism restricts the information that the adversary can learn about the top important features, while maintaining the faithfulness of the explanations.
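To make the idea of LDP-randomized explanations concrete, the following is a minimal, hypothetical sketch, not the actual XRand mechanism defined in the paper: it reports k "important" feature indices by repeatedly applying the exponential mechanism with normalized |SHAP| magnitude as the utility score. The function name `ldp_top_features`, the normalization to [0, 1], and the even split of the budget epsilon across the k draws are all illustrative assumptions.

```python
import numpy as np

def ldp_top_features(shap_values, k, epsilon, rng=None):
    """Report k 'important' feature indices under an epsilon budget.

    Illustrative sketch only: samples features via the exponential
    mechanism, using normalized |SHAP| magnitude as the utility, so
    high-importance features are likely -- but never certain -- to be
    reported, which bounds what an adversary learns from a query.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.abs(np.asarray(shap_values, dtype=float))
    top = scores.max()
    if top > 0:                  # normalize utility to [0, 1]
        scores = scores / top    # so its sensitivity is at most 1
    available = list(range(len(scores)))
    reported = []
    for _ in range(k):
        # Split the privacy budget evenly across the k selections;
        # each draw is a standard exponential-mechanism sample.
        weights = np.exp((epsilon / (2 * k)) * scores[available])
        probs = weights / weights.sum()
        choice = int(rng.choice(available, p=probs))
        reported.append(choice)
        available.remove(choice)
    return reported
```

Under this sketch, a small epsilon makes the reported set close to a uniform draw (strong privacy, weaker faithfulness), while a large epsilon concentrates the report on the true top-k features; the trade-off the abstract describes is exactly this tension.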