Explainability algorithms such as LIME have helped make machine learning systems more transparent and fair, qualities that are important in commercial use cases. However, recent work has shown that LIME's naive sampling strategy can be exploited by an adversary to conceal biased, harmful behavior. We propose to make LIME more robust by training a generative adversarial network to sample more realistic synthetic data, which the explainer then uses to generate explanations. Our experiments show that, compared to vanilla LIME, the proposed method is more accurate at detecting biased, adversarial behavior across three real-world datasets, reaching up to 99.94\% top-1 accuracy in some cases, while maintaining comparable explanation quality.
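For intuition, the following is a minimal sketch of the core idea under stated assumptions, not the paper's actual implementation: a LIME-style local surrogate whose perturbation neighborhood is drawn from a trained GAN generator rather than from LIME's default Gaussian sampler. The names `generator` (with hypothetical `latent_dim` and `sample` members) and `black_box` are illustrative stand-ins.

\begin{verbatim}
# Sketch only: `generator` is assumed to be a pre-trained GAN generator
# producing realistic synthetic rows; `black_box` is the (possibly
# adversarial) classifier under audit, with a predict_proba method.
import numpy as np
from sklearn.linear_model import Ridge

def explain_with_gan(x, black_box, generator,
                     n_samples=5000, kernel_width=0.75):
    """LIME-style local surrogate whose neighborhood comes from a GAN
    instead of naive Gaussian perturbations of the instance x."""
    # 1. Draw a realistic neighborhood from the generator.
    z = np.random.randn(n_samples, generator.latent_dim)  # latent noise
    neighbors = generator.sample(z)        # (n_samples, n_features)

    # 2. Query the model on realistic (on-manifold) data only, so an
    #    out-of-distribution detector cannot mask biased behavior.
    y = black_box.predict_proba(neighbors)[:, 1]

    # 3. Weight each neighbor by its proximity to the instance x.
    dists = np.linalg.norm(neighbors - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)

    # 4. Fit a weighted linear surrogate; its coefficients serve as
    #    the per-feature explanation around x.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(neighbors, y, sample_weight=weights)
    return surrogate.coef_
\end{verbatim}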