The successful deployment of artificial intelligence (AI) in many domains, from healthcare to hiring, requires its responsible use, particularly with regard to model explanations and privacy. Explainable artificial intelligence (XAI) provides more information to help users understand model decisions, yet this additional knowledge exposes new risks for privacy attacks. Hence, providing explanations can harm privacy. We study this risk for image-based model inversion attacks and identify several attack architectures with increasing performance for reconstructing private image data from model explanations. We develop several multi-modal transposed CNN architectures that achieve significantly higher inversion performance than using the target model prediction alone. These XAI-aware inversion models are designed to exploit the spatial knowledge in image explanations. To understand which explanations pose a higher privacy risk, we analyze how various explanation types and factors influence inversion performance. Even when the target model does not provide explanations, we further demonstrate increased inversion performance against such non-explainable target models by exploiting explanations of surrogate models through attention transfer. This method first inverts an explanation from the target prediction, then reconstructs the target image. These threats highlight the urgent and significant privacy risks of explanations and call attention to the need for new privacy-preservation techniques that balance the dual requirements of AI explainability and privacy.
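The following is a minimal sketch, not the authors' exact architecture, of how an XAI-aware inversion model can fuse the two modalities described above: the target model's flat prediction vector and a spatial saliency-map explanation, decoded by transposed convolutions into a reconstructed image. Class count, image size, and all layer widths are illustrative assumptions.

```python
# Sketch of a multi-modal transposed-CNN inversion model (assumed sizes).
import torch
import torch.nn as nn

class XAIAwareInversion(nn.Module):
    def __init__(self, num_classes=100, img_channels=1):
        super().__init__()
        # Branch 1: lift the flat prediction vector into a coarse 8x8 feature map.
        self.pred_branch = nn.Sequential(
            nn.Linear(num_classes, 64 * 8 * 8),
            nn.ReLU(inplace=True),
        )
        # Branch 2: encode the saliency-map explanation, preserving spatial layout.
        self.expl_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(inplace=True),
        )
        # Decoder: fuse both modalities and upsample back to image resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, img_channels, kernel_size=4, stride=2, padding=1),  # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, prediction, explanation):
        # prediction: (B, num_classes); explanation: (B, 1, 64, 64) saliency map.
        p = self.pred_branch(prediction).view(-1, 64, 8, 8)
        e = self.expl_branch(explanation)
        return self.decoder(torch.cat([p, e], dim=1))  # (B, img_channels, 64, 64)


# Example usage with dummy tensors standing in for a target model's outputs.
model = XAIAwareInversion()
pred = torch.softmax(torch.randn(4, 100), dim=1)
expl = torch.rand(4, 1, 64, 64)
reconstruction = model(pred, expl)  # (4, 1, 64, 64)
```

The design choice reflected here is the one the abstract emphasizes: the explanation branch keeps its 2D structure so the spatial knowledge in the explanation is carried directly into the transposed-CNN decoder, rather than being flattened alongside the prediction vector.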