As a booming research area over the past decade, deep learning has been driven by big data collected and processed at an unprecedented scale. However, sensitive information in the collected training data raises privacy concerns. Recent research has shown that deep learning models are vulnerable to various privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the performance of these attacks varies from model to model. In this paper, we conduct empirical analyses to answer a fundamental question: Does model architecture affect model privacy? We investigate several representative model architectures, from CNNs to Transformers, and show that Transformers are generally more vulnerable to privacy attacks than CNNs. We further demonstrate that the micro designs of activation layers, stem layers, and bias parameters are the major reasons why CNNs are more resilient to privacy attacks than Transformers. We also find that the presence of attention modules is another reason why Transformers are more vulnerable to privacy attacks. We hope our discovery can shed new light on defending against the investigated privacy attacks and help the community build privacy-friendly model architectures.