Recent advancements in Deep Learning (DL) for solving complex real-world tasks have led to its widespread adoption in practical applications. However, this opportunity comes with significant underlying risks: many of these models are trained on privacy-sensitive data, making them an exposed threat surface for privacy violations. Furthermore, the widespread use of cloud-based Machine-Learning-as-a-Service (MLaaS), owing to its robust infrastructure support, has broadened the threat surface to include a variety of remote side-channel attacks. In this paper, we first identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in DL implementations, originating from a non-constant-time branching operation in PyTorch, a widely used DL framework. We further demonstrate a practical inference-time attack in which an adversary with user-level privileges and hard-label black-box access to an MLaaS deployment can exploit Class Leakage to compromise the privacy of MLaaS users. DL models are also vulnerable to Membership Inference Attacks (MIA), in which the adversary's objective is to deduce whether a particular data point was used to train the model. As a separate case study, we demonstrate that a DL model secured with differential privacy (a popular countermeasure against MIA) remains vulnerable to MIA by an adversary exploiting Class Leakage. We develop an easy-to-implement countermeasure that makes the branching operation constant-time, which alleviates Class Leakage and also aids in mitigating MIA. To validate our approach, we train five state-of-the-art pre-trained DL models on two standard image classification benchmarks, CIFAR-10 and CIFAR-100, across two computing environments with Intel Xeon and Intel i7 processors.
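To make the leakage pattern concrete, the minimal Python sketch below contrasts a data-dependent branch, whose execution time varies with the predicted class, against a constant-time rewrite that computes both paths and selects arithmetically. This is an illustrative assumption, not the paper's actual implementation: the function names (`branchy_postprocess`, `constant_time_postprocess`) and the branch condition are hypothetical and do not reproduce the PyTorch internals in which Class Leakage was identified.

```python
import time

import torch


def branchy_postprocess(logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical post-processing with a data-dependent branch: the code
    path, and hence the execution time, depends on the predicted class --
    the general pattern behind a Class-Leakage-style timing channel."""
    if int(torch.argmax(logits)) == 0:
        return torch.softmax(logits, dim=0)  # short path
    # longer path: extra normalization work before the softmax
    return torch.softmax(logits / (logits.norm() + 1e-8), dim=0)


def constant_time_postprocess(logits: torch.Tensor) -> torch.Tensor:
    """Constant-time rewrite: both paths are computed unconditionally and
    the result is selected arithmetically, so the running time no longer
    depends on the predicted class."""
    short = torch.softmax(logits, dim=0)
    long_ = torch.softmax(logits / (logits.norm() + 1e-8), dim=0)
    select = (torch.argmax(logits) == 0).to(logits.dtype)
    return select * short + (1.0 - select) * long_


if __name__ == "__main__":
    # Rough timing comparison over many random inputs; on the branchy
    # version, inputs taking different paths show different latencies.
    samples = [torch.randn(10) for _ in range(1000)]
    for fn in (branchy_postprocess, constant_time_postprocess):
        start = time.perf_counter()
        for s in samples:
            fn(s)
        print(f"{fn.__name__}: {time.perf_counter() - start:.4f}s")
```

The arithmetic-select idiom mirrors the spirit of the constant-time branching countermeasure described above: every input pays the cost of both paths, trading a small amount of throughput for timing uniformity across classes.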