Data confidentiality is becoming a significant concern, especially in the cloud computing era. Memory access patterns have been shown to leak critical information, such as security keys and a program's spatial and temporal behavior. This leakage poses an even greater privacy challenge in machine learning models with embedding tables. Embedding tables are routinely used to learn categorical features from training data. Knowing merely which embedding table entries are accessed, even without the data stored in them, is enough to compromise the categorical inputs to the model. Embedding entries are privacy-sensitive since they disclose valuable properties about the user. Oblivious RAM (ORAM) and its enhanced variants, such as PathORAM, have emerged as viable solutions for hiding leakage from memory access streams. In this work, we present LAORAM, an ORAM framework explicitly designed to protect user privacy during embedding table training. LAORAM exploits a unique property of training: the samples that will be used in the future are known beforehand. LAORAM preprocesses the training samples to identify memory blocks that are accessed together in the near future, and tries to assign these blocks to as few paths as possible within the PathORAM infrastructure by combining blocks accessed together into superblocks. To further increase performance, LAORAM uses a fat-tree structure for PathORAM, which reduces the number of background evictions required and improves stash usage. We have evaluated LAORAM using embedding table configurations from both a recommendation model (DLRM) and an NLP model (XLM-R). LAORAM performs 5x faster than PathORAM on a recommendation dataset (Kaggle) and 5.4x faster on an NLP dataset (XNLI), while providing the same security guarantees as the original PathORAM.
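The preprocessing idea can be illustrated with a minimal sketch: given the upcoming training batches (known in advance), group embedding-table indices that appear in the same batch into fixed-size superblocks, so that each superblock can later be mapped to a single PathORAM path and fetched together. The function name, the greedy first-come grouping policy, and the fixed superblock size below are illustrative assumptions, not LAORAM's actual algorithm.

```python
def build_superblocks(upcoming_batches, superblock_size):
    """Greedily group embedding-table indices that are accessed in the
    same upcoming training batch into superblocks of at most
    `superblock_size` indices.

    Illustrative sketch only: the greedy policy and fixed superblock
    size are assumptions, not the paper's actual assignment algorithm.
    """
    superblocks = []
    assigned = set()          # each index joins at most one superblock
    for batch in upcoming_batches:   # future batches are known beforehand
        group = []
        for idx in batch:            # indices accessed together
            if idx in assigned:
                continue
            group.append(idx)
            assigned.add(idx)
            if len(group) == superblock_size:
                superblocks.append(tuple(group))
                group = []
        if group:                    # flush any partial group
            superblocks.append(tuple(group))
    return superblocks

# Each superblock would then be mapped to one PathORAM path, so a
# single path fetch serves several co-accessed embedding rows.
batches = [[3, 7, 12], [7, 5, 9], [1, 12]]
print(build_superblocks(batches, 2))
# → [(3, 7), (12,), (5, 9), (1,)]
```

In this toy run, indices 3 and 7 from the first batch form one superblock, so a later access to either would pull both along the same path, amortizing the ORAM path read.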