Approximate nearest neighbour (ANN) search is an essential component of search engines, recommendation systems, and many other applications. Many recent works focus on learning-based, data-distribution-dependent hashing and achieve good retrieval performance. However, due to the increasing demand for users' privacy and security, we often need to remove users' data from Machine Learning (ML) models to satisfy specific privacy and security requirements. This need requires an ANN search algorithm to support fast online data deletion and insertion. Current learning-based hashing methods require retraining the hash function, which is prohibitively expensive for large-scale data. To address this problem, we propose a novel data-dependent hashing method named unfolded self-reconstruction locality-sensitive hashing (USR-LSH). USR-LSH unfolds the optimization updates for instance-wise data reconstruction, which preserves data information better than data-independent LSH. Moreover, USR-LSH supports fast online data deletion and insertion without retraining. To the best of our knowledge, we are the first to address machine unlearning for retrieval problems. Empirically, we demonstrate that USR-LSH outperforms state-of-the-art data-distribution-independent LSH in ANN tasks in terms of precision and recall. We also show that USR-LSH achieves significantly faster data deletion and insertion than learning-based data-dependent hashing.