Deep learning (DL) models for natural language processing (NLP) tasks often handle private data, demanding protection against breaches and disclosures. Data protection laws, such as the European Union's General Data Protection Regulation (GDPR), reinforce the need for privacy. Although many privacy-preserving NLP methods have been proposed in recent years, no categorization has yet been introduced to organize them, making it hard to follow the progress of the literature. To close this gap, this article systematically reviews over sixty DL methods for privacy-preserving NLP published between 2016 and 2020, covering theoretical foundations, privacy-enhancing technologies, and an analysis of their suitability for real-world scenarios. First, we introduce a novel taxonomy that classifies the existing methods into three categories: data safeguarding methods, trusted methods, and verification methods. Second, we present an extensive summary of privacy threats, datasets for applications, and metrics for privacy evaluation. Third, throughout the review, we describe privacy issues in the NLP pipeline from a holistic perspective. Further, we discuss open challenges in privacy-preserving NLP regarding data traceability, computation overhead, dataset size, the prevalence of human biases in embeddings, and the privacy-utility tradeoff. Finally, we present future research directions to guide the subsequent research and development of privacy-preserving NLP models.