As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance. The absence of trustworthy human supervision over the data collection process exposes organizations to security vulnerabilities; training data can be manipulated to control or degrade the downstream behaviors of learned models. The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in this space. In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy of these attacks.