With the rise of artificial intelligence and machine learning in modern computing, a major concern surrounding these techniques is ensuring their privacy and security against adversaries. This survey covers the most representative papers on poisoning attacks against supervised machine learning models. We first provide a taxonomy to categorize existing studies and then present detailed summaries of selected papers. We summarize and compare the methodologies and limitations of the existing literature. We conclude with potential improvements and future directions for both exploiting and defending against poisoning attacks on supervised models, and we pose several open research questions to encourage and inspire future work.