Adversarial Machine Learning has emerged as a substantial subfield of Computer Science due to a lack of robustness in the models we train, along with crowdsourcing practices that enable attackers to tamper with data. In the last two years, interest has surged in adversarial attacks on graphs, yet the Graph Classification setting remains nearly untouched. Since a Graph Classification dataset consists of discrete graphs with class labels, related work has forgone direct gradient optimization in favor of an indirect Reinforcement Learning approach. We will study the novel problem of Data Poisoning (training-time) attacks on Neural Networks for Graph Classification using Reinforcement Learning Agents.
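
To make the setting concrete, the sketch below (an illustration under assumed definitions, not the project's implementation) frames poisoning of a graph-classification training set as a sequential decision process: the state is the perturbed training set, an action flips one edge of one training graph under a fixed budget, and the reward is the drop in a victim classifier's clean test accuracy. The environment class PoisonEnv, the edge-count victim toy_victim_accuracy, and the random agent are hypothetical stand-ins chosen so the example runs with only the Python standard library; the project would replace them with a Graph Neural Network victim and a learned Reinforcement Learning policy.

# Illustrative sketch (hypothetical names, not the project's code): data poisoning
# of a graph-classification training set posed as a sequential decision process.
import random
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Graph:
    n: int                                  # number of nodes
    edges: Set[Tuple[int, int]] = field(default_factory=set)
    label: int = 0                          # class label

def toy_victim_accuracy(train: List[Graph], test: List[Graph]) -> float:
    """Stand-in victim: predict class 1 when a graph's edge count exceeds the
    mean edge count of class-0 training graphs (a deliberately weak model)."""
    class0 = [len(g.edges) for g in train if g.label == 0] or [0]
    threshold = sum(class0) / len(class0)
    correct = sum((len(g.edges) > threshold) == bool(g.label) for g in test)
    return correct / len(test)

class PoisonEnv:
    """State: the (possibly perturbed) training set. Action: flip one edge in
    one training graph. Reward: drop in the victim's clean test accuracy."""
    def __init__(self, train: List[Graph], test: List[Graph], budget: int):
        self.train, self.test, self.budget = train, test, budget
        self.base_acc = toy_victim_accuracy(train, test)
        self.used = 0

    def step(self, graph_idx: int, edge: Tuple[int, int]) -> float:
        g = self.train[graph_idx]
        g.edges.symmetric_difference_update({tuple(sorted(edge))})  # flip the edge
        self.used += 1
        acc = toy_victim_accuracy(self.train, self.test)
        return self.base_acc - acc          # reward: accuracy degradation

    def done(self) -> bool:
        return self.used >= self.budget

def random_graph(n: int, p: float, label: int) -> Graph:
    edges = {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p}
    return Graph(n, edges, label)

if __name__ == "__main__":
    random.seed(0)
    train = [random_graph(8, 0.2, 0) for _ in range(20)] + [random_graph(8, 0.5, 1) for _ in range(20)]
    test = [random_graph(8, 0.2, 0) for _ in range(10)] + [random_graph(8, 0.5, 1) for _ in range(10)]
    env = PoisonEnv(train, test, budget=30)
    # A random agent stands in for the Reinforcement Learning policy to be learned.
    while not env.done():
        idx = random.randrange(len(env.train))
        u, v = random.sample(range(env.train[idx].n), 2)
        env.step(idx, (u, v))
    print(f"clean accuracy {env.base_acc:.2f} -> poisoned accuracy "
          f"{toy_victim_accuracy(env.train, env.test):.2f}")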