ICML'21 | Six Selected Graph Neural Network Papers (Model Robustness)

Source: the GEAR图学习 (GEAR Graph Learning) WeChat public account

The International Conference on Machine Learning (ICML) is a top-tier annual machine learning conference organized by the International Machine Learning Society (IMLS); this year's paper acceptance rate was 21.48%. In this post we share six ICML 2021 papers on the robustness of graph neural networks.

1. Research Background

As a powerful tool, graph neural networks (GNNs) are widely used to analyze graph-structured data and have achieved great success on graph analysis tasks such as node classification and link prediction. However, research in recent years has shown that GNNs are easily affected by small perturbations; such perturbations are called adversarial perturbations, and graphs constructed through adversarial perturbations are called adversarial examples. Studies show that adversarial examples are widespread in graph learning and can substantially degrade GNN performance, which limits further applications. The security of GNNs has therefore become one of the research hotspots in this field, covering model robustness, defenses against model stealing, privacy protection, and related aspects. This post introduces the six ICML'21 papers on model robustness.
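To make the notion of a graph adversarial perturbation concrete, here is a minimal toy sketch (not taken from any of the six papers; the adjacency matrix, node features, and the flipped edge are made-up assumptions): adding a single edge is already enough to visibly shift a target node's GCN-style aggregated representation.

```python
# Minimal sketch: how a single adversarial edge flip changes a GCN-style
# aggregation for a target node. All values are toy assumptions.
import numpy as np

A = np.array([[0, 1, 0, 0],          # toy adjacency matrix (4 nodes)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],            # toy node features
              [0.9, 0.1],
              [0.1, 0.9],
              [0.0, 1.0]])

def gcn_propagate(A, X):
    """One propagation step with the symmetrically normalized adjacency."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

clean = gcn_propagate(A, X)

A_attacked = A.copy()
A_attacked[0, 3] = A_attacked[3, 0] = 1.0      # adversarially add edge (0, 3)
perturbed = gcn_propagate(A_attacked, X)

# One edge flip already shifts node 0's representation noticeably.
print("node 0 before:", clean[0], "after:", perturbed[0])
```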

2. Robustness of Graph Neural Networks

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

Authors: Zhao, Xin and Zhang, Zeru and Zhang, Zijie and Wu, Lingfei and Jin, Jiayin and Zhou, Yang and Jin, Ruoming and Dou, Dejing and Yan, Da
Affiliations: Auburn University & JD.com, et al.
Paper link: proceedings.mlr.press/v

Highlight: This paper proposes an attack-agnostic graph-adaptive 1-Lipschitz neural network, ERNN, for improving the robustness of deep multiple graph learning while achieving remarkable expressive power.
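The ERNN cell itself is specific to the paper, but the 1-Lipschitz property it enforces can be illustrated generically: if every linear map has spectral norm at most 1 and the activation is 1-Lipschitz, the layer cannot amplify an input perturbation. The sketch below uses plain spectral normalization for this purpose; it is a generic illustration, not the paper's ERNN construction, and all names and values in it are assumptions.

```python
# Generic sketch of a 1-Lipschitz layer via spectral normalization.
# NOT the paper's ERNN cell; only illustrates the Lipschitz constraint.
import numpy as np

def spectral_norm(W, n_iter=50):
    """Estimate the largest singular value of W by power iteration."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def lipschitz_layer(W, x):
    """Apply W rescaled to spectral norm <= 1, then a 1-Lipschitz ReLU."""
    W_hat = W / max(spectral_norm(W), 1.0)     # only shrink, never enlarge
    return np.maximum(W_hat @ x, 0.0)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))
x = rng.normal(size=8)
delta = 1e-2 * rng.normal(size=8)              # small input perturbation

out_gap = np.linalg.norm(lipschitz_layer(W, x + delta) - lipschitz_layer(W, x))
print("input gap :", np.linalg.norm(delta))
print("output gap:", out_gap)                  # never larger than the input gap
```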

Graph Neural Networks Inspired by Classical Iterative Algorithms

Authors: Yang, Yongyi and Liu, Tang and Wang, Yangkun and Zhou, Jinjing and Gan, Quan and Wei, Zhewei and Zhang, Zheng and Huang, Zengfeng and Wipf, David
Affiliations: Fudan University & Shanghai Jiao Tong University, et al.
Paper link: proceedings.mlr.press/v

Highlight: To at least partially address these issues within a simple transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely, proximal gradient descent and iterative reweighted least squares (IRLS).
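The classical connection is easiest to see with gradient descent on the graph-regularized least-squares energy E(Y) = ||Y − X||_F² + λ·tr(YᵀLY): each descent step mixes a node's features with its neighbors', so an unrolled optimizer already looks like a stack of message-passing layers. The sketch below unrolls a few such steps; the step size, λ, and the toy graph are assumptions, and this is a generic illustration rather than the paper's exact layer (which also covers IRLS-style reweighting).

```python
# Sketch: unrolling gradient descent on the graph smoothing energy
#   E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y)
# yields a GNN-style propagation rule. Generic illustration only.
import numpy as np

def unrolled_propagation(A, X, lam=1.0, step=0.1, n_layers=10):
    D = np.diag(A.sum(axis=1))
    L = D - A                                        # graph Laplacian
    Y = X.copy()
    for _ in range(n_layers):
        grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)   # gradient of E at Y
        Y = Y - step * grad                          # one descent step = one "layer"
    return Y

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
print(unrolled_propagation(A, X))
```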

Elastic Graph Neural Networks

Authors: Liu, Xiaorui and Jin, Wei and Ma, Yao and Li, Yaxin and Liu, Hua and Wang, Yiqi and Yan, Ming and Tang, Jiliang
Affiliations: Michigan State University (MSU) & Shandong University, et al.
Paper link: proceedings.mlr.press/v
Code link: github.com/lxiaorui/Ela

Highlight: In particular, we propose a novel and general message passing scheme into GNNs.
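The "elastic" part refers to combining the usual ℓ2 graph smoothing with an ℓ1 penalty on feature differences across edges, so that a few large (possibly adversarial) differences are tolerated rather than being penalized quadratically. The sketch below only shows the ℓ1 ingredient, soft-thresholding of edge differences (the proximal operator of the ℓ1 norm); it is a simplification with made-up values, not the paper's full elastic message passing (EMP) operator.

```python
# Sketch of the l1 ingredient of elastic graph smoothing: soft-thresholding
# of feature differences across edges. Simplified illustration only.
import numpy as np

def incidence_matrix(edges, n_nodes):
    """Oriented edge-node incidence matrix: (Delta @ X)[e] = X[u] - X[v]."""
    Delta = np.zeros((len(edges), n_nodes))
    for e, (u, v) in enumerate(edges):
        Delta[e, u], Delta[e, v] = 1.0, -1.0
    return Delta

def soft_threshold(Z, thresh):
    """Proximal operator of the l1 norm: shrink small entries to zero."""
    return np.sign(Z) * np.maximum(np.abs(Z) - thresh, 0.0)

edges = [(0, 1), (1, 2), (2, 3)]
X = np.array([[0.0], [0.1], [0.2], [5.0]])   # node 3 looks like an outlier/attack
Delta = incidence_matrix(edges, 4)

diffs = Delta @ X                             # feature differences along edges
print(soft_threshold(diffs, thresh=0.15))     # small diffs vanish, the large one is only shrunk
```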

Interpretable Stability Bounds for Spectral Graph Filters

Authors: Henry Kenlay, Dorina Thanou, Xiaowen Dong
Affiliations: University of Oxford & EPFL
Paper link: proceedings.mlr.press/v

Highlight: In this paper, we study filter stability and provide a novel and interpretable upper bound on the change of filter output, where the bound is expressed in terms of the endpoint degrees of the deleted and newly added edges, as well as the spatial proximity of those edges.
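As a rough sketch of the type of statement involved (placeholder constants; this is not the paper's exact theorem), a Lipschitz-type argument controls the change in filter output by the perturbation of the Laplacian, and for the normalized Laplacian each added or deleted edge (u, v) contributes a term driven by its endpoint degrees:

```latex
% Generic sketch of a filter-stability argument (placeholder constants;
% not the paper's exact bound). g is a spectral filter, L and \tilde{L}
% are the normalized Laplacians before and after the edge perturbation.
\[
  \|g(\tilde{L})x - g(L)x\|_2 \;\le\; C \,\|\tilde{L} - L\|_2 \,\|x\|_2 ,
\qquad
  \|\tilde{L} - L\|_2 \;\lesssim\;
  \sum_{(u,v)\in E_{\mathrm{add}}\cup E_{\mathrm{del}}} \frac{1}{\sqrt{d_u d_v}} .
\]
% Interpretation: perturbing edges whose endpoints have high degree
% changes the filter output less.
```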

Information Obfuscation of Graph Neural Networks

Authors: Liao, Peiyuan and Zhao, Han and Xu, Keyulu and Jaakkola, Tommi and Gordon, Geoffrey J. and Jegelka, Stefanie and Salakhutdinov, Ruslan
Affiliations: CMU & UIUC
Paper link: proceedings.mlr.press/v
Code link: github.com/liaopeiyuan/

Highlight: In this paper, we study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
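One standard way to formalize this kind of attribute obfuscation is a min-max game between the GNN encoder and an adversary that tries to recover the sensitive attribute from the learned representations; the formulation below is a generic version of that idea, not necessarily the paper's exact objective.

```latex
% Generic min-max formulation of attribute obfuscation (not necessarily
% the paper's exact loss). theta: GNN encoder / task parameters,
% phi: adversary predicting the sensitive attribute from embeddings.
\[
  \min_{\theta}\ \max_{\phi}\;
  \mathcal{L}_{\mathrm{task}}(\theta)\;-\;\lambda\,\mathcal{L}_{\mathrm{adv}}(\theta,\phi)
\]
% The adversary maximizes -L_adv (i.e. minimizes its prediction loss),
% while the encoder trades task accuracy against making that prediction
% hard, with lambda controlling the trade-off.
```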

Integrated Defense for Resilient Graph Matching

Authors: Ren, Jiaxiang and Zhang, Zijie and Jin, Jiayin and Zhao, Xin and Wu, Sixing and Zhou, Yang and Shen, Yelong and Che, Tianshi and Jin, Ruoming and Dou, Dejing
Affiliations: Auburn University & Peking University, et al.
Paper link: proceedings.mlr.press/v

Highlight: In this paper, we identify and study two types of unique topology attacks in graph matching: inter-graph dispersion and intra-graph assembly attacks.

Published on 2021-10-16 23:57