ICML 2020 wrapped up a few days ago, and graph neural networks were again one of this year's hot topics. This post collects the GNN papers, with a link and a short summary for each as a reference. ICML does not seem to tag papers by topic, so this is only a summary of the GNN papers I came across myself; I have surely missed quite a few, and additions are welcome.
This year ICML also ran a workshop on graph representation learning with quite a few interesting papers; the link is below. Workshop papers are not covered in the list that follows.
https://grlplus.github.io
This post first appeared in the author's Zhihu column 图神经网络实战 (Graph Neural Networks in Practice), which hosts a variety of GNN-related articles; you are welcome to follow it.
https://zhuanlan.zhihu.com/graph-neural-networks
We address two fundamental questions about graph neural networks (GNNs).
In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
Here, we bridge this gap by proposing mathematically grounded generalizations of graph convolutional networks (GCN) to (products of) constant curvature spaces.
In this paper, we present NeuralSparse, a supervised graph sparsification technique that improves generalization power by learning to remove potentially task-irrelevant edges from input graphs.
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs) that generalizes existing stochastic regularization methods for training GNNs. (An upgraded take on dropout and DropEdge.)
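The stochastic regularizers this framework generalizes are simple to sketch. Below is a minimal NumPy illustration of DropEdge-style regularization (randomly masking a fraction of edges at training time); the function name and drop rate are illustrative, not from the paper.

```python
import numpy as np

def drop_edge(adj, drop_rate=0.2, rng=None):
    """Randomly zero out a fraction of the edges of a symmetric
    adjacency matrix, in the spirit of DropEdge regularization."""
    rng = np.random.default_rng(rng)
    # Sample a symmetric keep-mask over the strict upper triangle,
    # so each undirected edge is kept or dropped as a whole.
    upper = np.triu(rng.random(adj.shape) >= drop_rate, k=1)
    mask = upper | upper.T
    return adj * mask

adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
sparser = drop_edge(adj, drop_rate=0.5, rng=0)
```

The adaptive-sampling framework in the paper learns the drop probabilities instead of fixing them globally, but the forward pass still multiplies messages by a sampled binary mask like the one above.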
We propose continuous graph neural networks (CGNN), which generalise existing graph neural networks with discrete dynamics in that they can be viewed as a specific discretisation scheme.
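The discrete-layers-as-discretization view can be made concrete with a generic graph ODE such as dH/dt = (Â − I)H + H₀ integrated by explicit Euler steps, where each step looks like one residual propagation layer. This is a hedged sketch of the general idea, not the exact CGNN dynamics; all names and constants are illustrative.

```python
import numpy as np

def euler_graph_ode(a_norm, h0, step=0.1, n_steps=20):
    """Integrate dH/dt = (A_norm - I) H + H0 with explicit Euler.
    Each Euler step resembles one residual graph-propagation layer."""
    h = h0.copy()
    eye = np.eye(a_norm.shape[0])
    for _ in range(n_steps):
        h = h + step * ((a_norm - eye) @ h + h0)
    return h

# Row-normalized adjacency of a 3-node path graph.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
a_norm = adj / adj.sum(axis=1, keepdims=True)
h0 = np.array([[1.0], [0.0], [0.0]])
h_t = euler_graph_ode(a_norm, h0)
```

Shrinking `step` while growing `n_steps` moves from a stack of discrete layers toward the continuous-depth limit the paper studies.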
In this paper, we study the problem of designing and analyzing deep graph convolutional networks.
In this paper, we study the graph classification problem from the graph homomorphism perspective.
In this study, we report the first systematic exploration and assessment of incorporating self-supervision into GCNs.
This paper presents a new Graph Neural Network (GNN) type using feature-wise linear modulation (FiLM).
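The core of FiLM-style message passing is that the *target* node of each edge produces a feature-wise scale γ and shift β, which modulate the linearly transformed source features. A minimal NumPy sketch, with all weight names and dimensions chosen for illustration:

```python
import numpy as np

def film_message_passing(h, edges, w, w_gamma, w_beta):
    """GNN-FiLM-style update: each edge's target node computes a
    feature-wise scale (gamma) and shift (beta) that modulate the
    transformed source-node features before aggregation."""
    out = np.zeros_like(h @ w)
    for src, dst in edges:
        gamma = h[dst] @ w_gamma   # scale, computed from the target node
        beta = h[dst] @ w_beta     # shift, computed from the target node
        out[dst] += gamma * (h[src] @ w) + beta
    return out

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))              # 4 nodes, 8 features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
w, w_gamma, w_beta = (rng.standard_normal((8, 8)) for _ in range(3))
msgs = film_message_passing(h, edges, w, w_gamma, w_beta)
```

In the paper the modulation is additionally conditioned per edge type; the sketch above keeps a single relation for brevity.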
We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs.
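Contrasting two structural views of a graph typically boils down to an InfoNCE-style objective: embeddings of the same node under the two views form the positive pair, all other pairs are negatives. A hedged NumPy sketch of that loss (a generic InfoNCE, not the paper's exact objective):

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two views:
    row i of z1 and row i of z2 are the positive pair."""
    def normalize(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    z1, z2 = normalize(z1), normalize(z2)
    sim = z1 @ z2.T / tau                       # cosine similarities / temperature
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives sit on the diagonal

rng = np.random.default_rng(0)
z_view1 = rng.standard_normal((5, 16))
z_view2 = z_view1 + 0.01 * rng.standard_normal((5, 16))  # near-identical view
loss = info_nce_loss(z_view1, z_view2)
```

The paper's contribution lies in *which* structural views (e.g., first-order neighborhoods vs. diffusion) are contrasted, not in the loss itself.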
We propose a new framework, called Poisson learning, for graph based semi-supervised learning at very low label rates.
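Poisson learning replaces the usual Laplace equation of label propagation with a graph Poisson equation Lu = f whose sources f are the centered labels at the labeled nodes. A minimal iterative sketch under that reading (a stripped-down version for illustration, not the authors' code):

```python
import numpy as np

def poisson_learning(w, labels, n_iters=200):
    """Iteratively solve the graph Poisson equation L u = f, where f
    places centered one-hot labels at the labeled nodes."""
    n = w.shape[0]
    classes = sorted({c for _, c in labels})
    f = np.zeros((n, len(classes)))
    for node, cls in labels:
        f[node, classes.index(cls)] = 1.0
    f -= f.mean(axis=0)                      # center the label sources
    deg = w.sum(axis=1, keepdims=True)
    u = np.zeros_like(f)
    for _ in range(n_iters):
        u = (w @ u + f) / deg                # Jacobi-style update
    return u.argmax(axis=1)

# Tiny two-cluster graph: nodes 0-2 and 3-5, one labeled node per cluster.
w = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    w[i, j] = w[j, i] = 1.0
pred = poisson_learning(w, labels=[(0, 0), (5, 1)])
```

Because the sources are centered, the solution does not saturate to the label values the way Laplace-based propagation does, which is what makes the method behave well at very low label rates.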
We propose the Interferometric Graph Transform (IGT), which is a new class of deep unsupervised graph convolutional neural network for building graph representations.
In this paper, we propose a graph clustering approach that addresses these limitations of SC.
Based on this, we develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix, and importantly reduces the graph generation time complexity to O((n + m) log n). (A very elegant method.)
In this paper, we propose a novel template-free approach called G2Gs by transforming a target molecular graph into a set of reactant molecular graphs.
In this paper, we propose a new hierarchical graph encoder-decoder that employs significantly larger and more flexible graph motifs as basic building blocks.
Here we present a general framework for learning simulation, and provide a single model implementation that yields state-of-the-art performance across a variety of challenging physical domains, involving fluids, rigid solids, and deformable materials interacting with one another.
In this work, we develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
Here we propose a framework for learning AMG prolongation operators for linear systems with sparse symmetric positive (semi-) definite matrices.
In this work, we propose GraphOpt, an end-to-end framework that jointly learns an implicit model of graph structure formation and discovers an underlying optimization mechanism in the form of a latent objective function.
Program repair is challenging for two reasons: First, it requires reasoning and tracking symbols across source code and diagnostic feedback. Second, labeled datasets available for program repair are relatively small. In this work, we propose novel solutions to these two challenges.
Here, we propose a graph neural network based relation prediction framework, GraIL, that reasons over local subgraph structures and has a strong inductive bias to learn entity-independent relational semantics.
We present a framework that models a percept as weak relations between a current utterance and its history.
To address this gap, we leverage the original graph convolution in GCN and propose a Low-pass Collaborative Filter (LCF) to make it applicable to the large graph.
We propose a novel Bayesian meta-learning approach to effectively learn the posterior distributions of the prototype vectors of tasks, where the initial prior of the prototype vectors is parameterized with a graph neural network on the global task graph.