Graphs are the most common form of structured data representation used in machine learning. They model, however, only pairwise relations between nodes and are not designed to encode the higher-order relations found in many real-world datasets. To model such complex relations, hypergraphs have proven to be a natural representation. Learning node representations in a hypergraph is more complex than in a graph because it involves information propagation at two levels: within every hyperedge and across the hyperedges. Most current approaches first transform the hypergraph structure into a graph so that existing geometric deep learning algorithms can be applied. This transformation leads to information loss and sub-optimal exploitation of the hypergraph's expressive power. We present HyperSAGE, a novel hypergraph learning framework that uses a two-level neural message passing strategy to accurately and efficiently propagate information through hypergraphs. The flexible design of HyperSAGE accommodates different ways of aggregating neighborhood information. Unlike the majority of related work, which is transductive, our approach, inspired by the popular GraphSAGE method, is inductive. It can therefore be applied to previously unseen nodes, facilitating deployment in problems such as evolving or partially observed hypergraphs. Through extensive experimentation, we show that HyperSAGE outperforms state-of-the-art hypergraph learning methods on representative benchmark datasets. We also demonstrate that the higher expressive power of HyperSAGE makes it more stable in learning node representations than the alternatives.
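To make the two-level message passing idea concrete, the sketch below illustrates one layer of intra-hyperedge aggregation followed by inter-hyperedge aggregation in a GraphSAGE-style update (concatenate, transform, normalize). This is a minimal illustrative example, not the paper's exact update rule; the function name `hypersage_layer`, the weight matrix `W`, and the choice of mean aggregation are assumptions made for the sketch.

```python
import numpy as np

def hypersage_layer(X, hyperedges, W, agg=np.mean):
    """One two-level message-passing step (illustrative sketch, not the exact HyperSAGE rule).

    X          : (num_nodes, d) node feature matrix
    hyperedges : list of lists; each inner list holds the node indices of one hyperedge
    W          : (2*d, d_out) weight matrix (would be learnable in a real model)
    agg        : permutation-invariant aggregator (e.g. np.mean, np.max)
    """
    num_nodes, d = X.shape
    new_X = np.zeros((num_nodes, W.shape[1]))
    for v in range(num_nodes):
        # Level 1: aggregate within every hyperedge containing v (excluding v itself)
        edge_msgs = []
        for e in hyperedges:
            if v in e:
                others = [u for u in e if u != v]
                if others:
                    edge_msgs.append(agg(X[others], axis=0))
        # Level 2: aggregate across the hyperedges incident to v
        neigh = agg(np.stack(edge_msgs), axis=0) if edge_msgs else np.zeros(d)
        # Combine with v's own features and transform (GraphSAGE-style concat + linear)
        h = np.concatenate([X[v], neigh]) @ W
        new_X[v] = np.maximum(h, 0)                      # ReLU
        new_X[v] /= (np.linalg.norm(new_X[v]) + 1e-8)    # l2-normalize, as in GraphSAGE
    return new_X

# Minimal usage on a toy hypergraph with 5 nodes and 3 hyperedges
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
hyperedges = [[0, 1, 2], [2, 3, 4], [1, 4]]
W = rng.normal(size=(8, 4))
print(hypersage_layer(X, hyperedges, W).shape)  # (5, 4)
```

Because the update for a node depends only on its own features and those of its hyperedge neighbors, rather than on a fixed global incidence structure, such a layer can be applied to nodes unseen during training, which is what makes the approach inductive.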