Hypergraphs generalize graphs to model higher-order correlations among entities and have been successfully adopted in various research domains. Meanwhile, the HyperGraph Neural Network (HGNN) is currently the de-facto method for hypergraph representation learning. However, HGNN targets single-hypergraph learning and resorts to a pre-concatenation approach when confronting multi-modal datasets, which leads to sub-optimal exploitation of the inter-correlations among multi-modal hypergraphs. HGNN also suffers from the over-smoothing issue, i.e., its performance drops significantly when more layers are stacked. To resolve these issues, we propose the Residual enhanced Multi-Hypergraph Neural Network, which can not only fuse multi-modal information from each hypergraph effectively, but also circumvent the over-smoothing issue associated with HGNN. We conduct experiments on two 3D benchmarks, the NTU and ModelNet40 datasets, and compare against multiple state-of-the-art methods. Experimental results demonstrate that both the residual hypergraph convolutions and the multi-fusion architecture improve the performance of the base model, and the combined model achieves a new state of the art. Code is available at \url{https://github.com/OneForward/ResMHGNN}.
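To make the idea of a residual hypergraph convolution concrete, the following is a minimal PyTorch sketch, not the paper's exact implementation: it assumes a precomputed normalized hypergraph Laplacian $G = D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2}$ (as in HGNN) and re-injects the initial features $X_0$ with a hypothetical mixing weight \texttt{alpha}, one common way to counter over-smoothing.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualHGNNConv(nn.Module):
    """Sketch of one hypergraph convolution with an initial-residual branch.

    G:  dense normalized hypergraph Laplacian, shape (N, N) (assumed precomputed)
    X:  current layer features, shape (N, dim)
    X0: layer-0 features, shape (N, dim), re-injected as the residual term
    """

    def __init__(self, dim, alpha=0.1):
        super().__init__()
        self.theta = nn.Linear(dim, dim)  # learnable feature transform
        self.alpha = alpha                # weight of the residual branch (assumed)

    def forward(self, X, X0, G):
        H = G @ X                                   # hypergraph propagation
        H = (1 - self.alpha) * H + self.alpha * X0  # residual connection
        return F.relu(self.theta(H))
\end{verbatim}

Under this sketch, a multi-hypergraph variant would run one such stack per modality-specific Laplacian and fuse the resulting representations, rather than pre-concatenating the modalities into a single hypergraph.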