Consider the unsupervised classification problem in random hypergraphs under the non-uniform Hypergraph Stochastic Block Model (HSBM) with two equal-sized communities, where each hyperedge appears independently with a probability depending only on the labels of its vertices. In this paper, the information-theoretic limits on the clustering accuracy and the strong consistency threshold are established, expressed in terms of the generalized Hellinger distance. Below the threshold, it is impossible to correctly assign every vertex to its community, and a lower bound on the expected mismatch ratio is derived. Above the threshold, the problem space (in some regimes) splits into two disjoint subspaces. When only the contracted adjacency matrix is given, one-stage spectral algorithms succeed with high probability in assigning every vertex correctly in the subspace far from the threshold but fail in the other. Two subsequent refinement algorithms are proposed that improve the clustering accuracy and attain the lowest possible mismatch ratio established by the information-theoretic analysis. The failure of spectral algorithms in the second subspace arises from the loss of information induced by tensor contraction. The origin of this loss and possible remedies for minimizing its impact are presented. Moreover, unlike in uniform hypergraphs, strong consistency can be achieved by aggregating information across all uniform layers, even when it is impossible within any single layer considered alone.
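The sketch below illustrates, under assumed toy parameters, the kind of one-stage spectral procedure on the contracted adjacency matrix described above: a two-community non-uniform HSBM is sampled layer by layer, each hyperedge is contracted onto the pairs it contains, and vertices are split by the sign of the second leading eigenvector. The choice of `layer_params`, the logarithmic scaling, and the sign-based rounding are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# --- Hypothetical parameters (illustration only, not from the paper) ---
n = 200                                    # vertices, two communities of size n/2
labels = np.repeat([0, 1], n // 2)
# per-layer "in/out" densities, scaled as c * log(n) / n^(m-1)
layer_params = {2: (6.0, 1.0), 3: (5.0, 0.5)}   # m -> (a_m, b_m)

# --- Sample a non-uniform HSBM and contract it to a weighted adjacency matrix ---
A = np.zeros((n, n))
for m, (a, b) in layer_params.items():
    p_in = min(1.0, a * np.log(n) / n ** (m - 1))
    p_out = min(1.0, b * np.log(n) / n ** (m - 1))
    for e in combinations(range(n), m):
        same = len({labels[v] for v in e}) == 1
        if rng.random() < (p_in if same else p_out):
            # contraction: each hyperedge adds weight to every pair it contains
            for u, v in combinations(e, 2):
                A[u, v] += 1
                A[v, u] += 1

# --- One-stage spectral step: sign of the second leading eigenvector ---
vals, vecs = np.linalg.eigh(A)             # eigenvalues in ascending order
v2 = vecs[:, -2]                           # eigenvector of the second-largest eigenvalue
pred = (v2 > 0).astype(int)

# mismatch ratio up to a global label swap
err = min(np.mean(pred != labels), np.mean(pred == labels))
print(f"mismatch ratio: {err:.3f}")
```

In the regime the abstract calls "far from the threshold" this single spectral step would already label every vertex correctly; closer to the threshold, a refinement step on top of such an initial estimate would be needed to reach the optimal mismatch ratio.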