A very popular class of models for networks posits that each node is represented by a point in a continuous latent space, and that the probability of an edge between nodes is a decreasing function of the distance between them in this latent space. We study the embedding problem for these models, of recovering the latent positions from the observed graph. Assuming certain natural symmetry and smoothness properties, we establish the uniform convergence of the log-likelihood of latent positions as the number of nodes grows. A consequence is that the maximum likelihood embedding converges on the true positions in a certain information-theoretic sense. Extensions of these results, to recovering distributions in the latent space, and so distributions over arbitrarily large graphs, will be treated in the sequel.
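To make the model and the embedding problem concrete, here is a minimal sketch, assuming a logistic link function and Gaussian latent positions; both are illustrative choices not specified in the abstract, and the function names are hypothetical. It simulates a graph from latent positions where the edge probability decreases with latent distance, and evaluates the log-likelihood of a candidate embedding against the observed adjacency matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def link_probability(dist, scale=1.0):
    # Edge probability as a decreasing function of latent distance.
    # The logistic form is one common choice, used here for illustration.
    return 1.0 / (1.0 + np.exp(dist / scale))

def pairwise_distances(positions):
    # Euclidean distances between all pairs of latent positions (n x d array).
    diffs = positions[:, None, :] - positions[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

def simulate_graph(positions, scale=1.0):
    # Draw a symmetric, simple adjacency matrix given latent positions.
    n = positions.shape[0]
    probs = link_probability(pairwise_distances(positions), scale)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(int)

def log_likelihood(positions, adj, scale=1.0):
    # Log-likelihood of candidate latent positions given the observed graph,
    # summed over distinct node pairs (each dyad is an independent Bernoulli).
    probs = link_probability(pairwise_distances(positions), scale)
    probs = np.clip(probs, 1e-12, 1 - 1e-12)
    iu = np.triu_indices(adj.shape[0], k=1)
    a, p = adj[iu], probs[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))

# Example: the true positions should score higher than a random embedding.
true_pos = rng.normal(size=(200, 2))
A = simulate_graph(true_pos)
print(log_likelihood(true_pos, A))
print(log_likelihood(rng.normal(size=(200, 2)), A))
```

A maximum likelihood embedding would maximize `log_likelihood` over candidate positions (e.g., by gradient ascent); the abstract's consistency result concerns how this maximizer behaves as the number of nodes grows.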