We consider the symmetric binary perceptron model, a simple model of neural networks that has attracted significant attention in the statistical physics, information theory, and probability theory communities, with recent connections made to the performance of learning algorithms in Baldassi et al. '15. We establish that the partition function of this model, normalized by its expected value, converges to a lognormal distribution. As a consequence, this allows us to establish several conjectures for this model: (i) it proves the contiguity conjecture of Aubin et al. '19 between the planted and unplanted models in the satisfiable regime; (ii) it establishes the sharp threshold conjecture; (iii) it proves the frozen 1-RSB conjecture in the symmetric case, first conjectured by Krauth-M\'ezard '89 in the asymmetric case. In recent work, Perkins-Xu '21 also established the last two conjectures by proving that the partition function concentrates on an exponential scale, under an analytical assumption on a real-valued function. This left open the contiguity conjecture and the lognormal limit characterization, which are established here unconditionally, with the analytical assumption verified. In particular, our proof technique relies on a dense counterpart of the small graph conditioning method, which was developed for sparse models in the celebrated work of Robinson and Wormald.
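For concreteness, here is a minimal sketch of the model and the shape of the limit statement. The notation below ($g_a$ for the Gaussian disorder vectors, $\kappa$ for the margin, $\alpha$ for the constraint density, $\mu,\tau^2$ for the parameters of the limiting Gaussian) is ours and is only schematic; the exact constants are as given in the paper.

% Symmetric binary perceptron: n binary weights, m = \alpha n constraints,
% i.i.d. standard Gaussian vectors g_1, ..., g_m in R^n, margin \kappa > 0.
% The partition function counts the satisfying weight assignments:
\[
  Z_n \;=\; \sum_{\sigma \in \{-1,+1\}^n} \;\prod_{a=1}^{m}
  \mathbf{1}\!\left\{ \left| \frac{\langle g_a, \sigma \rangle}{\sqrt{n}} \right| \le \kappa \right\}.
\]
% The lognormal limit asserts convergence in distribution of the normalized
% partition function to the exponential of a Gaussian:
\[
  \frac{Z_n}{\mathbb{E}\, Z_n} \;\xrightarrow{\ d\ }\; e^{\mathcal{N}(\mu,\, \tau^2)},
\]
% for some \mu and \tau^2 depending on (\alpha, \kappa). Since the left-hand
% side has expectation one, consistency with uniform integrability would force
% \mu = -\tau^2/2; we flag this as a schematic reading, not the paper's statement.

Contiguity between the planted and unplanted models then follows from this limit because the limiting random variable is strictly positive almost surely, so no event can have vanishing probability under one model and nonvanishing probability under the other.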