Neural quantum states efficiently represent many-body wavefunctions with neural networks, but the cost of Monte Carlo sampling limits their scaling to large system sizes. Here we address this challenge by combining sparse Boltzmann machine architectures with probabilistic computing hardware. We implement a probabilistic computer on field-programmable gate arrays (FPGAs) and use it as a fast sampler for energy-based neural quantum states. For the two-dimensional transverse-field Ising model at criticality, we obtain accurate ground-state energies for lattices up to 80 $\times$ 80 (6400 spins) using a custom multi-FPGA cluster. Furthermore, we introduce a dual-sampling algorithm to train deep Boltzmann machines, replacing intractable marginalization with conditional sampling over auxiliary layers. This enables the training of sparse deep models and improves parameter efficiency relative to shallow networks. Using this algorithm, we train deep Boltzmann machines for a 35 $\times$ 35 system (1225 spins). Together, these results demonstrate that probabilistic hardware can overcome the sampling bottleneck in variational simulation of quantum many-body systems, opening a path to larger system sizes and deeper variational architectures.
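The dual-sampling idea above (replacing marginalization over hidden layers with conditional sampling) can be illustrated with a minimal block-Gibbs sketch. This is not the paper's implementation; the toy model, layer sizes, and weight initialization below are all illustrative assumptions for a small two-hidden-layer Boltzmann machine with 0/1 units.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy deep Boltzmann machine: visible layer v, hidden layers h1 and h2.
# Weights W1 (v-h1) and W2 (h1-h2); biases omitted for brevity.
n_v, n_h1, n_h2 = 8, 6, 4
W1 = rng.normal(scale=0.1, size=(n_v, n_h1))
W2 = rng.normal(scale=0.1, size=(n_h1, n_h2))

def gibbs_sweep(v, h1, h2):
    """One sweep of conditional (block Gibbs) sampling.

    Rather than marginalizing over h1 and h2 analytically -- intractable
    for deep models -- each layer is resampled from its conditional
    distribution given its neighboring layers.
    """
    # h1 conditions on both v (below) and h2 (above).
    h1 = (rng.random(n_h1) < sigmoid(v @ W1 + W2 @ h2)).astype(float)
    # h2 conditions only on h1.
    h2 = (rng.random(n_h2) < sigmoid(h1 @ W2)).astype(float)
    # v conditions only on h1.
    v = (rng.random(n_v) < sigmoid(W1 @ h1)).astype(float)
    return v, h1, h2

# Run the chain from a random binary configuration.
v = rng.integers(0, 2, n_v).astype(float)
h1 = rng.integers(0, 2, n_h1).astype(float)
h2 = rng.integers(0, 2, n_h2).astype(float)
for _ in range(100):
    v, h1, h2 = gibbs_sweep(v, h1, h2)
print(v)  # a visible configuration drawn (approximately) from the model
```

On probabilistic hardware, each conditional Bernoulli update of this kind maps naturally onto a p-bit, which is what makes the sampler fast; the NumPy loop here only demonstrates the sampling logic.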