We conduct a systematic study of solving the learning parity with noise (LPN) problem using neural networks. Our main contribution is the design of families of two-layer neural networks that, in practice, outperform classical algorithms in high-noise, low-dimension regimes. We consider three settings in which the number of LPN samples is abundant, very limited, or in between. In each setting we provide neural network models that solve LPN as efficiently as possible. For some settings we also provide theoretical analyses that explain the rationale behind the design of our models. Compared with the previous experiments of Esser, K\"ubler, and May (CRYPTO 2017), for dimension $n = 26$ and noise rate $\tau = 0.498$, the ``Guess-then-Gaussian-elimination'' algorithm takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66 minutes on 8 GPUs. Our algorithm can also be plugged into hybrid algorithms for solving medium- or large-dimension LPN instances.
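To make the problem concrete, the following is a minimal sketch (not code from the paper; all names are illustrative) of the LPN setup: samples are pairs $(a, \langle a, s\rangle + e \bmod 2)$ with secret $s \in \mathbb{F}_2^n$, uniform $a$, and Bernoulli($\tau$) noise $e$. For tiny $n$, exhaustive search recovers the maximum-likelihood secret, which is the baseline any solver competes against.

```python
import numpy as np

def lpn_samples(s, m, tau, rng):
    """Generate m LPN samples for secret s with noise rate tau."""
    n = s.shape[0]
    A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)  # uniform a's, one per row
    e = (rng.random(m) < tau).astype(np.uint8)           # Bernoulli(tau) noise bits
    b = (A @ s.astype(np.int64) + e) % 2                 # noisy inner products mod 2
    return A, b.astype(np.uint8)

def brute_force_secret(A, b):
    """Exhaustive search (feasible only for tiny n): the candidate whose
    predictions disagree with b least often is the ML estimate of the secret."""
    m, n = A.shape
    best, best_err = None, m + 1
    for x in range(2 ** n):
        cand = np.array([(x >> i) & 1 for i in range(n)], dtype=np.int64)
        err = int(np.sum((A @ cand) % 2 != b))
        if err < best_err:
            best, best_err = cand.astype(np.uint8), err
    return best

# With m >> n samples and tau well below 1/2, the true secret's disagreement
# count (~ m*tau) separates cleanly from every wrong candidate's (~ m/2).
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=10, dtype=np.uint8)
A, b = lpn_samples(s, m=5000, tau=0.2, rng=rng)
recovered = brute_force_secret(A, b)
```

At the high noise rates studied in the paper (e.g. $\tau = 0.498$), this separation nearly vanishes, which is what makes the regime hard and motivates the learned solvers.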