The concept of the smoothing parameter plays a crucial role in both lattice-based and code-based cryptography, primarily because adding noise yields nearly uniform distributions. Recent work by Pathegama and Barg determined the optimal smoothing bound for random codes under R\'enyi divergence of any order $\alpha \in (1, \infty)$ \cite{pathegama2024r}. Since random codes lack efficient encoding and decoding algorithms, we introduce additional structure into the code ensembles under consideration. Specifically, this paper presents a novel derivation of the smoothing bound for random linear codes under R\'enyi divergence of the same order, which remains optimal for any $\alpha \in (1,\infty)$. Under KL divergence, we extend this framework from random linear codes to random self-dual codes, and subsequently to random quasi-cyclic codes, incorporating progressively more structure. As an application, we derive an average-case to average-case reduction from the Learning Parity with Noise (LPN) problem to the average-case decoding problem. This reduction matches the parameter regime of \cite{debris2022worst}, but it relies on R\'enyi divergence and works directly with Bernoulli noise, rather than combining ball noise with Bernoulli noise.
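Throughout, for distributions $P$ and $Q$ on a finite set $\mathcal{X}$, the R\'enyi divergence of order $\alpha \in (1,\infty)$ refers to the standard quantity
\[
  D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \,\log \sum_{x \in \mathcal{X}} P(x)^{\alpha}\, Q(x)^{1-\alpha},
\]
which recovers the KL divergence $D(P \,\|\, Q) = \sum_{x} P(x)\log\frac{P(x)}{Q(x)}$ in the limit $\alpha \to 1$.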