We give a quantum reduction from finding short codewords in a random linear code to decoding for the Hamming metric. This is the first time such a reduction (classical or quantum) has been obtained. Our reduction adapts to linear codes Stehl\'{e}-Steinfeld-Tanaka-Xagawa's re-interpretation of Regev's quantum reduction from finding short lattice vectors to solving the Closest Vector Problem. The Hamming metric is much coarser than the Euclidean metric, and this adaptation required several new ingredients to make it work. For instance, in order to have a meaningful reduction it is necessary in the Hamming metric to choose a very large decoding radius, and in many cases this requires going beyond the radius where decoding is unique. Another crucial step for the analysis of the reduction is the choice of the errors that are fed to the decoding algorithm. For lattices, errors are usually sampled according to a Gaussian distribution. However, it turns out that the Bernoulli distribution (the analogue for codes of the Gaussian) is too spread out and cannot be used for the reduction with codes. Instead we choose here the uniform distribution over errors of a fixed weight, bring in tools from orthogonal polynomials to perform the analysis, and add an amplitude amplification step to obtain the aforementioned result.
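The contrast between the two error distributions can be sketched as follows. This is a minimal illustration, not code from the paper: a Bernoulli error has independent coordinates and its Hamming weight fluctuates around its mean, whereas the distribution used in the reduction puts all its mass on errors of one fixed weight. Function names are illustrative.

```python
import random

def bernoulli_error(n, p, rng=random):
    """Sample an error vector in F_2^n with i.i.d. Bernoulli(p) coordinates.

    Its Hamming weight is binomially distributed, hence spread around n*p.
    """
    return [1 if rng.random() < p else 0 for _ in range(n)]

def fixed_weight_error(n, w, rng=random):
    """Sample a uniform error vector in F_2^n of Hamming weight exactly w."""
    support = rng.sample(range(n), w)  # choose w distinct error positions
    e = [0] * n
    for i in support:
        e[i] = 1
    return e

if __name__ == "__main__":
    rng = random.Random(0)
    n, w = 1000, 110
    p = w / n  # Bernoulli parameter matched to the same expected weight
    bern_weights = [sum(bernoulli_error(n, p, rng)) for _ in range(2000)]
    # Bernoulli weights vary from sample to sample; fixed-weight errors do not.
    print("Bernoulli weight range:", min(bern_weights), "-", max(bern_weights))
    print("Fixed-weight sample weight:", sum(fixed_weight_error(n, w, rng)))
```

Every sample from `fixed_weight_error` has weight exactly `w`, which is the concentration property the reduction exploits.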