Guessing random additive noise decoding (GRAND) is a maximum likelihood (ML) decoding method that identifies the noise effect corrupting a code-word of an arbitrary code-book. In a joint detection and decoding framework, this work demonstrates how GRAND can leverage crude soft information from received symbols, together with channel state information, to generate, through guesswork, soft bit-reliability outputs in the form of log-likelihood ratios (LLRs). The LLRs are generated via successive computations of Euclidean-distance metrics for candidate noise-recovered words. Because the entropy of the noise is much smaller than that of the information bits, a small number of noise-effect guesses generally suffices to hit a code-word, which allows LLRs to be generated for the critical bits; LLR saturation is applied to the remaining bits. In an iterative (turbo) mode, the LLRs generated at a given soft-input, soft-output GRAND iteration serve as enhanced a priori information that adapts the noise-sequence guess ordering in the subsequent iteration. Simulations demonstrate that a few turbo-GRAND iterations match the performance of ML-detection-based soft-GRAND in both AWGN and Rayleigh fading channels, at a complexity that, on average, grows linearly rather than exponentially with the number of symbols.
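As an illustrative sketch (not the paper's implementation), the core GRAND idea of guessing noise effects in maximum-likelihood order can be shown for a toy (7,4) Hamming code over a binary symmetric channel, where the most-likely-first order is simply increasing Hamming weight of the error pattern:

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (toy example, not from the paper).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(word):
    # A word is in the code-book iff its syndrome is zero.
    return not np.any(H @ word % 2)

def grand_decode(received, max_weight=3):
    """Guess noise patterns in order of increasing Hamming weight
    (the maximum-likelihood order for a BSC) and return the first
    noise-recovered candidate that is a code-word, plus the number
    of guesses made before hitting it."""
    n = len(received)
    guesses = 0
    for w in range(max_weight + 1):
        for positions in itertools.combinations(range(n), w):
            guesses += 1
            candidate = received.copy()
            candidate[list(positions)] ^= 1  # invert the guessed noise effect
            if is_codeword(candidate):
                return candidate, guesses
    return None, guesses  # abandon guessing beyond max_weight
```

For example, receiving the all-zero code-word with its third bit flipped is recovered after four guesses (the weight-0 query plus three weight-1 queries), illustrating why low-entropy noise keeps the average guess count small.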
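The Euclidean-metric LLR generation with saturation described in the abstract might be sketched as follows, under a max-log approximation and an assumed BPSK mapping (bit 0 → +1, bit 1 → −1); the function name and the `llr_sat` parameter are illustrative, not taken from the paper:

```python
import numpy as np

def soft_grand_llrs(y, candidates, sigma2, llr_sat=10.0):
    """Max-log LLRs from Euclidean distances of candidate code-words.

    y          : received real-valued BPSK symbols
    candidates : binary candidate code-words found by noise guessing
    sigma2     : noise variance
    llr_sat    : saturation magnitude for bits where only one bit
                 hypothesis appears among the candidates
    """
    n = len(y)
    # Euclidean-distance metric of each candidate noise-recovered word.
    dists = [np.sum((y - (1 - 2 * np.asarray(c))) ** 2) for c in candidates]
    llrs = np.zeros(n)
    for i in range(n):
        d0 = [d for c, d in zip(candidates, dists) if c[i] == 0]
        d1 = [d for c, d in zip(candidates, dists) if c[i] == 1]
        if d0 and d1:
            # Max-log approximation; positive LLR favours bit 0.
            llrs[i] = (min(d1) - min(d0)) / (2 * sigma2)
        else:
            # Only one hypothesis observed for this bit: saturate toward it.
            llrs[i] = llr_sat if d0 else -llr_sat
    return llrs
```

In a turbo loop, these LLRs would then be fed back as a priori information to reorder the noise-sequence guesses for the next iteration.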