In this work, we investigate guessing random additive noise decoding (GRAND) with quantized soft input. First, we analyze the achievable rate of ordered reliability bits GRAND (ORBGRAND), which uses the rank order of the reliabilities as quantized soft information, and we show that multi-line ORBGRAND can approach capacity at any signal-to-noise ratio (SNR). We then introduce discretized soft GRAND (DSGRAND), which uses information from a conventional quantizer. Simulation results show that DSGRAND closely approximates maximum-likelihood (ML) decoding with a number of quantization bits in line with current soft-decoding implementations. For a (128,106) CRC-concatenated polar code, with only 3 bits of quantized soft information basic ORBGRAND matches or outperforms CRC-aided successive cancellation list (CA-SCL) decoding with a codeword list size of 64, while DSGRAND outperforms CA-SCL decoding with a list size of 128 codewords. Both ORBGRAND and DSGRAND exhibit approximately an order of magnitude lower average complexity and two orders of magnitude smaller memory requirements than CA-SCL.
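To make the pattern-testing principle behind basic ORBGRAND concrete, the following is a minimal Python sketch (not the paper's implementation): it decodes a toy (8,4) extended Hamming code by testing putative error patterns in order of increasing logistic weight, i.e. the sum of the reliability ranks of the flipped bits. The parity-check matrix `H`, the function name `orbgrand_decode`, and the `max_flips` cap are illustrative assumptions.

```python
import itertools
import numpy as np

# Parity-check matrix of a small (8,4) extended Hamming code, used here
# purely as a toy example for syndrome checking.
H = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
], dtype=int)

def orbgrand_decode(hard_bits, reliabilities, H, max_flips=3):
    """Basic ORBGRAND sketch: test error patterns in ascending order of
    logistic weight (sum of the reliability ranks of the flipped bits)
    until the syndrome check passes. Returns a codeword or None."""
    n = len(hard_bits)
    # Rank positions from least to most reliable; rank 1 = least reliable.
    order = np.argsort(reliabilities)
    # Enumerate flip patterns over ranks, sorted by logistic weight
    # (the empty pattern, weight 0, is tested first).
    candidates = [p for w in range(max_flips + 1)
                  for p in itertools.combinations(range(n), w)]
    candidates.sort(key=lambda p: sum(i + 1 for i in p))
    for pat in candidates:
        trial = hard_bits.copy()
        for rank_idx in pat:
            trial[order[rank_idx]] ^= 1  # flip the rank_idx-th least reliable bit
        if not (H @ trial % 2).any():    # zero syndrome: codeword found
            return trial
    return None
```

In this toy setup, a received word with a single flip in its least reliable position is corrected on the second query (the first nonzero-weight pattern tried).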