It is well known that for linear Gaussian channels, a nearest neighbor decoding rule, which seeks the minimum Euclidean distance between a codeword and the received channel output vector, is the maximum likelihood solution and hence capacity-achieving. Nearest neighbor decoding remains a convenient yet mismatched solution for general channels, and the key message of this paper is that the performance of nearest neighbor decoding can be improved by generalizing its decoding metric to incorporate channel-state-dependent output processing and codeword scaling. Using the generalized mutual information, which is a lower bound to the mismatched capacity under an independent and identically distributed codebook ensemble, as the performance measure, this paper establishes the optimal generalized nearest neighbor decoding rule under Gaussian channel inputs. Several suboptimal but reduced-complexity generalized nearest neighbor decoding rules are also derived and compared with existing solutions. The results are illustrated through several case studies for channels with nonlinear effects, and for fading channels with receiver channel state information or with pilot-assisted training.
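As a concrete illustration of the decision rules described above, the following is a minimal sketch (not the paper's implementation) of classical nearest neighbor decoding alongside a generalized variant that applies an output-processing map and a codeword scaling factor before computing Euclidean distances. The function names, the identity default for the processing map, and the toy codebook are all illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_decode(codebook, y):
    """Classical rule: return the index of the codeword x minimizing ||y - x||^2.

    codebook: (M, n) array of M length-n codewords (toy setup).
    y: length-n received channel output vector.
    """
    dists = np.linalg.norm(codebook - y, axis=1)
    return int(np.argmin(dists))

def generalized_nn_decode(codebook, y, process=lambda y: y, scale=1.0):
    """Generalized rule: minimize ||process(y) - scale * x||^2.

    `process` models channel-state-dependent output processing and
    `scale` models codeword scaling; both defaults (identity, 1.0)
    recover the classical rule above.
    """
    z = process(np.asarray(y, dtype=float))
    dists = np.linalg.norm(scale * codebook - z, axis=1)
    return int(np.argmin(dists))

# Toy example: two antipodal codewords in R^2.
codebook = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([0.9, 1.2])  # noisy observation near codeword 0
print(nearest_neighbor_decode(codebook, y))                    # → 0
print(generalized_nn_decode(codebook, y, scale=0.5))           # → 0
```

With the identity processing map and unit scaling, the generalized rule reduces exactly to minimum-Euclidean-distance decoding; the point of the generalization is that, for mismatched channels, suitable choices of `process` and `scale` can improve the achievable rate.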