It is well known that for Gaussian channels, a nearest neighbor decoding rule, which seeks the codeword at minimum Euclidean distance from the received channel output vector, is the maximum likelihood solution and hence capacity-achieving. Nearest neighbor decoding remains a convenient yet mismatched solution for general channels, and the key message of this paper is that the performance of nearest neighbor decoding can be improved by generalizing its decoding metric to incorporate channel-state-dependent output processing and codeword scaling. Using the generalized mutual information, which is a lower bound to the mismatched capacity under an independent and identically distributed codebook ensemble, as the performance measure, this paper establishes the optimal generalized nearest neighbor decoding rule under Gaussian channel inputs. Several restricted forms of the generalized nearest neighbor decoding rule are also derived and compared with existing solutions. The results are illustrated through several case studies for fading channels with imperfect receiver channel state information and for channels with quantization effects.
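As a point of reference, the baseline nearest neighbor rule described above can be sketched in a few lines: given a codebook and a received vector, it returns the index of the codeword minimizing the squared Euclidean distance. This is only a minimal illustration of the classical (unmodified) rule, not the generalized metric developed in the paper; the codebook, noise level, and function name are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_decode(y, codebook):
    """Return the index of the codeword at minimum Euclidean
    distance from the received vector y (classical NN rule)."""
    dists = np.sum((codebook - y) ** 2, axis=1)
    return int(np.argmin(dists))

# Toy setup (illustrative): 4 Gaussian codewords of length 8,
# transmitted over an additive Gaussian noise channel.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((4, 8))
sent = 2
y = codebook[sent] + 0.1 * rng.standard_normal(8)  # received vector
print(nearest_neighbor_decode(y, codebook))
```

For Gaussian noise this rule coincides with maximum likelihood decoding; the paper's generalization replaces the plain distance with a metric that also applies state-dependent output processing and codeword scaling.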