Machine learning-based channel estimation has recently attracted considerable attention, and its performance has been validated through simulation experiments. However, little attention has been paid to its theoretical performance analysis. In this paper, we investigate the mean square error (MSE) performance of machine learning-based channel estimation, employing hypothesis testing to analyze its MSE upper bound. Furthermore, we build a statistical model for the hypothesis test that holds when a linear learning module with a low input dimension is used in machine learning-based channel estimation, and we derive an explicit analytical relation between the size of the training data set and the estimation performance. We then simulate machine learning-based channel estimation in orthogonal frequency division multiplexing (OFDM) systems to verify our analytical results. Finally, we discuss design considerations for the situation where only limited training data are available; in this case, our analytical results can be used to assess the performance of machine learning-based channel estimation and to support its design.
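To make the training-size/performance relation discussed above concrete, the following is a minimal sketch, not the paper's method: it assumes a complex-valued linear learning module fitted by regularized least squares on synthetic Rayleigh-fading pilot observations, and reports test MSE as the number of training samples grows. The pilot count, SNR, regularization, and training sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gen_data(n, n_pilots=8, snr_db=10):
    """Generate (noisy LS pilot estimate, true channel) pairs at pilot subcarriers."""
    # Rayleigh-fading frequency response at the pilot subcarriers (unit average power)
    h = (rng.standard_normal((n, n_pilots)) + 1j * rng.standard_normal((n, n_pilots))) / np.sqrt(2)
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (
        rng.standard_normal((n, n_pilots)) + 1j * rng.standard_normal((n, n_pilots))
    )
    # With unit-amplitude pilots, the raw least-squares estimate is simply h + noise
    return h + noise, h

def fit_linear(x_train, h_train, reg=1e-3):
    """Regularized least-squares fit of a linear module W mapping x -> h."""
    A = x_train.conj().T @ x_train + reg * np.eye(x_train.shape[1])
    B = x_train.conj().T @ h_train
    return np.linalg.solve(A, B)

x_test, h_test = gen_data(10_000)
for n_train in (10, 50, 200, 1000, 5000):
    x_tr, h_tr = gen_data(n_train)
    W = fit_linear(x_tr, h_tr)
    mse = np.mean(np.abs(x_test @ W - h_test) ** 2)
    print(f"training samples = {n_train:5d}  test MSE = {mse:.4f}")
```

Under these assumptions, the test MSE decreases toward the noise-limited floor as the training set grows, which is the kind of trend the analytical relation in the paper is meant to characterize.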