Approximate message passing (AMP) is a low-cost iterative parameter-estimation technique for certain high-dimensional linear systems with non-Gaussian distributions. However, AMP applies only to independent and identically distributed (IID) transform matrices; for other matrix ensembles, especially ill-conditioned ones, it may become unreliable (e.g., perform poorly or even diverge). To address this difficulty, orthogonal/vector AMP (OAMP/VAMP) was proposed for general right-unitarily-invariant matrices. However, Bayes-optimal OAMP/VAMP requires a high-complexity linear minimum mean square error (MMSE) estimator, which limits its application to large-scale systems. To overcome the drawbacks of AMP and OAMP/VAMP, this paper proposes a memory AMP (MAMP), in which a long-memory matched filter is used for interference suppression. The complexity of MAMP is comparable to that of AMP. The asymptotic Gaussianity of the estimation errors in MAMP is guaranteed by the orthogonality principle. A state evolution is derived to asymptotically characterize the performance of MAMP, and based on this state evolution, the relaxation parameters and damping vector in MAMP are optimized. For all right-unitarily-invariant matrices, the optimized MAMP converges to the high-complexity OAMP/VAMP and is therefore Bayes-optimal if it has a unique fixed point. Finally, simulations verify the validity and accuracy of the theoretical results.
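To make the setting concrete, the following is a minimal toy sketch of *classical* AMP for sparse recovery from `y = A x + noise` with an IID Gaussian matrix `A` (the ensemble for which classical AMP is reliable, as noted above). This is not the paper's MAMP algorithm; the variable names, the soft-threshold denoiser, and the threshold choice are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding denoiser for sparse signals (illustrative choice)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20                          # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # IID Gaussian transform matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x, z = np.zeros(n), y.copy()
delta = m / n                                    # measurement ratio
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)         # effective noise-level estimate
    r = x + A.T @ z                              # matched-filter pseudo-data
    x_new = soft_threshold(r, tau)
    # Onsager correction: (1/delta) * residual * average denoiser derivative.
    onsager = (z / delta) * np.mean(np.abs(x_new) > 0)
    z = y - A @ x_new + onsager
    x = x_new

mse = np.mean((x - x_true) ** 2)
```

For ill-conditioned (non-IID) `A`, this iteration can diverge, which is the failure mode that motivates OAMP/VAMP and, in turn, the low-complexity MAMP proposed here.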