The stochastic gradient descent (SGD) algorithm is an effective learning strategy for building a latent factor analysis (LFA) model on a high-dimensional and incomplete (HDI) matrix. A particle swarm optimization (PSO) algorithm is commonly adopted to make an SGD-based LFA model's hyper-parameters, i.e., the learning rate and regularization coefficient, self-adaptive. However, a standard PSO algorithm may suffer accuracy loss caused by premature convergence. To address this issue, this paper incorporates more historical information into each particle's evolutionary process, following the principle of the generalized-momentum (GM) method, to avoid premature convergence, thereby achieving a novel GM-incorporated PSO (GM-PSO). On this basis, a GM-PSO-based LFA (GMPL) model is further developed to implement efficient self-adaptation of hyper-parameters. Experimental results on three HDI matrices demonstrate that the GMPL model achieves higher prediction accuracy for missing-data estimation in industrial applications.
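The core idea, incorporating extra historical information into each particle's velocity update in the spirit of a generalized-momentum method, can be illustrated with a minimal sketch. The exact form below (a single extra history term with weight `gamma`, the coefficient values, and the sphere test function) is an illustrative assumption, not the paper's formulation:

```python
import numpy as np

def gm_pso(obj, dim=2, n_particles=10, iters=200,
           w=0.6, c1=1.5, c2=1.5, gamma=0.2, seed=0):
    """Sketch of a generalized-momentum-style PSO.

    On top of the standard velocity update, each particle retains
    its velocity from one step earlier (v_prev) and mixes it in
    with weight gamma -- this extra history term stands in for the
    GM mechanism and is an assumption for illustration only.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    v_prev = np.zeros_like(v)                        # one step of extra history
    pbest = x.copy()
    pbest_f = np.array([obj(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v_new = (w * v + gamma * v_prev              # GM: extra history term
                 + c1 * r1 * (pbest - x)             # cognitive pull
                 + c2 * r2 * (gbest - x))            # social pull
        v_prev, v = v, v_new
        x = x + v
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Toy fitness: minimize the sphere function. In the GMPL setting the
# fitness would instead be a validation error of an SGD-trained LFA
# model at the (learning rate, regularization) encoded by a particle.
sphere = lambda p: float(np.sum(p ** 2))
best, best_f = gm_pso(sphere)
```

In the GMPL context, each particle would encode a candidate (learning rate, regularization coefficient) pair, and the extra history term helps the swarm escape premature stagnation around an early global-best position.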