Parallel alternating direction method of multipliers (ADMM) algorithms have become popular in statistics and machine learning because they can efficiently handle problems with large sample sizes. However, existing parallel ADMM algorithms are all built on a consensus structure, which introduces a large number of auxiliary variables when the data are high-dimensional. In this paper, we focus on nonconvex penalized smooth quantile regression problems and develop a parallel linearized ADMM (LADMM) algorithm to solve them. Compared with existing parallel ADMM algorithms, our algorithm does not rely on a consensus structure, which significantly reduces the number of variables that must be updated at each iteration. It is worth noting that the solution produced by our algorithm is unchanged regardless of how the total sample is partitioned. Furthermore, under mild assumptions, we prove that the iterative sequence generated by the LADMM converges to a critical point of the nonconvex optimization problem. Numerical experiments on synthetic and real datasets demonstrate the feasibility and validity of the proposed algorithm.
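To make the consensus-free splitting concrete, the following is a minimal, illustrative sketch of one possible parallel linearized ADMM iteration for smoothed quantile regression, not the authors' exact algorithm. It assumes a uniform-kernel-smoothed check loss, an MCP penalty, and standard linearization step sizes; the function names (parallel_ladmm_sqr, mcp_prox, smoothed_check_prox) and all parameter defaults are hypothetical choices for illustration only.

```python
# Consensus-free parallel linearized ADMM sketch for smoothed quantile regression
# with an MCP penalty (illustrative assumptions; not the paper's exact method).
import numpy as np

def mcp_prox(z, lam, gamma, t):
    """Elementwise prox of the MCP penalty with threshold lam, concavity gamma,
    and step size t (requires gamma > t)."""
    soft = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
    inner = soft / (1.0 - t / gamma)               # solution on the concave part
    return np.where(np.abs(z) > gamma * lam, z, inner)

def smoothed_check_prox(v, tau, h, sigma):
    """Elementwise prox (step 1/sigma) of the uniform-kernel-smoothed check loss,
    whose derivative is tau on [h, inf), tau - 1 on (-inf, -h], linear in between."""
    r_right = v - tau / sigma                       # stationary point, right piece
    r_left = v - (tau - 1.0) / sigma                # stationary point, left piece
    r_mid = (sigma * v - (tau - 0.5)) / (sigma + 1.0 / (2.0 * h))
    return np.where(r_right >= h, r_right, np.where(r_left <= -h, r_left, r_mid))

def parallel_ladmm_sqr(X_blocks, y_blocks, tau=0.5, lam=0.1, gamma=3.0,
                       h=0.05, sigma=1.0, n_iter=500):
    """Minimize sum_m L_h(y_m - X_m beta) + MCP(beta) over sample blocks m.
    Only beta, the per-block residuals r_m, and the per-block multipliers u_m
    are updated; no per-block consensus copies of beta are created."""
    p = X_blocks[0].shape[1]
    beta = np.zeros(p)
    r = [np.zeros(X.shape[0]) for X in X_blocks]
    u = [np.zeros(X.shape[0]) for X in X_blocks]
    # Linearization constant: an upper bound on sigma * ||X||_2^2 keeps the
    # beta step a single prox of the penalty.
    eta = sigma * sum(np.linalg.norm(X, 2) ** 2 for X in X_blocks)
    for _ in range(n_iter):
        # Residual updates: embarrassingly parallel across sample blocks.
        for m, (X, y) in enumerate(zip(X_blocks, y_blocks)):
            v = y - X @ beta - u[m] / sigma
            r[m] = smoothed_check_prox(v, tau, h, sigma)
        # Linearized beta update: one gradient step on the coupling term,
        # followed by the MCP prox.
        grad = sum(X.T @ (u[m] + sigma * (X @ beta + r[m] - y))
                   for m, (X, y) in enumerate(zip(X_blocks, y_blocks)))
        beta = mcp_prox(beta - grad / eta, lam, gamma, 1.0 / eta)
        # Dual updates, also parallel across blocks.
        for m, (X, y) in enumerate(zip(X_blocks, y_blocks)):
            u[m] = u[m] + sigma * (X @ beta + r[m] - y)
    return beta
```

The sketch mirrors the point made in the abstract: each sample block m carries only its own residual r_m and multiplier u_m, so the per-block updates can run in parallel without introducing consensus copies of the regression coefficients.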