We propose a new weighted-average estimator for high-dimensional parameters in distributed learning systems, in which the weight assigned to each coordinate is proportional to the inverse of the variance of the local estimates of that coordinate. This strategy allows the new estimator to achieve a minimal mean squared error, comparable to that of state-of-the-art one-shot distributed learning methods, while maintaining remarkably low communication costs: each agent transmits only two vectors to the central server. As a result, the proposed method attains optimal statistical efficiency while substantially reducing communication overhead. We further demonstrate its effectiveness by establishing an error bound and the asymptotic properties of the estimator, and by examining its numerical performance on simulated examples and a real data analysis.
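The aggregation step described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact estimator: we assume each of the K agents sends the central server two p-dimensional vectors, its local coordinate-wise estimates and their estimated variances, and the server forms a coordinate-wise inverse-variance weighted average.

```python
import numpy as np

def aggregate(estimates, variances):
    """Inverse-variance weighted average across agents.

    estimates, variances: arrays of shape (K, p) -- the two vectors
    each of the K agents transmits to the central server.
    Returns the aggregated estimate of shape (p,).
    """
    weights = 1.0 / variances                      # inverse-variance weights
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per coordinate
    return (weights * estimates).sum(axis=0)       # coordinate-wise average

# Toy check with two agents and a 3-dimensional parameter:
est = np.array([[1.0, 2.0, 3.0],
                [3.0, 2.0, 1.0]])
var = np.array([[1.0, 1.0, 1.0],
                [1.0, 1.0, 3.0]])
theta_hat = aggregate(est, var)
```

Coordinates with equal local variances are averaged evenly, while the third coordinate down-weights the noisier agent, which is what drives the mean-squared-error gain over a plain average.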