Traditional maximum entropy and sparsity-based algorithms for analytic continuation often suffer from the ill-conditioned kernel matrix or demand tremendous computation time for parameter tuning. Here we propose a neural network method based on convex optimization, replacing the ill-posed inverse problem with a sequence of well-conditioned surrogate problems. After training, the learned optimizers yield high-quality solutions at low computational cost and achieve higher parameter efficiency than heuristic fully connected networks. Their output can also serve as a neural default model that improves the performance of the maximum entropy method. Our approach may be readily extended to other high-dimensional inverse problems via large-scale pretraining.
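To make the ill-conditioning claim concrete, the following is a minimal NumPy sketch (not taken from the paper; the inverse temperature, grids, and discretization are assumed for illustration). It builds the standard fermionic kernel K(τ, ω) = e^{-τω} / (1 + e^{-βω}) relating the imaginary-time Green's function to the real-frequency spectral function and inspects the singular values of the discretized kernel; their rapid decay is what makes the direct inversion of noisy data ill-posed.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's setup):
# G(tau) = \int d(omega) K(tau, omega) A(omega),
# with the fermionic kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)).

beta = 10.0                                # inverse temperature (assumed)
tau = np.linspace(0.0, beta, 200)          # imaginary-time grid (assumed)
omega = np.linspace(-10.0, 10.0, 400)      # real-frequency grid (assumed)
domega = omega[1] - omega[0]

# Discretized kernel matrix; the frequency integral is approximated by a Riemann sum.
K = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega)) * domega

# The singular values decay almost exponentially, so only a handful of spectral
# components are recoverable and inverting G = K @ A amplifies noise enormously.
s = np.linalg.svd(K, compute_uv=False)
effective_rank = np.sum(s > 1e-12 * s[0])
print(f"effective rank (s > 1e-12 * s_max): {effective_rank} of {s.size}")
```

This tiny effective rank is the reason regularized formulations (maximum entropy, sparsity priors, or the well-conditioned surrogate problems learned here) are needed in place of a direct inversion.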