Traditional maximum entropy and sparsity-based algorithms for analytic continuation often suffer from the ill-conditioned kernel matrix or demand tremendous computation time for parameter tuning. Here we propose a neural network method that, via convex optimization, replaces the ill-posed inverse problem with a sequence of well-conditioned surrogate problems. After training, the learned optimizers give high-quality solutions at low time cost and achieve higher parameter efficiency than heuristic fully connected networks. The output can also serve as a neural default model that improves the performance of the maximum entropy method. Our method may be readily extended to other high-dimensional inverse problems via large-scale pretraining.
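As a rough illustration of why the raw inversion is ill-conditioned and what a well-conditioned surrogate problem can look like, the sketch below contrasts a toy fermionic kernel with a ridge-regularized (Tikhonov) reformulation. All names, the kernel form, and the regularization strength `lam` are illustrative assumptions for this sketch; the paper's actual surrogate problems and learned optimizer are not reproduced here.

```python
import numpy as np

# Toy fermionic kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega));
# its singular values decay rapidly, which makes the direct inversion ill-posed.
# (Illustrative assumption, not the kernel or discretization used in the paper.)
beta, n_tau, n_omega = 10.0, 64, 200
tau = np.linspace(0.0, beta, n_tau)
omega = np.linspace(-8.0, 8.0, n_omega)
K = np.exp(-tau[:, None] * omega[None, :]) / (1.0 + np.exp(-beta * omega[None, :]))

# Synthetic two-peak spectrum and noisy imaginary-time data G = K A + noise.
A_true = np.exp(-(omega - 2.0) ** 2) + np.exp(-(omega + 2.0) ** 2)
A_true /= np.trapz(A_true, omega)
d_omega = omega[1] - omega[0]
G = K @ A_true * d_omega + 1e-4 * np.random.randn(n_tau)

# Direct inversion: the condition number is astronomical.
print("cond(K) =", np.linalg.cond(K))

# Well-conditioned surrogate: ridge-regularized least squares
# (K^T K + lam I) A = K^T G, whose conditioning is controlled by lam.
lam = 1e-3  # assumed value for illustration only
M = K.T @ K + lam * np.eye(n_omega)
print("cond(K^T K + lam I) =", np.linalg.cond(M))
A_surrogate = np.linalg.solve(M, K.T @ G)
```

In the same spirit, a learned optimizer can be trained to map the data to a solution through a sequence of such well-conditioned subproblems rather than attempting the ill-posed inversion directly.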