Modeling strong gravitational lenses in order to quantify the distortions in the images of background sources and to reconstruct the mass density in the foreground lenses has been a difficult computational challenge. As the quality of gravitational lens images increases, the task of fully exploiting the information they contain becomes computationally and algorithmically more difficult. In this work, we use a neural network based on the Recurrent Inference Machine (RIM) to simultaneously reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps. The method iteratively reconstructs the model parameters (the image of the source and a pixelated density map) by learning to optimize the likelihood of the data under a physical model (a ray-tracing simulation), with regularization provided by a prior that the neural network learns implicitly from its training data. Compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions, which we demonstrate by using realistic lensing galaxies taken from the IllustrisTNG cosmological hydrodynamic simulation.
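To make the iterative scheme concrete, the sketch below shows the generic RIM-style structure: at each step, the gradient of the data likelihood under a forward physics model is fed to an update rule that refines the current estimate. This is a minimal toy, not the paper's implementation: the hypothetical matrix `A` stands in for the ray-tracing simulation, and `rim_update` replaces the trained recurrent neural unit with a plain gradient step plus momentum.

```python
import numpy as np

# Hypothetical linear "physics" operator standing in for the ray-tracing
# simulation; a well-conditioned random perturbation of the identity.
rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))

x_true = rng.random(n)   # unknown "source" to recover
y = A @ x_true           # observed (noiseless, for simplicity) data


def likelihood_grad(x):
    """Gradient of the Gaussian log-likelihood log p(y|x) with respect to x."""
    return A.T @ (y - A @ x)


def rim_update(x, grad, h, step=0.1, momentum=0.9):
    """Stand-in for the learned recurrent unit g_theta.

    A trained RIM maps (current estimate, likelihood gradient, hidden state)
    to an update; here we use gradient ascent with momentum instead.
    """
    h = momentum * h + step * grad
    return x + h, h


x = np.zeros(n)  # initial estimate
h = np.zeros(n)  # "hidden state" (here just a momentum buffer)
for _ in range(200):  # T recurrent iterations
    x, h = rim_update(x, likelihood_grad(x), h)

residual = float(np.linalg.norm(x - x_true))
print(residual)  # shrinks toward zero as iterations proceed
```

In the actual method, the update rule is a neural network trained on many (source, density, observation) triplets, so its updates encode an implicit prior over plausible reconstructions rather than a generic optimizer step.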