Imaging Atmospheric Cherenkov Telescopes (IACTs) of the TAIGA astrophysical complex enable observation of high-energy gamma radiation, helping to study many astrophysical objects and processes. TAIGA-IACT makes it possible to select gamma quanta from the total cosmic-ray flux and to reconstruct their primary parameters, such as energy and direction of arrival. The traditional method of processing the resulting images is image parameterization, the so-called Hillas parameters method. At present, machine learning methods, in particular deep learning methods, are actively used for IACT image processing. This paper presents an analysis of simulated Monte Carlo images by several deep learning methods for a single telescope (mono mode) and multiple IACT telescopes (stereo mode). The quality of energy reconstruction was estimated and the energy spectra were analyzed using several types of neural networks. The results obtained with the developed methods were also compared with the results of traditional methods based on the Hillas parameters.