We trained deep neural networks (DNNs) to reproduce, as a function of the neutrino energy density, flux, and fluid velocity, the Eddington tensor for neutrinos obtained in our first-principles core-collapse supernova (CCSN) simulations. Although the moment method, one of the most popular approximations for neutrino transport, requires a closure relation, none of the analytical closure relations commonly employed in the literature captures all aspects of the neutrino angular distribution in momentum space. In this paper, we developed a DNN-based closure relation that takes the neutrino energy density, flux, and fluid velocity as input and returns the Eddington tensor as output. We consider two kinds of DNNs: a conventional DNN, which we call a component-wise neural network (CWNN), and a tensor-basis neural network (TBNN). We found that the diagonal components of the Eddington tensor are reproduced better by the DNNs than by the M1 closure relation, especially at low to intermediate energies. For the off-diagonal components, the DNNs agree with the Boltzmann solver better than the M1 closure does at large radii. Between the two DNNs, the TBNN performs slightly better than the CWNN. These DNN-based closure relations, which reproduce the Eddington tensor well at much lower cost, open up a new possibility for the moment method.
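To make the closure idea concrete, the following is a minimal, hypothetical sketch of a tensor-basis network (TBNN)-style closure in NumPy. It assumes the Eddington tensor is expanded as a sum of symmetric basis tensors built from the flux and velocity directions, with scalar coefficients predicted by a small multilayer perceptron from rotational invariants; the basis choice, invariant choice, network size, and the (random, untrained) weights are all illustrative assumptions, not the architecture of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis_tensors(F, v):
    """Symmetric basis tensors built from flux and velocity directions (assumed basis)."""
    eye = np.eye(3)
    f = F / (np.linalg.norm(F) + 1e-30)  # unit flux direction
    u = v / (np.linalg.norm(v) + 1e-30)  # unit velocity direction
    return [
        eye / 3.0,                                 # isotropic part
        np.outer(f, f),                            # flux-aligned part
        np.outer(u, u),                            # velocity-aligned part
        0.5 * (np.outer(f, u) + np.outer(u, f)),   # symmetrized cross term
    ]

def invariants(E, F, v):
    """Rotationally invariant scalars fed to the coefficient network (assumed set)."""
    nF, nv = np.linalg.norm(F), np.linalg.norm(v)
    return np.array([
        nF / (E + 1e-30),                 # flux factor
        nv,                               # fluid speed
        np.dot(F, v) / (nF * nv + 1e-30)  # flux-velocity alignment
    ])

# One-hidden-layer MLP producing one coefficient per basis tensor
# (weights are random here; a real closure would be trained on Boltzmann data).
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

def eddington_tensor(E, F, v):
    x = np.tanh(W1 @ invariants(E, F, v) + b1)
    c = W2 @ x + b2
    k = sum(ci * Ti for ci, Ti in zip(c, basis_tensors(F, v)))
    # Enforce the trace condition tr(k) = 1 satisfied by the Eddington tensor.
    k += np.eye(3) * (1.0 - np.trace(k)) / 3.0
    return k

k = eddington_tensor(E=1.0, F=np.array([0.3, 0.0, 0.1]), v=np.array([0.0, 0.01, 0.0]))
print(np.trace(k))          # 1 by construction
print(np.allclose(k, k.T))  # symmetric by construction
```

Because every basis tensor is symmetric and the trace is renormalized at the end, the predicted tensor satisfies the symmetry and trace constraints of the Eddington tensor by construction, which is the main design advantage of a tensor-basis formulation over predicting each component independently.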