We consider ill-posed inverse problems where the forward operator $T$ is unknown, and instead we have access to training data consisting of functions $f_i$ and their noisy images $Tf_i$. This is a practically relevant and challenging problem which current methods can solve only under strong assumptions on the training set. Here we propose a new method that requires minimal assumptions on the data, and prove reconstruction rates that depend on the number of training points and the noise level. We show that, in the regime of "many" training data, the method is minimax optimal. The proposed method employs a type of convolutional neural network (U-nets) and empirical risk minimization in order to "fit" the unknown operator. In a nutshell, our approach is based on two ideas: the first is to relate U-nets to multiscale decompositions such as wavelets, thereby linking them to the existing theory, and the second is to use the hierarchical structure of U-nets and the low number of parameters of convolutional neural nets to prove entropy bounds that are practically useful. A significant difference from existing work on neural networks in nonparametric statistics is that we use them to approximate operators and not functions, which we argue is mathematically more natural and technically more convenient.
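To make the stated link between U-nets and multiscale decompositions concrete, the following is a minimal sketch of a one-dimensional Haar wavelet analysis/synthesis pair in NumPy. The encoder pass repeatedly splits the signal into coarse averages and detail coefficients, mirroring a U-net's downsampling path, with the stored details playing the role of skip connections; the decoder pass upsamples and merges the details back in, mirroring the upsampling path. The function names are illustrative and do not come from the paper.

```python
import numpy as np

def haar_decompose(f, levels):
    """Encoder-like pass: split f into a coarse approximation and
    per-level detail coefficients (the 'skip connections')."""
    details = []
    coarse = np.asarray(f, dtype=float)
    for _ in range(levels):
        avg = (coarse[0::2] + coarse[1::2]) / np.sqrt(2)  # coarse averages
        det = (coarse[0::2] - coarse[1::2]) / np.sqrt(2)  # detail coefficients
        details.append(det)
        coarse = avg
    return coarse, details

def haar_reconstruct(coarse, details):
    """Decoder-like pass: upsample and merge details back in,
    recovering the signal exactly (orthonormal Haar transform)."""
    out = coarse
    for det in reversed(details):
        up = np.empty(2 * out.size)
        up[0::2] = (out + det) / np.sqrt(2)
        up[1::2] = (out - det) / np.sqrt(2)
        out = up
    return out

# Perfect reconstruction on a signal of length 2^3:
f = np.arange(8.0)
coarse, details = haar_decompose(f, levels=3)
assert np.allclose(haar_reconstruct(coarse, details), f)
```

A learned U-net replaces the fixed Haar filters with trained convolutional filters and adds nonlinearities, but the hierarchical coarse/detail structure above is the analogy the paper exploits.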