We propose a novel framework for the regularised inversion of deep neural networks. The framework is based on the authors' recent work on training feed-forward neural networks without differentiating the activation functions. It lifts the parameter space into a higher-dimensional space by introducing auxiliary variables, and penalises these variables with tailored Bregman distances. We propose a family of variational regularisations based on these Bregman distances, present theoretical results, and support their practical application with numerical examples. In particular, we present what is, to the best of our knowledge, the first convergence result for the regularised inversion of a single-layer perceptron that assumes only that the solution of the inverse problem lies in the range of the regularisation operator, and which shows that the regularised inverse provably converges to the true inverse as the measurement error converges to zero.
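To make the lifted penalty concrete, the following is a minimal sketch rather than an equation from the abstract itself, built on the standard assumption in the authors' earlier work on lifted training that the activation function $\sigma$ is the proximal map of a convex function $\psi$, i.e. $\sigma = \operatorname{prox}_{\psi}$; the symbols $x$, $u$, $W$, $\psi$ and $\Psi$ are illustrative. Setting $\Psi := \psi + \tfrac{1}{2}\|\cdot\|^{2}$, the layer constraint $x = \sigma(Wu)$ between an auxiliary variable $x$ and the pre-activation $Wu$ can be relaxed with the (generalised) Bregman penalty
\[
    B_{\Psi}(x, Wu) \,:=\, \Psi(x) + \Psi^{*}(Wu) - \langle x, Wu \rangle \,\geq\, 0 ,
\]
which vanishes precisely when $x = \operatorname{prox}_{\psi}(Wu) = \sigma(Wu)$. Since $\Psi$ is $1$-strongly convex, $\Psi^{*}$ is differentiable with $\nabla \Psi^{*} = \operatorname{prox}_{\psi} = \sigma$, so the gradient of the penalty with respect to the pre-activation is simply $\sigma(Wu) - x$; minimising it never requires differentiating $\sigma$ itself, which is what makes non-smooth activations admissible.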