Neural networks have become ubiquitous tools for solving signal and image processing problems, and they often outperform standard approaches. Nevertheless, training neural networks is a challenging task in many applications. The prevalent training procedure consists of minimizing highly non-convex objectives based on high-dimensional data sets. In this context, current methodologies are not guaranteed to produce global solutions. We present an alternative approach which forgoes the optimization framework and adopts a variational inequality formalism. The associated algorithm guarantees convergence of the iterates to a true solution of the variational inequality, and it possesses an efficient block-iterative structure. A numerical application is presented.
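To make the variational inequality formalism concrete, here is a minimal sketch of the classical projected-gradient iteration for VI(F, C): find x* in C such that ⟨F(x*), y − x*⟩ ≥ 0 for all y in C. This is not the block-iterative algorithm of the abstract, only a standard illustration of the problem class; the operator F, the box constraint C, and all names below are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def solve_vi(F, x0, gamma, iters=5000):
    """Projected-gradient iteration x_{k+1} = P_C(x_k - gamma*F(x_k))."""
    x = x0
    for _ in range(iters):
        x = project_box(x - gamma * F(x))
    return x

# Illustrative strongly monotone affine operator F(x) = A x + b,
# with A symmetric positive definite (hypothetical test data).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M / 5.0 + np.eye(5)
b = rng.standard_normal(5)
F = lambda x: A @ x + b

# Step size gamma = m / L^2 yields a contraction for this F, where
# m and L are the extreme eigenvalues of A (strong monotonicity and
# Lipschitz constants of the affine operator).
eigs = np.linalg.eigvalsh(A)
gamma = eigs[0] / eigs[-1] ** 2

x_star = solve_vi(F, np.zeros(5), gamma)
# A solution is exactly a fixed point of the projected step:
residual = np.linalg.norm(x_star - project_box(x_star - gamma * F(x_star)))
```

The fixed-point residual vanishing certifies that `x_star` solves the variational inequality; unlike a non-convex loss minimization, this monotone problem admits such a global certificate.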