Many mathematical convergence results for gradient descent (GD) based algorithms rely on the assumption that the GD process is (almost surely) bounded and, also in concrete numerical simulations, divergence of the GD process may slow down, or even completely rule out, convergence of the error function. In practically relevant learning problems, it thus seems advisable to design ANN architectures so that GD optimization processes remain bounded. The boundedness of GD processes for a given learning problem appears, however, to be closely related to the existence of minimizers in the optimization landscape and, in particular, GD trajectories may escape to infinity if the infimum of the error function (objective function) is not attained in the optimization landscape. This naturally raises the question of the existence of minimizers in the optimization landscape and, in the situation of shallow residual ANNs with multi-dimensional input layers and multi-dimensional hidden layers with the ReLU activation, the main result of this work answers this question affirmatively for a general class of loss functions and all continuous target functions. In the proof of this statement, we introduce a kind of closure of the search space, whose limit objects we call generalized responses, and we then provide sufficient criteria on the loss function and the underlying probability distribution which ensure that all additional artificial generalized responses are suboptimal, which finally allows us to conclude the existence of minimizers in the optimization landscape.
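As a concrete reading of the setup (a sketch only: the notation below, including the exact form of the residual skip connection, is ours for illustration and is not fixed by the abstract), the realization of a shallow residual ReLU ANN with d-dimensional input and an m-dimensional hidden layer, together with the associated minimization problem, can be written as follows:

% Hedged sketch of the optimization problem discussed above.
% The symbols \mathcal{N}_\theta, \ell, f, \mu and the parametrization
% theta are assumptions for illustration, not taken from the paper itself.
\[
  \mathcal{N}_\theta(x)
  \;=\;
  b + A x + \sum_{i=1}^{m} v_i \,\max\{\langle w_i, x\rangle + c_i,\, 0\},
  \qquad
  \theta = \bigl(A, b, (w_i)_{i \le m}, (c_i)_{i \le m}, (v_i)_{i \le m}\bigr),
\]
\[
  \mathcal{L}(\theta)
  \;=\;
  \int_{\mathbb{R}^d} \ell\bigl(\mathcal{N}_\theta(x), f(x)\bigr)\, \mu(\mathrm{d}x),
\]
where $f$ is the continuous target function, $\ell$ the loss, $\mu$ the underlying probability distribution, and the affine term $b + A x$ plays the role of the residual connection. The question addressed by the main result is whether $\inf_{\theta} \mathcal{L}(\theta)$ is attained; GD trajectories may escape to infinity precisely in situations where it is not.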