We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. At each iteration, the function and the gradient are approximated by sampling. The sample size used in gradient approximations is smaller than that used in function approximations, and the latter is determined by a deterministic rule inspired by the inexact restoration method, which allows the sample size to decrease at some iterations. The trust-region step is then accepted or rejected using a suitable merit function that combines the function estimate with a measure of the accuracy of the evaluation. We show that the proposed method eventually reaches full precision in evaluating the objective function, and we provide a worst-case complexity bound on the number of iterations required to achieve full precision. We validate the proposed algorithm on nonconvex binary classification problems; it shows good performance in terms of cost and accuracy and, importantly, does not require burdensome tuning of the parameters involved.
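The mechanism described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's algorithm: the merit function, the constants, and the sample-size rule are illustrative assumptions. In particular, the rule below only grows the function sample size monotonically toward full precision, whereas the rule described in the abstract also allows decreases.

```python
# Hypothetical sketch of a sampled first-order trust-region iteration with a
# merit-function acceptance test. All names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sampled_f(w, X, y, idx):
    # subsampled least-squares objective: (1/|idx|) * sum_{i in idx} (x_i^T w - y_i)^2
    r = X[idx] @ w - y[idx]
    return np.mean(r * r)

def sampled_grad(w, X, y, idx):
    r = X[idx] @ w - y[idx]
    return 2.0 * X[idx].T @ r / len(idx)

def tr_iteration(w, delta, Nf, X, y, eta=0.5, theta=1e-4):
    """One trust-region step; returns (new iterate, new radius, new function sample size)."""
    N = len(y)
    Ng = max(Nf // 4, 1)  # gradient sample smaller than function sample
    g = sampled_grad(w, X, y, rng.choice(N, size=Ng, replace=False))
    step = -delta * g / (np.linalg.norm(g) + 1e-12)  # first-order (Cauchy-like) step
    Nf_new = min(N, int(1.1 * Nf) + 1)  # deterministic growth toward full precision
    # merit = function estimate + eta * (fraction of the sum left out of the estimate)
    merit_old = sampled_f(w, X, y, rng.choice(N, size=Nf, replace=False)) \
        + eta * (1 - Nf / N)
    merit_new = sampled_f(w + step, X, y, rng.choice(N, size=Nf_new, replace=False)) \
        + eta * (1 - Nf_new / N)
    if merit_old - merit_new >= theta * delta * np.linalg.norm(g):
        return w + step, min(2.0 * delta, 1.0), Nf_new  # accept, enlarge radius
    return w, 0.5 * delta, Nf_new                       # reject, shrink radius

# tiny synthetic finite-sum problem
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true
w, delta, Nf = np.zeros(5), 0.5, 20
loss0 = sampled_f(w, X, y, np.arange(100))  # full-sum loss at the start
for _ in range(50):
    w, delta, Nf = tr_iteration(w, delta, Nf, X, y)
loss = sampled_f(w, X, y, np.arange(100))   # full-sum loss after 50 iterations
```

After enough iterations the sample size `Nf` saturates at the full sum, mirroring the abstract's claim that full precision is eventually reached; the accuracy term in the merit then vanishes and the test reduces to an exact decrease condition.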