We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. Using a suitable reformulation of the given problem, our method combines the inexact restoration approach for constrained optimization with a trust-region procedure and random models. Unlike other recent stochastic trust-region schemes, the proposed algorithm improves feasibility and optimality in a modular way. We provide a bound on the expected number of iterations required to reach a near-stationary point under probabilistic accuracy requirements on the random functions and gradients that are, in general, less stringent than the corresponding ones in the literature. We validate the proposed algorithm on nonconvex optimization problems arising in binary classification and regression, showing that it performs well in terms of cost and accuracy and reduces the burdensome tuning of the hyper-parameters involved.
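To fix ideas, the following is a minimal sketch of a generic stochastic first-order trust-region iteration with subsampled (inexact) function and gradient estimates for a finite-sum objective. It is not the method proposed here (in particular, it omits the inexact restoration step and the modular treatment of feasibility and optimality); the batch size, acceptance threshold, and radius-update constants are illustrative assumptions.

```python
# Sketch only: a generic stochastic first-order trust-region step for
# finite-sum minimization with subsampled function/gradient estimates.
import numpy as np

def subsampled_estimates(f_i, g_i, x, n, batch, rng):
    """Inexact function and gradient estimates from a random subsample."""
    idx = rng.choice(n, size=batch, replace=False)
    fx = np.mean([f_i(i, x) for i in idx])
    gx = np.mean([g_i(i, x) for i in idx], axis=0)
    return fx, gx

def stochastic_tr_step(f_i, g_i, x, n, delta, batch, rng, eta=0.1):
    """One trust-region iteration built on a first-order (linear) model."""
    fx, gx = subsampled_estimates(f_i, g_i, x, n, batch, rng)
    # Minimizer of the linear model on the trust-region ball (Cauchy step).
    step = -delta * gx / (np.linalg.norm(gx) + 1e-12)
    pred = -gx @ step                                   # predicted decrease
    f_trial, _ = subsampled_estimates(f_i, g_i, x + step, n, batch, rng)
    ared = fx - f_trial                                 # estimated actual decrease
    rho = ared / max(pred, 1e-12)
    if rho >= eta:                                      # successful step
        return x + step, min(2.0 * delta, 10.0)
    return x, 0.5 * delta                               # unsuccessful: shrink radius
```

Here `f_i(i, x)` and `g_i(i, x)` denote the i-th term of the finite sum and its gradient; both names are placeholders introduced only for this illustration.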