Time-varying stochastic optimization problems frequently arise in machine learning practice (e.g., gradual domain shift, object tracking, strategic classification). Although most such problems are solved in discrete time, the underlying process is often continuous in nature. We exploit this continuity by developing predictor-corrector algorithms for time-varying stochastic optimization. We provide error bounds for the iterates under both exact and noisy access to queries of the relevant derivatives of the loss function. Furthermore, we show, both theoretically and empirically on several examples, that our method outperforms non-predictor-corrector methods that do not exploit the underlying continuous process.
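To make the idea concrete, below is a minimal sketch of one predictor-corrector step for a time-varying loss $f(x, t)$. It is not the paper's exact method: the oracle names (`grad`, `hess`, `grad_xt`), the step size `h`, and the corrector settings are illustrative assumptions. The predictor follows the optimal trajectory $x^*(t)$, which by the implicit function theorem drifts as $\dot{x}^*(t) = -[\nabla_{xx} f]^{-1} \nabla_{tx} f$; the corrector then refines the extrapolated point with gradient steps on the new loss.

```python
import numpy as np

def predictor_corrector_step(x, t, h, grad, hess, grad_xt,
                             n_corrector=1, lr=0.1):
    """One hypothetical predictor-corrector step for a time-varying loss.

    grad, hess, grad_xt: oracles returning the gradient nabla_x f, the
    Hessian nabla_xx f, and the mixed derivative nabla_tx f at (x, t).
    All names and settings here are illustrative assumptions.
    """
    # Predictor: extrapolate along the drift of the optimal trajectory,
    # d/dt x*(t) = -[nabla_xx f]^{-1} nabla_tx f.
    x_pred = x - h * np.linalg.solve(hess(x, t), grad_xt(x, t))

    # Corrector: a few gradient descent steps on the new loss f(., t + h).
    x_new = x_pred
    for _ in range(n_corrector):
        x_new = x_new - lr * grad(x_new, t + h)
    return x_new

# Toy usage: quadratic loss f(x, t) = 0.5 * ||x - c(t)||^2 whose minimizer
# drifts as c(t) = (t, 2t); the predictor tracks c'(t) exactly here.
c = lambda t: np.array([t, 2.0 * t])
grad = lambda x, t: x - c(t)
hess = lambda x, t: np.eye(2)
grad_xt = lambda x, t: -np.array([1.0, 2.0])  # d/dt (x - c(t)) = -c'(t)

x = np.zeros(2)
for k in range(10):
    x = predictor_corrector_step(x, t=0.1 * k, h=0.1,
                                 grad=grad, hess=hess, grad_xt=grad_xt)
```

In this toy example the predictor alone moves the iterate by $h \, c'(t)$ per step, which is exactly the minimizer's drift; a plain gradient method without the predictor would lag behind the moving optimum.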