Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting. This problem corresponds to minimizing the sum of a (weakly) smooth function endowed with a stochastic first-order oracle and a structured, uniformly convex (possibly nonsmooth and non-Lipschitz) regularization term. Despite intensive work on closely related settings, prior to our work no complexity bounds for this problem were known. We close this gap by providing novel excess risk bounds, both in expectation and with high probability. Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems. We conclude by providing numerical results comparing our methods to the state of the art.
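For concreteness, a minimal formalization of the problem class described above (the notation is illustrative and not necessarily that of the paper; we assume uniform convexity is measured in a norm \(\|\cdot\|\) with degree \(q \ge 2\)) is
\[
  \min_{x \in \mathcal{X}} \; F(x) := f(x) + \psi(x),
\]
where \(f\) is (weakly) smooth and accessible only through a stochastic gradient oracle \(g(x,\xi)\) satisfying \(\mathbb{E}_{\xi}[g(x,\xi)] = \nabla f(x)\), and \(\psi\) is \((\mu, q)\)-uniformly convex, i.e.\ \(\psi(y) \ge \psi(x) + \langle \psi'(x), y - x \rangle + \tfrac{\mu}{q}\|y - x\|^{q}\) for all \(x, y\) and subgradients \(\psi'(x) \in \partial \psi(x)\). A canonical example of such a regularizer, possibly nonsmooth and non-Lipschitz, is \(\psi(x) = \tfrac{\mu}{q}\|x\|^{q}\).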