Path-following algorithms are frequently used in composite optimization problems, where a sequence of subproblems with varying regularization hyperparameters is solved one after another. By reusing each previous solution as the initialization for the next subproblem, faster convergence has been observed numerically, which makes path-following a useful heuristic for speeding up optimization algorithms in machine learning. We present a primal-dual analysis of the path-following algorithm and show how to design its hyperparameters, as well as how accurately each subproblem should be solved, to guarantee a linear convergence rate on a target problem. Furthermore, for optimization with a sparsity-inducing penalty, we analyze how the active set changes with the regularization parameter. The latter can then be calibrated adaptively to finely control the number of features selected along the solution path. This yields simple heuristics for calibrating the hyperparameters of active-set approaches, reducing their complexity and improving their execution time.
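To make the warm-starting idea concrete, here is a minimal sketch (not the paper's analyzed algorithm) of a path-following scheme for the Lasso, solved by proximal gradient descent over a geometric grid of regularization parameters; the function names, grid, and tolerances are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path(X, y, lambdas, tol=1e-6, max_iter=10_000):
    """Solve min_w 0.5*||Xw - y||^2 + lam*||w||_1 for each lam in `lambdas`
    (assumed decreasing), warm-starting each subproblem at the previous
    solution instead of restarting from zero."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    w = np.zeros(d)                # initialization for the first (largest) lam
    path = []
    for lam in lambdas:
        for _ in range(max_iter):
            grad = X.T @ (X @ w - y)
            w_new = soft_threshold(w - grad / L, lam / L)
            # Stop the subproblem once iterates stabilize (a crude inner
            # accuracy criterion; the paper's analysis prescribes this choice).
            if np.linalg.norm(w_new - w) <= tol * max(1.0, np.linalg.norm(w)):
                w = w_new
                break
            w = w_new
        path.append(w.copy())  # warm start carries over to the next lam
    return path

# Example usage: geometric grid from lambda_max (where the solution is 0) downward.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))
y = rng.standard_normal(20)
lam_max = np.max(np.abs(X.T @ y))
lambdas = lam_max * np.geomspace(1.0, 1e-2, num=5)
path = lasso_path(X, y, lambdas)
```

As the regularization parameter decreases along the grid, the active set (the support of `w`) typically grows, which is the quantity the adaptive calibration above aims to control.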