Subspace optimization methods have the attractive property of reducing large-scale optimization problems to a sequence of low-dimensional subspace optimization problems. However, existing subspace optimization frameworks adopt a fixed update policy for the subspace and are therefore prone to being sub-optimal. In this paper, we propose a new \emph{Meta Subspace Optimization} (MSO) framework for large-scale optimization problems, which determines the subspace matrix at each optimization iteration. In order to remain invariant to the dimension of the optimization problem, we design an efficient meta optimizer that operates on very low-dimensional subspace optimization coefficients, inducing a rule-based agent that significantly improves performance. Finally, we design and analyze a reinforcement learning procedure based on the subspace optimization dynamics, whose learned policies outperform existing subspace optimization methods.
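For concreteness, below is a minimal sketch of the classical fixed-policy subspace optimization loop that MSO generalizes. It assumes NumPy/SciPy; the function names and the particular rule for building the subspace matrix (current gradient plus a few previous update directions) are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def subspace_optimize(f, grad, x0, n_iters=50, history=2):
    """Fixed-policy subspace optimization loop (illustrative sketch).

    Each iteration reduces the n-dimensional problem min_x f(x) to a
    d-dimensional one (d << n): minimize f(x + P @ alpha) over the
    coefficients alpha, where the columns of P span the subspace.
    Here P is built by a fixed rule: the current gradient plus the
    last `history` update directions.
    """
    x = np.asarray(x0, dtype=float)
    directions = []  # previous steps x_{k+1} - x_k
    for _ in range(n_iters):
        g = grad(x)
        cols = [g] + directions[-history:]
        P = np.stack(cols, axis=1)  # subspace matrix, shape (n, d)
        # Low-dimensional inner problem over the subspace coefficients.
        res = minimize(lambda a: f(x + P @ a),
                       np.zeros(P.shape[1]), method="Nelder-Mead")
        step = P @ res.x
        directions.append(step)
        x = x + step
    return x

# Usage on a toy strongly convex quadratic f(x) = ||x||^2:
x_star = subspace_optimize(lambda x: x @ x, lambda x: 2.0 * x,
                           x0=np.ones(1000))
```

Note that the rule constructing P above is fixed for every iteration and every problem; this fixed update policy is exactly what the abstract identifies as sub-optimal and what MSO replaces, first with a rule-based agent over the low-dimensional coefficients and then with a policy learned by reinforcement learning.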