We adapt recent tools developed for the analysis of Stochastic Gradient Descent (SGD) in non-convex optimization to obtain convergence and sample complexity guarantees for the vanilla policy gradient (PG). Our only assumptions are that the expected return is smooth w.r.t. the policy parameters, that its $H$-step truncated gradient is close to the exact gradient, and a certain ABC assumption. This assumption requires the second moment of the estimated gradient to be bounded by $A \geq 0$ times the suboptimality gap, $B \geq 0$ times the norm of the full batch gradient, and an additive constant $C \geq 0$, or any combination of the above. We show that the ABC assumption is more general than the assumptions on the policy space commonly used to prove convergence to a stationary point. We provide a single convergence theorem that recovers the $\widetilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity of PG to a stationary point. Our results also afford greater flexibility in the choice of hyperparameters such as the step size and the batch size $m$, including the single trajectory case (i.e., $m=1$). When an additional relaxed weak gradient domination assumption is available, we establish a novel global optimum convergence theory of PG with $\widetilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity. We then instantiate our theorems in different settings, where we both recover existing results and obtain improved sample complexity, e.g., $\widetilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity for convergence to the global optimum for Fisher-non-degenerate parametrized policies.
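For concreteness, a minimal sketch of the ABC assumption as described above, written with $J(\theta)$ for the expected return, $J^*$ for its optimum, and $\widehat{\nabla} J(\theta)$ for the estimated gradient (this notation and the squared-norm form are assumptions of this sketch, following the standard statement of such expected-smoothness conditions):
\[
\mathbb{E}\Big[\big\|\widehat{\nabla} J(\theta)\big\|^{2}\Big]
\;\leq\;
2A\,\big(J^{*} - J(\theta)\big)
\;+\;
B\,\big\|\nabla J(\theta)\big\|^{2}
\;+\;
C,
\qquad A, B, C \geq 0 .
\]
Setting $A = B = 0$ recovers a bounded-variance-type condition, while $C = 0$ with $B > 0$ resembles a growth condition on the gradient estimator, which is how the assumption subsumes several common policy-space assumptions as special cases.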