We adapt recent tools developed for the analysis of Stochastic Gradient Descent (SGD) in non-convex optimization to obtain convergence and sample complexity guarantees for the vanilla policy gradient (PG). Our only assumptions are that the expected return is smooth w.r.t. the policy parameters, that its $H$-step truncated gradient is close to the exact gradient, and that a certain ABC assumption holds. This assumption requires the second moment of the estimated gradient to be bounded by $A \geq 0$ times the suboptimality gap, $B \geq 0$ times the norm of the full batch gradient, and an additive constant $C \geq 0$, or any combination of the aforementioned. We show that the ABC assumption is more general than the assumptions on the policy space commonly used to prove convergence to a stationary point. We provide a single convergence theorem that recovers the $\widetilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity of PG. Our results also afford greater flexibility in the choice of hyperparameters such as the step size and place no restriction on the batch size $m$, including the single-trajectory case (i.e., $m=1$). We then instantiate our theorem in different settings, where we both recover existing results and obtain improved sample complexities, e.g., for convergence to the global optimum for Fisher-non-degenerate parameterized policies.
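For concreteness, a hedged sketch of the ABC assumption as bounds of this kind are typically written; the notation $J(\theta)$, $J^*$, $\widehat{\nabla} J(\theta)$ and the squared-norm form are our assumptions, not taken verbatim from the theorem statement:
% Hedged sketch of the ABC assumption (notation assumed):
% J(\theta)                  : expected return under policy parameters \theta
% J^*                        : maximal expected return
% \widehat{\nabla} J(\theta) : stochastic (sampled-trajectory) gradient estimate
\[
  \mathbb{E}\big[\,\|\widehat{\nabla} J(\theta)\|^{2}\,\big]
  \;\le\;
  A\,\big(J^{*} - J(\theta)\big)
  \;+\; B\,\|\nabla J(\theta)\|^{2}
  \;+\; C,
  \qquad A, B, C \ge 0 .
\]
Setting $A = B = 0$ recovers the familiar bounded-variance condition, while $C = 0$ with $B > 0$ covers growth-type conditions, which is one way to read the claim that the ABC assumption subsumes commonly used assumptions.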