In this paper, we study the global convergence of model-based and model-free policy gradient descent and natural policy gradient descent algorithms for linear quadratic deep structured teams. In such systems, agents are partitioned into a few sub-populations, wherein the agents in each sub-population are coupled in the dynamics and cost function through a set of linear regressions of the states and actions of all agents. Every agent observes its local state and the linear regressions of states, called deep states. For a sufficiently small risk factor and/or a sufficiently large population, we prove that model-based policy gradient methods globally converge to the optimal solution. For an arbitrary number of agents, we develop model-free policy gradient and natural policy gradient algorithms for the special case of a risk-neutral cost function. The proposed algorithms are scalable with respect to the number of agents because the dimension of their policy space is independent of the number of agents in each sub-population. Simulations are provided to verify the theoretical results.
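To make the model-free setting concrete, the following is a minimal sketch of a zeroth-order (simulation-based) policy gradient iteration for a generic single-agent LQR problem, the basic mechanism underlying such algorithms: the cost of a linear feedback gain is estimated from rollouts, and a gradient is estimated from random two-point perturbations of the gain. The dynamics (A, B), cost weights (Q, R), horizon, smoothing radius, step size, and the gradient normalization below are all illustrative assumptions for this sketch, not the paper's deep structured team formulation.

```python
# Minimal sketch: zeroth-order policy gradient for a generic LQR instance.
# All numerical values here are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dynamics and quadratic cost (double-integrator-like system).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)


def rollout_cost(K, T=20, n_rollouts=10):
    """Average finite-horizon quadratic cost under the linear policy u = -K x."""
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.standard_normal((2, 1))  # random initial state
        c = 0.0
        for _ in range(T):
            u = -K @ x
            c += (x.T @ Q @ x + u.T @ R @ u).item()
            x = A @ x + B @ u
        total += c
    return total / n_rollouts


def zeroth_order_grad(K, r=0.05, n_dirs=20):
    """Noisy two-point gradient estimate of the rollout cost with respect to K."""
    d = K.size
    g = np.zeros_like(K)
    for _ in range(n_dirs):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)  # random unit direction in gain space
        g += (rollout_cost(K + r * U) - rollout_cost(K - r * U)) / (2 * r) * U
    return d * g / n_dirs


# Gradient descent on the gain; the normalization is a crude stabilizer
# for this sketch, not part of the algorithms described in the abstract.
K = np.zeros((1, 2))
eta = 0.05
for _ in range(200):
    g = zeroth_order_grad(K)
    g /= max(1.0, np.linalg.norm(g))
    K = K - eta * g

print("learned gain:", K)
print("average rollout cost:", rollout_cost(K))
```

Note that the parameter being optimized is the gain matrix K, whose dimension is fixed by the state and action dimensions; this is the sense in which the abstract's policy space does not grow with the number of agents per sub-population.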