Many engineering problems involve multiple objectives, and the overall aim is to optimize a non-linear function of these objectives. In this paper, we formulate the problem of maximizing a non-linear concave function of multiple long-term objectives. A policy-gradient based model-free algorithm is proposed for this problem. To estimate the gradient, a biased estimator is proposed. The proposed algorithm is shown to converge to within $\epsilon$ of the global optimum after sampling $\mathcal{O}(\frac{M^4\sigma^2}{(1-\gamma)^8\epsilon^4})$ trajectories, where $\gamma$ is the discount factor and $M$ is the number of agents, thus achieving the same dependence on $\epsilon$ as the policy gradient algorithm for standard reinforcement learning.
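The key step described above, estimating the gradient of a concave function $f$ of several long-term objectives via the chain rule $\nabla_\theta f(J(\theta)) = \sum_m \partial f/\partial J_m \,\nabla_\theta J_m(\theta)$, can be illustrated with a minimal sketch. This is an assumption-laden toy example (a softmax policy on a hypothetical two-objective bandit with $f(J)=\sum_m \log J_m$), not the paper's actual algorithm; plugging the Monte-Carlo estimate $\hat J$ into $\nabla f$ is what makes the estimator biased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-objective bandit: 3 actions, M = 2 objectives.
# Row a gives the reward of action a under each objective.
R = np.array([[1.0, 0.2],
              [0.4, 0.5],
              [0.1, 1.0]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def f(J):
    # Concave scalarization of the M objectives: f(J) = sum_m log J_m.
    return np.log(J).sum()

theta = np.zeros(3)          # policy parameters (softmax logits)
for step in range(500):
    pi = softmax(theta)
    N = 64                   # trajectories (here: single-step actions) per update
    a = rng.choice(3, size=N, p=pi)
    J_hat = R[a].mean(axis=0)                  # Monte-Carlo estimate of J_m
    # Score function for a softmax policy: d log pi(a)/d theta = e_a - pi.
    score = np.eye(3)[a] - pi                  # shape (N, 3)
    # REINFORCE estimate of grad J_m: E[r_m(a) * score], shape (M, 3).
    gradJ = (R[a][:, :, None] * score[:, None, :]).mean(axis=0)
    # Chain rule with df/dJ_m = 1/J_m, evaluated at the *estimate* J_hat,
    # which is the source of the bias mentioned in the abstract.
    grad = (gradJ / J_hat[:, None]).sum(axis=0)
    theta += 0.5 * grad      # gradient ascent on f(J(theta))

pi = softmax(theta)
J = pi @ R                   # exact long-term objectives of the learned policy
```

On this toy instance the learned policy mixes the two extreme actions rather than committing to either, since the log-sum objective rewards balancing the objectives; a linear scalarization would instead pick a single action.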