A key barrier to using reinforcement learning (RL) in many real-world applications is the large number of system interactions required to learn a good control policy. Off-policy and offline RL methods reduce the number of interactions with the physical environment by learning control policies from historical data. However, their performance suffers from the lack of exploration and from the distributional shift in trajectories once the controller is updated. Moreover, most RL methods require all states to be directly observed, which is difficult to attain in many settings. To overcome these challenges, we propose a trajectory generation algorithm that adaptively generates new trajectories as if the system were being operated and explored under the updated control policies. Motivated by the fundamental lemma for linear systems, and assuming sufficiently exciting inputs, we generate trajectories from linear combinations of historical trajectories. For linear feedback control, we prove that the algorithm generates trajectories with the exact distribution they would have if sampled from the real system under the updated control policy. In particular, the algorithm extends to systems whose states are not directly observed. Experiments show that the proposed method significantly reduces the amount of sampled data needed by RL algorithms.
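The core idea of generating new trajectories from linear combinations of historical ones can be sketched with a toy data-driven simulation in the spirit of Willems' fundamental lemma. This is a generic illustration under assumed conditions, not the paper's algorithm: the system matrices, dimensions, and horizon lengths below are invented, the input is assumed persistently exciting, and the ground-truth simulation is used only to verify the generated outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small stable LTI system (hypothetical toy example):
# x_{k+1} = A x_k + B u_k,   y_k = C x_k
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = 2  # state dimension

def simulate(x0, u_seq):
    """Roll the true system forward and record the outputs."""
    x, ys = x0, []
    for u in u_seq:
        ys.append((C @ x).item())
        x = A @ x + B * u
    return np.array(ys)

# Offline: one long historical trajectory under a random (exciting) input.
T = 200
u_d = rng.standard_normal(T)
y_d = simulate(np.zeros((2, 1)), u_d)

def hankel(w, L):
    """Depth-L Hankel matrix built from a scalar signal w."""
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

# Generate a new length-L trajectory: the first T_ini output samples pin
# down the (unobserved) initial state; a fresh input sequence is imposed.
T_ini, T_f = 4, 20
L = T_ini + T_f
Hu, Hy = hankel(u_d, L), hankel(y_d, L)

x0_new = rng.standard_normal((2, 1))
u_new = rng.standard_normal(L)        # e.g. inputs from an updated policy
y_true = simulate(x0_new, u_new)      # ground truth, used only for checking

# Find a linear combination g of historical trajectory windows that matches
# the imposed inputs and the initial output samples, then read off the rest.
lhs = np.vstack([Hu, Hy[:T_ini]])
rhs = np.concatenate([u_new, y_true[:T_ini]])
g = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
y_gen = Hy[T_ini:] @ g

# The generated outputs coincide with the real system's response.
assert np.allclose(y_gen, y_true[T_ini:], atol=1e-5)
```

Because the observed system is observable and `T_ini` is at least the state dimension, any coefficient vector `g` satisfying the matching constraints reproduces the true future outputs exactly; no rollout of the real system under the new input is needed beyond the verification step.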