Sim-to-real transfer trains RL agents in simulated environments and then deploys them in the real world. It is widely used in practice because collecting samples in simulation is often cheaper, safer, and much faster than collecting them in the real world. Despite its empirical success, the theoretical foundation of sim-to-real transfer is much less understood. In this paper, we study sim-to-real transfer in continuous domains with partial observations, where the simulated and real-world environments are modeled as linear quadratic Gaussian (LQG) systems. We show that a popular robust adversarial training algorithm is capable of learning a policy from the simulated environment that is competitive with the optimal policy in the real-world environment. To achieve our results, we design a new algorithm for infinite-horizon average-cost LQGs and establish a regret bound that depends on the intrinsic complexity of the model class. Our algorithm crucially relies on a novel history clipping scheme, which might be of independent interest.
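To make the setting concrete, the following is a minimal toy sketch (not the paper's algorithm) of the two ingredients named above: an LQG system with dynamics x_{t+1} = A x_t + B u_t + w_t and partial observations y_t = C x_t + v_t, controlled by a linear policy that acts only on a clipped window of the last H observations. The matrices A, B, C, the gains K_0, ..., K_{H-1}, and the noise scales are all illustrative placeholders; in practice the gains would be learned.

```python
import numpy as np

# Toy LQG rollout with a history-clipped linear controller.
# All system matrices and gains below are illustrative placeholders.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition
B = np.array([[0.0], [1.0]])             # control input map
C = np.array([[1.0, 0.0]])               # partial observation map
Q, R = np.eye(2), np.eye(1)              # quadratic state / control costs
H = 5                                    # history-clipping length
K = [0.1 * rng.standard_normal((1, 1)) for _ in range(H)]  # placeholder gains

x = np.zeros(2)
history = []          # most recent observation first
total_cost = 0.0
T = 200
for t in range(T):
    y = C @ x + 0.1 * rng.standard_normal(1)   # noisy partial observation
    history.insert(0, y)
    history = history[:H]                       # clip history to length H
    # Linear controller on the clipped observation window.
    u = sum(K[i] @ history[i] for i in range(len(history)))
    total_cost += float(x @ Q @ x + u @ R @ u)  # accumulate quadratic cost
    x = A @ x + B @ u + 0.05 * rng.standard_normal(2)  # process noise

print("average cost:", total_cost / T)
```

Clipping the history to a fixed length H keeps the controller's parameterization finite-dimensional even though the optimal LQG policy depends on the full observation history; the abstract's regret bound concerns how such truncation trades off against the intrinsic complexity of the model class.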