StarCraft II (SC2) poses a grand challenge for reinforcement learning (RL), whose main difficulties include a huge state space, a varying action space, and a long time horizon. In this work, we investigate a set of RL techniques for the full-length game of StarCraft II. We adopt a hierarchical RL approach involving extracted macro-actions and a hierarchical architecture of neural networks, together with a curriculum transfer training procedure, and train the agent on a single machine with 4 GPUs and 48 CPU threads. On a 64x64 map and using restrictive units, we achieve a win rate of 99% against the level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat models, we achieve a 93% win rate against the most difficult non-cheating built-in AI (level-7). In this extended version of the paper, we improve our architecture to train the agent against the cheating-level AIs and achieve win rates of 96%, 97%, and 94% against the level-8, level-9, and level-10 AIs, respectively. Our code is available at https://github.com/liuruoze/HierNet-SC2. To provide a baseline referring to AlphaStar for our work as well as for the research and open-source community, we reproduce a scaled-down version of it, mini-AlphaStar (mAS). The latest version of mAS is 1.07; it can be trained on the raw action space, which has 564 actions, and is designed to run training on a single common machine by making the hyper-parameters adjustable. We then compare our work with mAS using the same resources and show that our method is more effective. The code of mini-AlphaStar is available at https://github.com/liuruoze/mini-AlphaStar. We hope our study sheds light on future research into efficient reinforcement learning on SC2 and other large-scale games.
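To make the hierarchical decomposition mentioned above concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a two-level policy: a high-level controller picks one of K extracted macro-actions, and a per-macro sub-policy then selects a concrete in-game action. All class names, network sizes, and dimensions here are illustrative assumptions.

```python
# Minimal sketch of a two-level hierarchical policy with macro-actions.
# All names and dimensions are illustrative, not from the paper's code.
import torch
import torch.nn as nn

class Controller(nn.Module):
    """High-level network: maps the game state to a macro-action."""
    def __init__(self, state_dim: int, num_macros: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_macros),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(state))

class SubPolicy(nn.Module):
    """Low-level network: executes one macro-action as micro-actions."""
    def __init__(self, state_dim: int, num_micro: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_micro),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(state))

# One decision step: the controller chooses a macro-action, then the
# corresponding sub-policy chooses the concrete low-level action.
state_dim, num_macros, num_micro = 64, 8, 16
controller = Controller(state_dim, num_macros)
sub_policies = nn.ModuleList(
    [SubPolicy(state_dim, num_micro) for _ in range(num_macros)]
)

state = torch.randn(1, state_dim)             # placeholder observation
macro = controller(state).sample()            # e.g. "build worker", "attack"
micro = sub_policies[macro.item()](state).sample()
print(f"macro-action {macro.item()}, micro-action {micro.item()}")
```

The point of this factoring is that each network faces a much smaller action space than the raw game interface, which is one way a hierarchical approach can tame the difficulties the abstract lists.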