Agents that make decisions using reinforcement learning (RL) can base those decisions on a reward function. However, the values chosen for the learning algorithm's parameters can have a substantial impact on the overall learning process. To discover near-optimal values for the learning parameters, in this study we extend our previously proposed genetic algorithm-based Deep Deterministic Policy Gradient and Hindsight Experience Replay approach (referred to as GA+DDPG+HER). We apply the GA+DDPG+HER methodology to the robotic manipulation tasks FetchReach, FetchSlide, FetchPush, FetchPick&Place, and DoorOpening. With a few adjustments, GA+DDPG+HER is also applied in the AuboReach environment. Our experimental analysis demonstrates that our method achieves noticeably better performance and learns faster than the original algorithm. We also provide evidence that GA+DDPG+HER outperforms existing approaches. The final results support our claim that automating the parameter-tuning procedure is crucial, reducing learning time by as much as 57%.
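To illustrate the general idea, the following is a minimal sketch of a genetic algorithm searching over DDPG+HER learning parameters. The parameter bounds, the fitness function `evaluate_ddpg_her` (here a stand-alone dummy score in place of actually training DDPG+HER and measuring success rate), and the selection, crossover, and mutation operators are all illustrative assumptions, not the paper's exact GA+DDPG+HER design.

```python
# Minimal sketch of GA-based tuning of DDPG+HER learning parameters.
# evaluate_ddpg_her() is a hypothetical stand-in for training DDPG+HER
# on a manipulation task and returning mean success rate; the actual
# GA+DDPG+HER encoding, fitness, and operators may differ.
import random

# Parameters searched, with plausible bounds (assumed, not from the paper).
BOUNDS = {
    "actor_lr":  (1e-4, 1e-2),
    "critic_lr": (1e-4, 1e-2),
    "tau":       (1e-3, 5e-2),   # target-network soft-update rate
    "gamma":     (0.90, 0.999),  # discount factor
}

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def evaluate_ddpg_her(params):
    """Hypothetical fitness: would train DDPG+HER briefly with `params`
    and return mean success rate. Replaced by a dummy score so the
    sketch runs stand-alone."""
    return -sum((v - (lo + hi) / 2) ** 2
                for v, (lo, hi) in zip(params.values(), BOUNDS.values()))

def crossover(a, b):
    # Uniform crossover: each gene is taken from one of the two parents.
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(ind, rate=0.2):
    # Gaussian perturbation of each gene with probability `rate`, clipped to bounds.
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            ind[k] = min(hi, max(lo, ind[k] + random.gauss(0, (hi - lo) * 0.1)))
    return ind

def run_ga(pop_size=10, generations=5):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_ddpg_her, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate_ddpg_her)

if __name__ == "__main__":
    print(run_ga())
```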