Reinforcement learning is increasingly finding success across domains where the problem can be represented as a Markov decision process. Evolutionary computation algorithms have also proven successful in this domain, exhibiting performance comparable to that of the generally more complex reinforcement learning methods. While many open-source reinforcement learning and evolutionary computation libraries exist, no publicly available library combines the two approaches for enhanced comparison, cooperation, or visualization. To this end, we have created Pearl (https://github.com/LondonNode/Pearl), an open-source Python library designed to allow researchers to rapidly and conveniently perform optimized reinforcement learning, evolutionary computation, and combinations of the two. The key features of Pearl include: modular and expandable components, opinionated module settings, Tensorboard integration, custom callbacks, and comprehensive visualizations.