In this paper, we build on advances introduced by the Deep Q-Networks (DQN) approach to extend the multi-objective tabular Reinforcement Learning (RL) algorithm W-learning to large state spaces. The W-learning algorithm naturally resolves the competition between multiple single-objective policies in multi-objective environments. However, the tabular version does not scale well to environments with large state spaces. To address this issue, we replace the underlying Q-tables with DQNs and propose the addition of W-Networks as a replacement for the tabular weight (W) representations. We evaluate the resulting Deep W-Networks (DWN) approach on two widely accepted multi-objective RL benchmarks: Deep Sea Treasure and multi-objective Mountain Car. We show that DWN resolves the competition between multiple policies while outperforming a DQN baseline. Additionally, we demonstrate that the proposed algorithm can find the Pareto front in both tested environments.
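To make the high-level idea concrete, the following is a minimal sketch of the action-selection step implied by the abstract: each objective keeps its own DQN (replacing a Q-table) and its own W-network (replacing the tabular W values), and the objective with the highest W-value for the current state wins the competition and executes its greedy action. The sketch assumes PyTorch; the class and function names (QNetwork, WNetwork, select_action) and layer sizes are illustrative and not taken from the paper's implementation.

```python
# Illustrative sketch of DWN action selection, assuming PyTorch.
# Names and architecture are hypothetical, not the authors' code.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Per-objective DQN: maps a state to Q-values for each action."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class WNetwork(nn.Module):
    """Per-objective W-network: maps a state to a scalar W-value,
    standing in for the tabular W(s) entries of classic W-learning."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)


def select_action(state, q_nets, w_nets):
    """W-learning competition: the objective with the highest W(s)
    wins and its greedy action is executed."""
    with torch.no_grad():
        w_values = torch.stack([w(state) for w in w_nets])
        winner = int(torch.argmax(w_values).item())
        action = int(q_nets[winner](state).argmax().item())
    return winner, action


if __name__ == "__main__":
    state_dim, n_actions, n_objectives = 4, 3, 2
    q_nets = [QNetwork(state_dim, n_actions) for _ in range(n_objectives)]
    w_nets = [WNetwork(state_dim) for _ in range(n_objectives)]
    s = torch.randn(state_dim)
    winner, action = select_action(s, q_nets, w_nets)
    print(f"objective {winner} wins and executes action {action}")
```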