Action-value estimation is a critical component of many reinforcement learning (RL) methods whereby sample complexity relies heavily on how fast a good estimator for action value can be learned. By viewing this problem through the lens of representation learning, good representations of both state and action can facilitate action-value estimation. While advances in deep learning have seamlessly driven progress in learning state representations, given the specificity of the notion of agency to RL, little attention has been paid to learning action representations. We conjecture that leveraging the combinatorial structure of multi-dimensional action spaces is a key ingredient for learning good representations of action. To test this, we set forth the action hypergraph networks framework -- a class of functions for learning action representations in multi-dimensional discrete action spaces with a structural inductive bias. Using this framework we realise an agent class based on a combination with deep Q-networks, which we dub hypergraph Q-networks. We show the effectiveness of our approach on a myriad of domains: illustrative prediction problems under minimal confounding effects, Atari 2600 games, and discretised physical control benchmarks.
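To make the idea concrete, below is a minimal sketch (not the authors' implementation) of the simplest hypergraph Q-network instance: a rank-1 hypergraph in which each action dimension forms its own singleton hyperedge, each hyperedge has a value head over a shared state encoding, and the per-hyperedge components are mixed by summation to score a joint action. All names (`HypergraphQNet`, `n_dims`, `n_bins`) and the specific layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HypergraphQNet(nn.Module):
    """Rank-1 hypergraph Q-network sketch for a multi-dimensional discrete action space."""

    def __init__(self, state_dim, n_dims, n_bins, hidden=128):
        super().__init__()
        self.n_dims, self.n_bins = n_dims, n_bins
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # One value head per singleton hyperedge (i.e. per action dimension).
        self.heads = nn.ModuleList(nn.Linear(hidden, n_bins) for _ in range(n_dims))

    def forward(self, state):
        z = self.encoder(state)
        # Per-dimension value components, shape: (batch, n_dims, n_bins).
        return torch.stack([head(z) for head in self.heads], dim=1)

    def q_value(self, state, action):
        # Mix hyperedge components by summation to score the joint action.
        comps = self.forward(state)                       # (batch, n_dims, n_bins)
        idx = action.unsqueeze(-1)                        # (batch, n_dims, 1)
        return comps.gather(-1, idx).squeeze(-1).sum(-1)  # (batch,)

# Usage: a 3-dimensional action space with 5 discrete bins per dimension.
net = HypergraphQNet(state_dim=8, n_dims=3, n_bins=5)
s = torch.randn(4, 8)
a = torch.randint(0, 5, (4, 3))
print(net.q_value(s, a).shape)  # torch.Size([4])
```

Higher-order variants would add heads for hyperedges spanning several action dimensions (e.g. pairs of dimensions) and mix them the same way; the rank-1 case above is shown only because it is the smallest self-contained example of the structural inductive bias described in the abstract.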