Actors and critics in actor-critic reinforcement learning algorithms are functionally separate, yet they often use the same network architecture. This case study explores how network size affects performance when actor and critic architectures are considered independently. By relaxing the assumption of architectural symmetry, smaller actors can often achieve policy performance comparable to that of their symmetric counterparts. Our experiments show up to a 99% reduction in the number of network weights, with an average reduction of 77%, across multiple actor-critic algorithms on nine independent tasks. Because reducing actor complexity directly reduces run-time inference cost, we believe actor and critic configurations are aspects of actor-critic design that deserve to be considered independently, particularly in resource-constrained applications or when deploying multiple actors simultaneously.
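To make the idea of architectural asymmetry concrete, here is a minimal sketch in PyTorch, not the paper's exact experimental setup. The observation/action dimensions and the hidden-layer widths (256, 256) for the symmetric baseline and (16, 16) for the shrunken actor are illustrative assumptions; the point is only that the actor's parameter count, and hence its inference cost, can be cut independently of the critic's.

```python
# Sketch: decoupling actor and critic network sizes in an actor-critic agent.
# All dimensions and widths below are illustrative, not the paper's values.
import torch.nn as nn


def mlp(in_dim, hidden_sizes, out_dim):
    """Build a simple fully connected network with ReLU activations."""
    layers, prev = [], in_dim
    for h in hidden_sizes:
        layers += [nn.Linear(prev, h), nn.ReLU()]
        prev = h
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)


obs_dim, act_dim = 17, 6  # e.g. a MuJoCo-style continuous-control task

# Symmetric baseline: actor and critic use the same hidden architecture.
actor_sym = mlp(obs_dim, (256, 256), act_dim)
critic = mlp(obs_dim + act_dim, (256, 256), 1)  # Q(s, a) critic

# Asymmetric variant: a much smaller actor paired with the same critic.
actor_small = mlp(obs_dim, (16, 16), act_dim)

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"symmetric actor params: {count(actor_sym):,}")
print(f"small actor params:     {count(actor_small):,}")
print(f"actor weight reduction: {1 - count(actor_small) / count(actor_sym):.1%}")
```

Since only the actor runs at deployment time, the critic can stay large during training while the deployed policy network shrinks.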