A key challenge in satisficing planning is to use multiple heuristics within one heuristic search. Aggregating multiple heuristic estimates, for example by taking the maximum, has the disadvantage that bad estimates of a single heuristic can negatively affect the whole search. Since the performance of a heuristic varies from instance to instance, approaches such as algorithm selection can be applied successfully. In addition, alternating between multiple heuristics during the search makes it possible to use all heuristics equally and to improve performance. However, all these approaches ignore the internal search dynamics of a planning system, which can help to select the most useful heuristic for the current expansion step. We show that dynamic algorithm configuration can be used for dynamic heuristic selection that takes the internal search dynamics of a planning system into account. Furthermore, we prove that this approach generalizes existing approaches and that it can exponentially improve the performance of the heuristic search. To learn dynamic heuristic selection, we propose an approach based on reinforcement learning and show empirically that domain-wise learned policies, which take the internal search dynamics of a planning system into account, can outperform existing approaches.
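To make the idea concrete, the following is a minimal, hypothetical Python sketch of dynamic heuristic selection in a multi-queue greedy best-first search. It is not the paper's implementation: the function names, the feature set (per-queue minimum h-value and expansion count), and the `policy` interface are illustrative assumptions.

```python
import heapq

def search(initial_state, goal_test, successors, heuristics, policy):
    """Greedy best-first search with dynamic heuristic selection (sketch).

    heuristics: list of functions state -> non-negative estimate
    policy:     function features -> index into `heuristics`, i.e. the
                learned selector that reacts to the search dynamics
    """
    open_lists = [[] for _ in heuristics]   # one priority queue per heuristic
    counter = 0                             # tie-breaker so states are never compared
    for i, h in enumerate(heuristics):
        heapq.heappush(open_lists[i], (h(initial_state), counter, initial_state))
    closed = set()
    expansions = 0

    while any(open_lists):
        # Illustrative internal-search-dynamics features: the current minimum
        # h-value of each open list and the number of expansions so far.
        features = ([ol[0][0] if ol else float("inf") for ol in open_lists],
                    expansions)
        i = policy(features)                # dynamic heuristic selection step
        if not open_lists[i]:               # fall back if the chosen list is empty
            i = next(j for j, ol in enumerate(open_lists) if ol)

        _, _, state = heapq.heappop(open_lists[i])
        if state in closed:
            continue
        closed.add(state)
        expansions += 1
        if goal_test(state):
            return state

        for succ in successors(state):
            if succ in closed:
                continue
            counter += 1
            for j, h in enumerate(heuristics):  # insert into every open list
                heapq.heappush(open_lists[j], (h(succ), counter, succ))
    return None
```

Under these assumptions, a round-robin policy such as `lambda features: features[1] % len(heuristics)` recovers heuristic alternation, and a constant policy recovers single-heuristic search, which illustrates the sense in which dynamic heuristic selection generalizes the existing approaches mentioned above.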