Generalized planning is concerned with the computation of general policies that solve multiple instances of a planning domain all at once. It has recently been shown that these policies can be computed in two steps: first, a suitable abstraction in the form of a qualitative numerical planning problem (QNP) is learned from sample plans, then the general policies are obtained from the learned QNP using a planner. In this work, we introduce an alternative approach for computing more expressive general policies that does not require sample plans or a QNP planner. The new formulation is very simple and can be cast in terms that are more standard in machine learning: a large but finite pool of features is defined from the predicates in the planning examples using a general grammar, and a small subset of features is sought for separating "good" from "bad" state transitions, and goals from non-goals. The problems of finding such a "separating surface" while labeling the transitions as "good" or "bad" are jointly addressed as a single combinatorial optimization problem expressed as a Weighted Max-SAT problem. The advantage of looking for the simplest policy in the given feature space that solves the given examples, possibly non-optimally, is that many domains have no general, compact policies that are optimal. The approach yields general policies for a number of benchmark domains.
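To make the formulation more concrete, below is a minimal sketch of the kind of Weighted Max-SAT encoding alluded to above, written in Python with the PySAT library. It is an assumption-laden illustration rather than the paper's encoding: the "good"/"bad" transition labels are taken as given rather than chosen jointly, the feature pool is a small hand-written one instead of a grammar-generated one, and the transition-distinguishability test is deliberately simplified.

# A minimal, illustrative sketch of the kind of Weighted Max-SAT encoding the
# abstract describes, written with the PySAT library. It is NOT the paper's
# exact encoding: the "good"/"bad" transition labels are taken as given here,
# whereas the approach described above chooses them jointly with the feature
# subset, and the feature pool below is hypothetical.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Hypothetical feature pool: each feature maps a state (a frozenset of atoms)
# to a boolean or numeric value. Real pools are generated from the domain
# predicates with a general grammar.
FEATURES = {
    "holding":  lambda s: "holding" in s,
    "n_clear":  lambda s: sum(1 for atom in s if atom.startswith("clear")),
    "ontable":  lambda s: "ontable" in s,
}
FEAT_VAR = {name: i + 1 for i, name in enumerate(FEATURES)}  # one SAT variable per feature

def separates(f, s1, s2):
    # A feature separates two states if it takes different values on them.
    return FEATURES[f](s1) != FEATURES[f](s2)

def changes(f, s, t):
    # Whether feature f changes value across the transition (s, t).
    return FEATURES[f](s) != FEATURES[f](t)

def encode(goal_states, nongoal_states, good_trans, bad_trans):
    wcnf = WCNF()
    # Soft clauses: prefer not to select a feature (weight 1 each), so an
    # optimal model selects as few features as possible.
    for var in FEAT_VAR.values():
        wcnf.append([-var], weight=1)
    # Hard clauses: every goal state must be separated from every non-goal
    # state by at least one selected feature.
    for sg in goal_states:
        for sn in nongoal_states:
            clause = [FEAT_VAR[f] for f in FEATURES if separates(f, sg, sn)]
            if not clause:
                return None  # no feature in the pool can separate these states
            wcnf.append(clause)
    # Hard clauses: every "good" transition must be told apart from every "bad"
    # one (simplified here: a feature distinguishes two transitions if it
    # changes across one but not the other).
    for (s, t) in good_trans:
        for (u, v) in bad_trans:
            clause = [FEAT_VAR[f] for f in FEATURES
                      if changes(f, s, t) != changes(f, u, v)]
            if not clause:
                return None
            wcnf.append(clause)
    return wcnf

def smallest_separating_features(wcnf):
    if wcnf is None:
        return None
    with RC2(wcnf) as solver:
        model = solver.compute()  # optimal model, or None if hard clauses are UNSAT
    if model is None:
        return None
    return [name for name, var in FEAT_VAR.items() if var in model]

A Max-SAT solver such as RC2 then returns a model of minimum cost, i.e., a smallest feature subset consistent with the hard separation constraints. The actual encoding in the paper additionally introduces variables for the transition labels, so that labeling and feature selection are optimized jointly in one Weighted Max-SAT problem.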