It is well known that different algorithms perform differently on instances of an algorithmic problem, which motivates algorithm selection (AS): given an instance of an algorithmic problem, which algorithm is most suitable to solve it? The AS problem has received considerable attention, resulting in various approaches, many of which solve either a regression or a ranking problem under the hood. Although both formulations yield natural ways to tackle AS, they have considerable weaknesses. On the one hand, correctly predicting the performance of an algorithm on an instance is a sufficient, but not a necessary, condition for producing a correct ranking over algorithms and, in particular, for ranking the best algorithm first. On the other hand, classical ranking approaches often do not account for the concrete performance values available in the training data, but only leverage rankings composed from such data. We propose HARRIS (Hybrid rAnking and RegRessIon foreSts), a new algorithm selector leveraging special forests that combine the strengths of both approaches while alleviating their weaknesses. HARRIS' decisions are based on a forest model whose trees are built from splits optimized with respect to a hybrid ranking and regression loss function. Our preliminary experimental study on ASLib shows that HARRIS improves over standard algorithm selection approaches on several scenarios, indicating that combining ranking and regression in trees is indeed promising for AS.
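To make the idea of a hybrid split criterion concrete, the following is a minimal sketch, not the authors' implementation: it scores a node's performance matrix (instances × algorithms) by a convex combination of a regression term (MSE around the node's mean performance prediction) and a ranking term (fraction of discordant algorithm pairs relative to the ranking induced by the mean prediction). The trade-off weight `lam` is a hypothetical hyperparameter introduced here for illustration.

```python
# Sketch of a hybrid ranking/regression split loss, assuming lower values
# (e.g., runtimes) indicate better algorithm performance.
import numpy as np


def regression_loss(perf: np.ndarray) -> float:
    """MSE of the node's performance values around the node-wise mean prediction."""
    return float(np.mean((perf - perf.mean(axis=0)) ** 2))


def ranking_loss(perf: np.ndarray) -> float:
    """Fraction of discordant algorithm pairs between each instance's true
    ordering and the ordering induced by the node's mean performance."""
    mean_perf = perf.mean(axis=0)          # predicted performance per algorithm
    n_alg = perf.shape[1]
    discordant, total = 0, 0
    for row in perf:                       # one instance per row
        for i in range(n_alg):
            for j in range(i + 1, n_alg):
                total += 1
                if (row[i] - row[j]) * (mean_perf[i] - mean_perf[j]) < 0:
                    discordant += 1
    return discordant / total if total else 0.0


def hybrid_loss(perf: np.ndarray, lam: float = 0.5) -> float:
    """Convex combination of ranking and regression loss used to score a candidate split."""
    return lam * ranking_loss(perf) + (1 - lam) * regression_loss(perf)


# Toy example: 4 instances x 3 algorithms (hypothetical runtimes).
perf = np.array([[1.0, 2.0, 3.0],
                 [1.5, 2.5, 2.0],
                 [0.5, 3.0, 2.5],
                 [1.2, 1.8, 3.5]])
print(hybrid_loss(perf, lam=0.5))
```

In a tree learner, such a loss would be evaluated on the two child nodes of each candidate split, and the split minimizing the weighted sum would be chosen; the balance between faithful performance prediction and correct algorithm orderings is controlled by `lam`.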