A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination that follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.