We study the fundamental online k-server problem in a learning-augmented setting. While in the traditional online model an algorithm has no information about the request sequence, we assume that it is given some advice (e.g., machine-learned predictions) on its decisions. There is, however, no guarantee on the quality of the predictions, which may be far from correct. Our main result is a learning-augmented variant of the well-known Double Coverage algorithm for k-server on the line (Chrobak et al., SIDMA 1991), into which we integrate the predictions as well as our trust in their quality. We give an error-dependent competitive ratio, a function of a user-defined confidence parameter, that interpolates smoothly between an optimal consistency, i.e., the performance when all predictions are correct, and the best possible robustness regardless of the prediction quality. Given good predictions, we improve upon known lower bounds for online algorithms without advice. We further show that, for any k, our algorithm achieves an almost optimal consistency-robustness tradeoff within the class of deterministic algorithms that are local and memoryless. Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm; remarkably, that algorithm crucially exploits memory, whereas ours is memoryless. Finally, we demonstrate in experiments the practicality and superior performance of our algorithm on real-world data.
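For illustration, below is a minimal Python sketch of one step of the classic Double Coverage rule on the line, the baseline algorithm cited above. It shows only the unaugmented rule; the learning-augmented variant additionally biases these movements according to the prediction and the confidence parameter, which this sketch omits. The function and variable names are illustrative, not taken from the paper.

```python
def double_coverage_step(servers, r):
    """One step of the classic Double Coverage rule on the line
    (Chrobak et al., SIDMA 1991): if the request r lies outside the
    convex hull of the servers, the nearest server moves to r; if it
    lies between two servers, both adjacent servers move towards r at
    equal speed until one reaches it.

    `servers` is a list of positions on the real line; returns the
    updated (sorted) positions and the movement cost of this step."""
    servers = sorted(servers)
    left = [s for s in servers if s <= r]
    right = [s for s in servers if s >= r]
    if not left:                       # r left of all servers: move nearest
        cost = servers[0] - r
        servers[0] = r
        return servers, cost
    if not right:                      # r right of all servers: move nearest
        cost = r - servers[-1]
        servers[-1] = r
        return servers, cost
    sl, sr = left[-1], right[0]        # servers adjacent to the request
    if sl == r or sr == r:             # a server already covers r
        return servers, 0.0
    d = min(r - sl, sr - r)            # both move d; the nearer one reaches r
    il, ir = len(left) - 1, len(left)  # indices of sl and sr in sorted order
    servers[il] += d
    servers[ir] -= d
    return servers, 2 * d
```

For example, with servers at positions [0, 10] and a request at 3, both adjacent servers move distance 3: the left server reaches the request at 3, the right server ends at 7, and the step costs 6.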