The design of effective online caching policies is an increasingly important problem for content distribution networks, online social networks, and edge computing services, among other areas. This paper proposes a new algorithmic toolbox for tackling this problem through the lens of optimistic online learning. We build upon the Follow-the-Regularized-Leader (FTRL) framework, which we extend here to incorporate predictions of the file requests, and we design online caching algorithms for bipartite networks with fixed-size caches or elastic leased caches subject to time-average budget constraints. The predictions are provided by a content recommendation system that influences the users' viewing activity and hence can naturally reduce the caching network's uncertainty about future requests. We prove that the proposed optimistic learning caching policies can achieve sub-zero performance loss (regret) for perfect predictions, and maintain the best achievable regret bound $O(\sqrt{T})$ even for arbitrarily bad predictions. The performance of the proposed algorithms is evaluated with detailed trace-driven numerical tests.
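To make the algorithmic idea concrete, below is a minimal Python sketch of an optimistic FTRL update, not the paper's algorithm: it handles a single fractional cache rather than the bipartite network, uses a fixed Euclidean regularizer (parameter `sigma`) instead of the prediction-adaptive regularization that underlies the stated regret bounds, and feeds in synthetic Zipf requests with a hypothetical recommender hint. The names `OptimisticFTRLCache` and `project_capped_simplex` are illustrative assumptions.

```python
import numpy as np


def project_capped_simplex(y, capacity, tol=1e-9):
    """Euclidean projection onto {x in [0,1]^N : sum(x) <= capacity}."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= capacity:
        return x
    # Bisect on the dual variable theta of the capacity constraint.
    lo, hi = 0.0, float(y.max())
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.clip(y - theta, 0.0, 1.0).sum() > capacity:
            lo = theta
        else:
            hi = theta
    return np.clip(y - hi, 0.0, 1.0)


class OptimisticFTRLCache:
    """Sketch of an optimistic FTRL policy for one fractional cache of capacity C."""

    def __init__(self, num_files, capacity, sigma=1.0):
        self.N, self.C, self.sigma = num_files, capacity, sigma
        self.grad_sum = np.zeros(num_files)  # aggregated utility gradients g_1 + ... + g_t
        self.x = np.full(num_files, capacity / num_files)  # uniform initial allocation

    def decide(self, predicted_grad):
        # Optimistic FTRL step with a fixed Euclidean regularizer (sigma/2)||x||^2:
        # x_{t+1} = argmax_x <G_t + g~_{t+1}, x> - (sigma/2)||x||^2 over the capacity set,
        # which reduces to projecting (G_t + g~_{t+1}) / sigma onto that set.
        self.x = project_capped_simplex((self.grad_sum + predicted_grad) / self.sigma, self.C)
        return self.x

    def observe(self, true_grad):
        # Once the request is revealed, fold its utility gradient into the aggregate.
        self.grad_sum += true_grad


# Toy usage: Zipf-like requests, and a recommender hint that is correct 80% of the time.
rng = np.random.default_rng(0)
N, C, T = 50, 5, 200
cache = OptimisticFTRLCache(N, C)
hits = 0.0
for t in range(T):
    true_req = int(rng.zipf(1.3)) % N
    hint = true_req if rng.random() < 0.8 else int(rng.integers(N))
    pred = np.zeros(N)
    pred[hint] = 1.0                 # predicted utility gradient from the recommender
    x = cache.decide(pred)
    g = np.zeros(N)
    g[true_req] = 1.0                # realized utility gradient: 1 for the requested file
    hits += x[true_req]
    cache.observe(g)
print(f"average fractional hit ratio: {hits / T:.3f}")
```

Intuitively, the optimistic step lets an accurate prediction shape the decision before the request arrives, which is the mechanism behind the sub-zero regret under perfect predictions; with the paper's prediction-adaptive regularization the policy retains the $O(\sqrt{T})$ guarantee when predictions are arbitrarily bad.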