In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed a priori, as in existing policies. We formulate page request prediction as a categorical time series forecasting task. Our method then queries the learned forecaster for the next $k$ predicted page references to better approximate the optimal B\'el\'ady replacement algorithm. We implement several forecasting techniques based on advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%, respectively.
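To illustrate the idea, here is a minimal sketch of B\'el\'ady-style eviction driven by predicted future references. The helper names (`evict_victim`, `access`, `predicted_refs`) are illustrative assumptions, not the authors' API; `predicted_refs` stands in for the next-$k$ output of the learned forecaster.

```python
def evict_victim(cache, predicted_refs):
    """Pick the cached page to evict: the one whose next predicted
    reference lies farthest in the future (or is never predicted again),
    mirroring Belady's optimal rule but with forecasts instead of an oracle."""
    victim, farthest = None, -1
    for page in cache:
        try:
            nxt = predicted_refs.index(page)  # position of next predicted use
        except ValueError:
            return page  # never predicted again: ideal victim
        if nxt > farthest:
            victim, farthest = page, nxt
    return victim

def access(cache, capacity, page, predicted_refs):
    """Serve one request; on a miss with a full cache, evict as above.
    Returns True on a hit, False on a miss."""
    if page in cache:
        return True
    if len(cache) >= capacity:
        cache.remove(evict_victim(cache, predicted_refs))
    cache.add(page)
    return False
```

With perfect predictions this reduces to B\'el\'ady's algorithm; in practice the forecaster's accuracy determines how closely the policy approximates the optimum.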