In an $\alpha$-way set-associative cache, the cache is partitioned into disjoint sets of size $\alpha$, and each item can only be cached in one set, typically selected via a hash function. Set-associative caches are widely used and have many benefits over fully associative caches, e.g., in terms of latency or concurrency, but they often incur more cache misses. As the set size $\alpha$ decreases, these benefits increase, but the paging cost worsens. In this paper we characterize the performance of an $\alpha$-way set-associative LRU cache of total size $k$, as a function of $\alpha = \alpha(k)$. Assuming that sets are selected using a fully random hash function, we prove the following:

- For $\alpha = \omega(\log k)$, the paging cost of an $\alpha$-way set-associative LRU cache is within an additive $O(1)$ of that of a fully associative LRU cache of size $(1-o(1))k$, with probability $1 - 1/\operatorname{poly}(k)$, for all request sequences of length $\operatorname{poly}(k)$.
- For $\alpha = o(\log k)$, and for all $c = O(1)$ and $r = O(1)$, the paging cost of an $\alpha$-way set-associative LRU cache is not within a factor $c$ of that of a fully associative LRU cache of size $k/r$, for some request sequence of length $O(k^{1.01})$.
- For $\alpha = \omega(\log k)$, if the hash function can be occasionally changed, the paging cost of an $\alpha$-way set-associative LRU cache is within a factor $1 + o(1)$ of that of a fully associative LRU cache of size $(1-o(1))k$, with probability $1 - 1/\operatorname{poly}(k)$, for request sequences of arbitrary (e.g., super-polynomial) length.

Some of our results generalize to other paging algorithms besides LRU, such as least-frequently-used (LFU).
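The cache model above can be sketched in code. The following is a minimal illustrative simulation, not the paper's construction: it partitions a cache of total size $k$ into $k/\alpha$ sets of size $\alpha$, routes each item to a set via a salted hash (a stand-in for the fully random hash function the analysis assumes), runs LRU within each set, and counts misses. The class name `SetAssociativeLRU` and the seeded-hash choice are our own for this sketch.

```python
import random
from collections import OrderedDict

class SetAssociativeLRU:
    """An alpha-way set-associative LRU cache of total size k (k/alpha sets).

    Each item is mapped to one set by a hash function; within a set,
    eviction is LRU. A salted Python hash stands in for the fully
    random hash function assumed in the analysis.
    """
    def __init__(self, k, alpha, seed=0):
        assert k % alpha == 0, "total size must be a multiple of the set size"
        self.alpha = alpha
        self.num_sets = k // alpha
        # OrderedDict keeps per-set items in recency order (oldest first).
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.salt = random.Random(seed).getrandbits(64)
        self.misses = 0

    def _set_index(self, item):
        # Stand-in for a fully random hash selecting the item's set.
        return hash((self.salt, item)) % self.num_sets

    def access(self, item):
        s = self.sets[self._set_index(item)]
        if item in s:
            s.move_to_end(item)        # hit: mark as most recently used
        else:
            self.misses += 1           # miss: evict the set's LRU item if full
            if len(s) >= self.alpha:
                s.popitem(last=False)
            s[item] = True

# Example: cycle k+1 distinct items twice through a cache of size k.
cache = SetAssociativeLRU(k=64, alpha=8)
for _ in range(2):
    for item in range(65):
        cache.access(item)
print(cache.misses)  # first pass: 65 cold misses; second pass depends on the hash
```

Note that with $\alpha = k$ there is a single set and the simulation degenerates to a fully associative LRU cache, which is the baseline the bounds above compare against.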