We study the multiplication of two $n \times n$ matrices, $A$ and $B$, producing an $n \times n$ matrix $C$. Our goal is to compute $C$ as efficiently as possible given a cache: a small, fast memory holding a limited set of data values on which we perform elementary operations (additions, multiplications, etc.). In other words, we attempt to reuse as much data from $A$, $B$, and $C$ as possible during the computation, or equivalently, to serve as many accesses as possible from the fast cache. First, we introduce the matrix-matrix multiplication algorithm. Second, we present a standard two-level memory model that simulates the architecture of a computer, and we explain the LRU (Least Recently Used) cache policy, which is standard in most computers. Third, we introduce a basic Cache Simulator with $\mathcal{O}(M)$ time complexity per access, which limits us to small cache sizes $M$. We then discuss and model the LFU (Least Frequently Used) cache policy and an explicitly controlled cache policy. Finally, we present the main result of this paper, the $\mathcal{O}(1)$ Cache Simulator, and use it to compare experimentally the savings in time, energy, and communication achieved by the ideal cache-efficient algorithm for matrix-matrix multiplication. The Cache Simulator measures the amount of data movement between the main memory and the cache of the computer. One finding of this project is that, in some cases, there is a significant gap in communication between the LRU cache policy and explicit cache control. We propose to close this gap by ``tricking'' the LRU policy: we update the timestamps of the data we want to keep in cache (namely, the entries of matrix $C$). This gives us the benefits of an explicit cache policy while remaining within the LRU paradigm, the realistic policy on a CPU.
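The timestamp "trick" above can be illustrated with a minimal LRU simulator. The following sketch is hypothetical (the class and method names are ours, not from the paper's simulator): it counts misses as a proxy for communication, and a `touch` method refreshes the timestamp of an entry we want to pin in cache, exactly as proposed for the entries of $C$.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache simulator (illustrative sketch, not the paper's code).

    Misses count the number of transfers from main memory to cache.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # insertion order encodes recency
        self.misses = 0

    def access(self, key):
        if key in self.lines:
            self.lines.move_to_end(key)          # hit: refresh recency
        else:
            self.misses += 1                     # miss: fetch from main memory
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)   # evict least recently used
            self.lines[key] = None

    def touch(self, key):
        # The "trick": refresh the timestamp of data we want to keep
        # (e.g. an entry of C) without performing a real access.
        if key in self.lines:
            self.lines.move_to_end(key)
```

For example, with capacity 2 the access sequence `a, b, c, a` incurs 4 misses under plain LRU (`a` is evicted before its reuse), but touching `a` before `c` arrives keeps it resident and the same sequence incurs only 3 misses.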