We study the problem of multi-access coded caching (MACC): a central server has $N$ files, there are $K$ ($K \leq N$) caches each of which stores $M$ out of the $N$ files, and there are $K$ users each of which demands one of the $N$ files and accesses $z$ caches. The objective is to jointly design the placement, the delivery, and the user-to-cache association so as to minimize the achievable rate. This problem has been extensively studied in the literature under the assumption that each user accesses only one cache. When a user accesses more than one cache, however, the problem has been studied only under the assumption that each user accesses $z$ consecutive caches with a cyclic wrap-around over the boundary. A natural question is how other user-to-cache associations fare against the cyclic wrap-around association. A general user-to-cache association can be described by a bipartite graph. We identify a class of bipartite graphs that, when used as the user-to-cache association, achieves either a lower rate or a lower subpacketization than all existing MACC schemes that use the cyclic wrap-around association. The placement and delivery strategy of our MACC scheme is constructed using a combinatorial structure called a maximal cross resolvable design.
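To make the cyclic wrap-around association concrete, the following minimal Python sketch (our own illustration, not part of the paper's construction) builds the biadjacency matrix of the bipartite graph in which user $k$ accesses the $z$ consecutive caches $k, k+1, \ldots, k+z-1$ modulo $K$; any other $0$-$1$ matrix with $z$ ones per row would describe a general user-to-cache association.

```python
import numpy as np

def cyclic_association(K: int, z: int) -> np.ndarray:
    """Biadjacency matrix A (K users x K caches) of the cyclic wrap-around
    user-to-cache association: A[k, c] = 1 iff user k accesses cache c,
    i.e. iff c is one of the caches k, k+1, ..., k+z-1 (mod K)."""
    A = np.zeros((K, K), dtype=int)
    for k in range(K):
        for j in range(z):
            A[k, (k + j) % K] = 1
    return A

# Example: K = 6 users/caches, each user accesses z = 2 consecutive caches.
print(cyclic_association(6, 2))
```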