This work presents a new way of exploiting non-uniform file popularity in coded caching networks. Focusing on a fully-connected, fully-interfering wireless setting with multiple cache-enabled transmitters and receivers, we show how non-uniform file popularity can be used very efficiently to accelerate the impact of transmitter-side data redundancy on receiver-side coded caching. This approach is motivated by the recent discovery that, under any realistic file-size constraint, having content appear in multiple transmitters can in fact dramatically boost the speed-up factor attributed to coded caching. We formulate an optimization problem that exploits file popularity to optimize the placement of files at the transmitters. We then provide a proof that significantly reduces the variable search space, and we propose a new search algorithm that solves the problem at hand. We also prove an analytical performance upper bound, which our algorithm in fact meets in the regime of many receivers. Our work reflects the benefits of allocating higher cache redundancy to more popular files, but it also reveals a law of diminishing returns where, for example, very popular files may in fact benefit from minimum redundancy. In the end, this work shows that, in the context of coded caching, employing multiple transmitters can be a catalyst in fully exploiting file popularity, as it avoids various asymmetry complications that appear when file popularity is used to alter the receiver-side cache placement.
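To give a rough sense of the transmitter-side placement trade-off described above, the following is a minimal sketch under a hypothetical toy model, not the paper's actual formulation or algorithm: files with a Zipf-like popularity profile, a fixed number of transmitters, a joint cache budget measured in file copies, and per-file redundancy allocated greedily under an assumed concave benefit that mimics diminishing returns. The names (`zipf_popularities`, `greedy_redundancy`) and the benefit function are illustrative assumptions; in particular, this toy rule does not capture the paper's finding that very popular files may end up with minimum redundancy.

```python
# Hypothetical toy sketch (not the paper's formulation or algorithm):
# allocate per-file transmitter redundancy r_n in {1, ..., n_transmitters}
# under a joint cache budget, favoring popular files but with an assumed
# concave (diminishing-returns) benefit per additional copy.

import math


def zipf_popularities(n_files, alpha=0.8):
    """Zipf-like popularity profile, normalized to sum to 1 (assumption)."""
    raw = [1.0 / (i + 1) ** alpha for i in range(n_files)]
    total = sum(raw)
    return [p / total for p in raw]


def greedy_redundancy(popularity, n_transmitters, cache_budget):
    """Greedily raise per-file redundancy where the marginal gain is largest.

    cache_budget: total number of file copies the transmitters can hold jointly.
    The marginal gain p_n * (log(r + 1) - log(r)) is purely illustrative.
    """
    n_files = len(popularity)
    redundancy = [1] * n_files          # every file stored at least once
    budget_left = cache_budget - n_files
    while budget_left > 0:
        best_file, best_gain = None, 0.0
        for n, p in enumerate(popularity):
            r = redundancy[n]
            if r >= n_transmitters:
                continue                # cannot replicate beyond all transmitters
            gain = p * (math.log(r + 1) - math.log(r))  # concave benefit
            if gain > best_gain:
                best_file, best_gain = n, gain
        if best_file is None:
            break                       # no file can take another copy
        redundancy[best_file] += 1
        budget_left -= 1
    return redundancy


if __name__ == "__main__":
    pop = zipf_popularities(n_files=10)
    print(greedy_redundancy(pop, n_transmitters=4, cache_budget=20))
```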