In this paper, we investigate the transmission delay of cache-aided broadcast networks with user cooperation. Novel coded caching schemes are proposed for both centralized and decentralized caching settings, by efficiently exploiting time and cache resources and creating parallel data delivery from the server and users. We derive a lower bound on the transmission delay and show that the proposed centralized coded caching scheme is \emph{order-optimal} in the sense that it achieves a constant multiplicative gap to the lower bound. Our decentralized coded caching scheme is also order-optimal when each user's cache size is larger than the threshold $N(1-\sqrt[{K-1}]{ {1}/{(K+1)}})$ (which approaches 0 as $K\to \infty$), where $K$ is the total number of users and $N$ is the size of the file library. Moreover, for both the centralized and decentralized caching settings, our schemes obtain an additional \emph{cooperation gain} offered by user cooperation and an additional \emph{parallel gain} offered by the parallel transmission between the server and users. It is shown that, to reduce the transmission delay, the number of users sending signals in parallel should be chosen appropriately according to the users' cache size; always letting more users send information in parallel could increase the transmission delay.
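For intuition (a brief check in the abstract's notation, not taken from the paper's derivations), the claim that the threshold vanishes as $K\to\infty$ follows from writing the $(K-1)$-th root as an exponential:
\[
  N\Bigl(1-\sqrt[K-1]{\tfrac{1}{K+1}}\Bigr)
  = N\Bigl(1-e^{-\frac{\ln(K+1)}{K-1}}\Bigr)
  \;\longrightarrow\; N\bigl(1-e^{0}\bigr)=0
  \quad \text{as } K\to\infty,
\]
since $\ln(K+1)/(K-1)\to 0$ as $K\to\infty$.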