This work studies the $K$-user broadcast channel with $\Lambda$ caches in the scenario where the association between users and caches is random, i.e., where each user appears within the coverage area of -- and is subsequently assisted by -- a specific cache according to a given probability distribution. The caches are subject to a cumulative memory constraint equal to $t$ times the size of the library. We propose a scheme consisting of three phases -- storage allocation, content placement, and delivery -- and show that an optimized storage allocation across the caches, combined with a modified uncoded cache placement and delivery strategy, alleviates the adverse effect of cache-load imbalance by significantly reducing the multiplicative performance deterioration caused by the randomness. In a nutshell, our scheme substantially mitigates the impact of cache-load imbalance in stochastic networks and, compared to the best-known state of the art, also mitigates the well-known subpacketization bottleneck: applied to deterministic settings, it achieves the same delivery time -- which was proven to be close to optimal for bounded values of $t$ -- with an exponential reduction in subpacketization.