With the continuing explosion of data traffic, delivering packets from data servers to end users places increasing stress on both the fronthaul and backhaul of mobile networks. To mitigate this problem, caching popular content closer to end users has emerged as an effective way to reduce network congestion and improve user experience. To find optimal content caching locations, many conventional approaches construct mixed integer linear programming (MILP) models. However, such methods may fail to support online decision making due to the inherent curse of dimensionality. In this paper, a novel framework for proactive caching is proposed. The framework merges model-based optimization with data-driven techniques by transforming the optimization problem into a grayscale image. To enable parallel training and keep the design simple, the proposed MILP model is first decomposed into a number of sub-problems, and convolutional neural networks (CNNs) are then trained to predict the content caching locations of these sub-problems. Because the decomposition neglects interactions among sub-problems, the CNN outputs risk being infeasible solutions. Two algorithms are therefore provided: the first uses the CNN predictions as an extra constraint to reduce the number of decision variables; the second employs the CNN outputs to accelerate local search. Numerical results show that the proposed scheme reduces computation time by 71.6% at only a 0.8% additional performance cost relative to the MILP solution, providing high-quality real-time decision making.
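To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of the core idea: encoding one sub-problem's parameters as a single-channel "grayscale image" and using a small CNN to predict per-node caching decisions. The problem dimensions, the demand-matrix encoding, and the network architecture are all illustrative assumptions; the paper's actual MILP-to-image mapping and CNN design may differ.

```python
# Illustrative sketch only: grayscale encoding of a caching sub-problem plus a
# small CNN that predicts which contents to cache at which nodes.
import torch
import torch.nn as nn

N_NODES, N_CONTENTS = 16, 16  # hypothetical sub-problem size

class CachePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1 channel = "grayscale" input
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * N_NODES * N_CONTENTS, N_NODES * N_CONTENTS),
        )

    def forward(self, x):
        # x: (batch, 1, N_NODES, N_CONTENTS), instance parameters scaled into [0, 1]
        logits = self.net(x)
        # Probability that content c should be cached at node n
        return torch.sigmoid(logits).view(-1, N_NODES, N_CONTENTS)

# Usage: normalize instance data (e.g. demand rates) into pixel intensities.
demand_image = torch.rand(1, 1, N_NODES, N_CONTENTS)   # stand-in for real data
probs = CachePredictor()(demand_image)
tentative_plan = (probs > 0.5).int()                   # tentative 0/1 decisions

# Flavor of the first proposed algorithm (assumed thresholds): treat only
# confident predictions as extra MILP constraints, leaving uncertain variables
# to the solver or to local search so infeasible CNN outputs can be repaired.
confident = (probs > 0.9) | (probs < 0.1)
```

Since the thresholded CNN output may violate capacity or routing constraints, fixing only the confident variables shrinks the decision space while leaving the solver room to restore feasibility, which is the role of the two repair algorithms described above.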