Controlling antenna tilts in cellular networks is imperative to reach an efficient trade-off between network coverage and capacity. In this paper, we devise algorithms learning optimal tilt control policies from existing data (in the so-called passive learning setting) or from data actively generated by the algorithms (the active learning setting). We formalize the design of such algorithms as a Best Policy Identification (BPI) problem in Contextual Linear Multi-Armed Bandits (CL-MAB). An arm represents an antenna tilt update; the context captures current network conditions; the reward corresponds to an improvement in performance, mixing coverage and capacity; and the objective is to identify, with a given level of confidence, an approximately optimal policy (a function mapping the context to an arm with maximal reward). For CL-MAB in both active and passive learning settings, we derive information-theoretical lower bounds on the number of samples required by any algorithm returning an approximately optimal policy with a given level of certainty, and devise algorithms achieving these fundamental limits. We apply our algorithms to the Remote Electrical Tilt (RET) optimization problem in cellular networks, and show that they can produce an optimal tilt update policy using far fewer data samples than naive or existing rule-based learning algorithms.
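To make the CL-MAB formulation concrete, the following is a minimal sketch (not the paper's algorithm) of the passive-learning setting with a linear reward model: arms stand in for tilt updates, contexts for network conditions, and a greedy policy over a least-squares estimate maps each context to the arm with maximal estimated reward. All dimensions, feature maps, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 tilt-update arms (down / keep / up), 4-dim context.
n_arms, d_ctx = 3, 4
d = n_arms * d_ctx  # joint feature dimension (one block per arm)

def features(context, arm):
    """Illustrative joint feature map phi(x, a): context copied into arm's block."""
    phi = np.zeros(d)
    phi[arm * d_ctx:(arm + 1) * d_ctx] = context
    return phi

# Unknown reward parameter: expected reward is theta_star @ phi(x, a).
theta_star = rng.normal(size=d)

# Passive setting: a fixed logged dataset of (context, arm, noisy reward),
# with arms chosen uniformly at random by the logging policy.
X, y = [], []
for _ in range(2000):
    x = rng.normal(size=d_ctx)
    a = rng.integers(n_arms)
    phi = features(x, a)
    X.append(phi)
    y.append(phi @ theta_star + 0.1 * rng.normal())
X, y = np.array(X), np.array(y)

# Least-squares estimate of theta; the returned policy is greedy with
# respect to the estimated rewards.
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

def policy(context):
    return int(np.argmax([features(context, a) @ theta_hat
                          for a in range(n_arms)]))

# Evaluate: how often does the learned policy pick the truly optimal arm?
test_ctx = rng.normal(size=(500, d_ctx))
opt = [int(np.argmax([features(x, a) @ theta_star for a in range(n_arms)]))
       for x in test_ctx]
accuracy = float(np.mean([policy(x) == o for x, o in zip(test_ctx, opt)]))
print(accuracy)
```

With enough logged samples relative to the noise, the greedy policy agrees with the true optimal arm on nearly all contexts; the sample-complexity lower bounds in the paper characterize exactly how many such samples any algorithm needs.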