Improving learning efficiency is paramount for learning resource allocation with deep neural networks (DNNs) in wireless communications over highly dynamic environments. Incorporating domain knowledge into learning is a promising way of dealing with this issue, and it is an emerging topic in the wireless community. In this article, we first briefly summarize two classes of approaches to using domain knowledge: introducing mathematical models or prior knowledge into deep learning. Then, we consider a kind of symmetric prior, permutation equivariance, which widely exists in wireless tasks. To explain how such a generic prior can be harnessed to improve learning efficiency, we resort to ranking, which jointly sorts the input and output of a DNN. We use power allocation among subcarriers, probabilistic content caching, and interference coordination to illustrate the improvement in learning efficiency achieved by exploiting the property. From the case study, we find that the number of training samples required to achieve a given level of system performance decreases with the number of subcarriers or contents, owing to an interesting phenomenon: "sample hardening". Simulation results show that the number of training samples, the number of free parameters in the DNNs, and the training time can be reduced dramatically by harnessing the prior knowledge. The samples required to train a DNN after ranking can be reduced by a factor of $15 \sim 2{,}400$ to achieve the same system performance as the counterpart without using the prior.
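To make the ranking idea concrete, the following is a minimal sketch of how jointly sorting the input and output of a DNN can exploit permutation equivariance in a task such as power allocation among subcarriers. The function and variable names (`ranked_inference`, `toy_dnn`, `channel_gains`) are illustrative assumptions, not from the article; `toy_dnn` is a hypothetical stand-in for a trained network.

```python
import numpy as np

def ranked_inference(dnn, channel_gains):
    """Sort the input, apply the DNN, then undo the sort on the output.

    Because the resource-allocation policy is permutation equivariant,
    the DNN only ever needs to learn the mapping on sorted inputs,
    which shrinks the effective input space it must cover with samples.
    """
    # Sort inputs in descending order and remember the permutation.
    order = np.argsort(-channel_gains)
    sorted_input = channel_gains[order]

    # The DNN sees only sorted inputs at both training and inference.
    sorted_output = dnn(sorted_input)

    # Invert the permutation so outputs align with the original subcarriers.
    output = np.empty_like(sorted_output)
    output[order] = sorted_output
    return output

# Hypothetical stand-in for a trained DNN: normalizes gains into a
# power allocation that sums to one (for illustration only).
toy_dnn = lambda g: g / g.sum()

gains = np.array([0.2, 1.5, 0.7, 0.9])
powers = ranked_inference(toy_dnn, gains)
```

Because the sorting permutation is recorded and inverted after the forward pass, permuting the subcarriers permutes the allocation identically, which is exactly the equivariance property the prior encodes.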