In split machine learning (ML), different partitions of a neural network (NN) are executed on different computing nodes, which incurs substantial communication cost. To ease this burden, over-the-air computation (OAC) can efficiently carry out all or part of the computation simultaneously with communication. We describe the implementation of the proposed system over a wireless network and formulate the corresponding problem. In particular, we show that the inter-layer connections of a NN of any size can be mathematically decomposed into a set of linear precoding and combining transformations over MIMO channels. Consequently, the precoding matrix at the transmitter and the combining matrix at the receiver of each MIMO link, together with the channel matrix itself, can jointly serve as a fully connected layer of the NN. We also describe how the proposed scheme generalizes to conventional NNs. Finally, we extend the scheme to the widely used convolutional neural networks and demonstrate its effectiveness under both static and quasi-static memory channel conditions through comprehensive simulations. In such a split ML system, the precoding and combining matrices are treated as trainable parameters, while the MIMO channel matrix is treated as an unknown (implicit) parameter.
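The core idea above — that a fully connected layer can be realized as the cascade of a trainable precoder, a fixed (implicitly known) MIMO channel, and a trainable combiner — can be sketched numerically. The snippet below is a minimal toy illustration, not the paper's algorithm: all dimensions, the target layer `W_target`, and the plain gradient-descent loop are illustrative assumptions. It trains only the precoding matrix `W_t` and combining matrix `W_r` so that the effective layer `W_r @ H @ W_t` approximates a desired weight matrix, while the channel `H` stays fixed throughout, mirroring its role as an implicit parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: the layer maps d_in -> d_out inputs/outputs
# through an n_t-antenna transmitter and n_r-antenna receiver.
d_in, d_out, n_t, n_r = 4, 3, 6, 6

# Fixed MIMO channel matrix (sampled once to simulate the physical link;
# it is never updated by training, only differentiated through).
H = rng.normal(size=(n_r, n_t)) / np.sqrt(n_t)

# Trainable precoding (transmit side) and combining (receive side) matrices.
W_t = rng.normal(scale=0.3, size=(n_t, d_in))
W_r = rng.normal(scale=0.3, size=(d_out, n_r))

# Hypothetical fully connected layer we would like the link to implement.
W_target = rng.normal(size=(d_out, d_in))

def effective_layer(W_r, H, W_t):
    """Cascade combining @ channel @ precoding acts as one FC layer."""
    return W_r @ H @ W_t

# Gradient descent on || W_r H W_t - W_target ||_F^2 over W_r and W_t only.
lr = 2e-3
for _ in range(30000):
    E = effective_layer(W_r, H, W_t) - W_target  # residual
    grad_W_r = 2.0 * E @ (H @ W_t).T             # gradient w.r.t. W_r
    grad_W_t = 2.0 * (W_r @ H).T @ E             # gradient w.r.t. W_t
    W_r -= lr * grad_W_r
    W_t -= lr * grad_W_t

err = np.linalg.norm(effective_layer(W_r, H, W_t) - W_target)
print(err)
```

Because `H` has full rank almost surely and the antenna counts exceed the layer dimensions, the cascade can represent the target layer exactly, so the residual norm shrinks toward zero; in a real system the same matrices would instead be updated by backpropagating the NN's task loss through the channel.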