Today's intelligent applications achieve high accuracy using machine learning (ML) techniques such as deep neural networks (DNNs). Traditionally, in remote DNN inference, an edge device transmits raw data to a remote node that performs the inference task. However, this may incur high transmission energy costs and put data privacy at risk. In this paper, we propose a technique that reduces the total energy bill at the edge device by combining model compression with a time-varying model split between the edge and remote nodes. The time-varying representation accounts for time-varying channels and can significantly reduce the total energy at the edge device while maintaining high accuracy (low loss). We evaluate our approach on an image classification task using the MNIST dataset, simulating the system environment as a trajectory navigation scenario to emulate different channel conditions. Numerical simulations show that, compared to the considered baselines, our proposed solution achieves minimal energy consumption and $CO_2$ emissions while exhibiting robust performance across different channel conditions and bandwidth regimes.