Owing to the resource constraints of edge devices and the diverse characteristics of deep neural network (DNN) models, optimizing DNN inference on edge devices for both energy consumption and inference latency is a significant challenge. Beyond the dynamic voltage and frequency scaling (DVFS) technique, the edge-cloud architecture provides a collaborative approach to efficient DNN inference. However, current edge-cloud collaborative inference methods do not jointly optimize the various compute resources on edge devices. We therefore propose DVFO, a novel DVFS-enabled edge-cloud collaborative inference framework that jointly optimizes DVFS and offloading parameters via deep reinforcement learning (DRL). Specifically, DVFO automatically co-optimizes 1) the CPU, GPU, and memory frequencies of edge devices, and 2) the feature maps to be offloaded to cloud servers. In addition, it leverages a thinking-while-moving concurrent mechanism to accelerate the DRL learning process, and a spatial-channel attention mechanism to identify DNN feature maps of secondary importance for workload offloading. This approach improves energy efficiency and inference latency for different DNN models under various edge-cloud network conditions. Experimental results on multiple datasets show that DVFO reduces average energy consumption by 33% compared with state-of-the-art schemes, and achieves up to a 54% reduction in end-to-end inference latency.
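To make the joint action space concrete, the sketch below shows one plausible way a DRL agent's continuous action could be decoded into DVFS settings plus an offloading fraction, and how feature-map channels of secondary importance might be selected for offloading. This is a minimal illustration, not the paper's implementation: the frequency tables are hypothetical (loosely modeled on Jetson-class devices), and mean absolute activation stands in for the actual spatial-channel attention scores.

```python
import numpy as np

# Hypothetical discrete frequency levels (MHz); real edge devices
# expose similar but device-specific tables via DVFS governors.
CPU_FREQS = [422, 729, 1036, 1344, 1651, 1958]
GPU_FREQS = [306, 510, 714, 918, 1122]
MEM_FREQS = [800, 1331, 1600, 1866]

def decode_action(action):
    """Map a continuous DRL action in [0, 1]^4 to concrete settings:
    CPU/GPU/memory frequencies and the fraction of feature-map
    channels to offload to the cloud."""
    a = np.clip(np.asarray(action, dtype=float), 0.0, 1.0)
    cpu = CPU_FREQS[int(a[0] * (len(CPU_FREQS) - 1))]
    gpu = GPU_FREQS[int(a[1] * (len(GPU_FREQS) - 1))]
    mem = MEM_FREQS[int(a[2] * (len(MEM_FREQS) - 1))]
    offload_ratio = float(a[3])
    return cpu, gpu, mem, offload_ratio

def select_offload_channels(feature_map, offload_ratio):
    """Rank channels of a (C, H, W) feature map by a simple importance
    proxy (mean absolute activation, standing in for spatial-channel
    attention) and offload the LEAST important fraction to the cloud,
    keeping the most informative channels on the edge device."""
    importance = np.abs(feature_map).mean(axis=(1, 2))  # shape (C,)
    order = np.argsort(importance)  # ascending: least important first
    k = int(round(offload_ratio * feature_map.shape[0]))
    return order[:k]  # indices of channels to offload
```

Usage: an agent action of `[1.0, 1.0, 1.0, 0.25]` decodes to the highest CPU/GPU/memory frequencies with a quarter of the channels offloaded; the selector then picks the quarter of channels with the weakest activations.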