As the number of edge devices with computing resources (e.g., embedded GPUs, mobile phones, and laptops) increases, recent studies have demonstrated that it can be beneficial to run convolutional neural network (CNN) inference collaboratively across more than one edge device. However, these studies make strong assumptions about the devices' conditions, and their application is far from practical. In this work, we propose a general method, called DistrEdge, to generate CNN inference distribution strategies in environments with multiple IoT edge devices. By accounting for heterogeneity in devices, network conditions, and the nonlinear characteristics of CNN computation, DistrEdge adapts to a wide range of cases (e.g., different network conditions and various device types) using deep reinforcement learning. We use the latest embedded AI computing devices (e.g., NVIDIA Jetson products) to construct heterogeneous device setups in our experiments. Based on our evaluations, DistrEdge properly adjusts the distribution strategy according to the devices' computing characteristics and the network conditions, achieving a 1.1x to 3x speedup over state-of-the-art methods.