We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog), and end devices. While able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and on end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to their distributed nature, DDNNs enhance sensor fusion, system fault tolerance, and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize the usefulness of the extracted features that are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show that a DDNN can exploit the geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, a DDNN locally processes most sensor data on end devices while achieving high accuracy, and reduces the communication cost by a factor of over 20.
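To make the mapping and joint training described above more concrete, the following is a minimal sketch of a two-tier DDNN-style model in PyTorch. The abstract does not specify the layer sizes, the loss weighting, or the criterion for deciding when an end device can answer locally, so the small convolutional blocks, the weighted sum of per-exit cross-entropy losses, and the softmax-entropy threshold used here are illustrative assumptions rather than the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeviceBlock(nn.Module):
    """Shallow portion of the DNN mapped onto an end device."""
    def __init__(self, in_ch=3, feat_ch=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Local exit: a classifier attached directly to the shallow features.
        self.local_exit = nn.Linear(feat_ch, num_classes)

    def forward(self, x):
        feat = self.features(x)                        # features sent upward only if needed
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        return feat, self.local_exit(pooled)           # (intermediate features, local logits)

class CloudBlock(nn.Module):
    """Deeper portion of the DNN mapped onto the cloud."""
    def __init__(self, feat_ch=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(feat_ch, 64, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.cloud_exit = nn.Linear(64, num_classes)

    def forward(self, feat):
        return self.cloud_exit(self.features(feat).flatten(1))

def joint_loss(local_logits, cloud_logits, target, w_local=0.5, w_cloud=1.0):
    # Joint training: a weighted sum of the losses at both exit points,
    # so the shallow features stay useful for the cloud classifier.
    # (The weights here are illustrative assumptions.)
    return (w_local * F.cross_entropy(local_logits, target)
            + w_cloud * F.cross_entropy(cloud_logits, target))

@torch.no_grad()
def infer(device_net, cloud_net, x, entropy_threshold=0.5):
    """Answer locally when the device-side prediction is confident enough
    (low softmax entropy, an assumed criterion); otherwise forward the
    compact intermediate features, not the raw sensor data, to the cloud."""
    feat, local_logits = device_net(x)
    p = F.softmax(local_logits, dim=1)
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    if entropy.mean() < entropy_threshold:
        return local_logits          # resolved on the device, no upload
    return cloud_net(feat)           # offload features to the cloud exit
```

In this sketch, the communication saving comes from two sources: samples that exit locally transmit nothing, and samples that escalate transmit only the pooled intermediate features rather than raw sensor data.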