Massive amounts of data are expected to be generated by the billions of objects that form the Internet of Things (IoT). A variety of automated services, such as monitoring, will depend largely on the use of different Machine Learning (ML) algorithms. Traditionally, ML models are processed by centralized cloud data centers, with IoT readings offloaded to the cloud via multiple networking hops in the access, metro, and core layers. This approach inevitably leads to excessive networking power consumption as well as Quality-of-Service (QoS) degradation such as increased latency. Instead, in this paper, we propose a distributed ML approach in which processing can take place in intermediary devices such as IoT nodes and fog servers, in addition to the cloud. We abstract ML models into Virtual Service Requests (VSRs) that represent the multiple interconnected layers of a Deep Neural Network (DNN). Using Mixed Integer Linear Programming (MILP), we design an optimization model that allocates the layers of a DNN across a Cloud/Fog Network (CFN) in an energy-efficient way. We evaluate the impact of the DNN input distribution on the performance of the CFN and compare the energy efficiency of this approach to a baseline in which all layers of the DNNs are processed in the centralized Cloud Data Center (CDC).
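The layer-allocation problem the MILP targets can be illustrated with a small brute-force sketch. All numbers below (per-tier processing energies, per-hop networking cost, inter-layer data sizes) are hypothetical values chosen for illustration only; the paper's actual model is a MILP over the full CFN topology, not this exhaustive search:

```python
from itertools import product

# Tiers ordered from edge to cloud. Hypothetical energies (illustrative only):
# compute is cheaper per layer deeper in the network, but reaching it costs hops.
TIERS = ["iot", "fog", "cloud"]
PROC_ENERGY = {"iot": 5.0, "fog": 3.0, "cloud": 1.0}  # per-layer processing energy
HOP_ENERGY = 1.0                                      # networking energy per unit data per hop
DATA = [10, 4, 2, 1]  # data volume into layer 1, then outputs of layers 1..3 (shrinks)

def total_energy(assignment):
    """Energy of placing each DNN layer on a tier: processing plus networking."""
    proc = sum(PROC_ENERGY[t] for t in assignment)
    # Data travels from the IoT source through each layer's tier in order;
    # each inter-layer transfer costs (data volume) x (tier hops crossed).
    path = ["iot"] + list(assignment)
    net = sum(DATA[i] * abs(TIERS.index(a) - TIERS.index(b)) * HOP_ENERGY
              for i, (a, b) in enumerate(zip(path, path[1:])))
    return proc + net

def best_allocation(num_layers):
    """Exhaustively search monotone (edge-toward-cloud) placements for min energy."""
    best = None
    for combo in product(range(len(TIERS)), repeat=num_layers):
        if list(combo) != sorted(combo):  # layers only move toward the cloud
            continue
        assignment = tuple(TIERS[i] for i in combo)
        e = total_energy(assignment)
        if best is None or e < best[0]:
            best = (e, assignment)
    return best

energy, placement = best_allocation(4)
print(placement, energy)  # early layers land near the edge, later layers in the cloud
```

With these example numbers the optimum places the first two layers on the IoT node and the last two in the cloud: because early DNN layers shrink the data volume, processing them near the source avoids shipping the large raw input across the access, metro, and core hops, which is the intuition behind the distributed placement evaluated in the paper.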