Cloud-based Deep Neural Network (DNN) applications that make latency-sensitive inferences are becoming an indispensable part of Industry 4.0. Due to multi-tenancy and resource heterogeneity, both inherent to cloud computing environments, the inference time of DNN-based applications is stochastic. Such stochasticity, if not captured, can potentially lead to low Quality of Service (QoS) or even a disaster in critical sectors, such as the Oil and Gas industry. To make Industry 4.0 robust, solution architects and researchers need to understand the behavior of DNN-based applications and capture the stochasticity that exists in their inference times. Accordingly, in this study, we provide a descriptive analysis of the inference time from two perspectives. First, we perform an application-centric analysis and statistically model the execution time of four categorically different DNN applications on both the Amazon and Chameleon clouds. Second, we take a resource-centric approach and analyze a rate-based metric, in the form of Million Instructions Per Second (MIPS), for heterogeneous machines in the cloud. This non-parametric modeling, achieved via the Jackknife and Bootstrap re-sampling methods, provides confidence intervals of MIPS for heterogeneous cloud machines. The findings of this research can help researchers and cloud solution architects develop solutions that are robust against the stochastic nature of the inference time of DNN applications in the cloud, offer a higher QoS to their users, and avoid unintended outcomes.
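The non-parametric modeling mentioned above can be illustrated with a minimal sketch. The snippet below applies the Bootstrap (percentile method) and the Jackknife to a set of MIPS measurements; the data here are synthetic placeholders, not measurements from this study, and the function names are our own for illustration.

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_resamples=10_000, alpha=0.05, rng=None):
    """Percentile-method bootstrap confidence interval for a statistic."""
    rng = rng or np.random.default_rng()
    n = len(data)
    # Resample with replacement and recompute the statistic each time.
    stats = np.array([
        stat(rng.choice(data, size=n, replace=True))
        for _ in range(n_resamples)
    ])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

def jackknife_se(data, stat=np.mean):
    """Jackknife (leave-one-out) standard error of a statistic."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(42)
# Hypothetical MIPS samples for one machine type; the spread stands in for
# the multi-tenancy noise discussed in the abstract.
mips = rng.normal(loc=120_000, scale=8_000, size=50)

lo, hi = bootstrap_ci(mips, rng=rng)
print(f"95% bootstrap CI for mean MIPS: [{lo:.0f}, {hi:.0f}]")
print(f"Jackknife SE of mean MIPS: {jackknife_se(mips):.0f}")
```

Because both methods resample the observed data rather than fitting a distribution, they remain valid even when inference times do not follow a known parametric family.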