Cloud computing has become indispensable to virtually every digital service, and its usage has grown exponentially. However, the tremendous surge in demand for cloud resources threatens service availability, resulting in outages, performance degradation, load imbalance, and excessive power consumption. Existing approaches mainly address this problem by using multiple clouds and running multiple replicas of a virtual machine (VM), which incurs high operational cost. This paper proposes a Fault Tolerant Elastic Resource Management (FT-ERM) framework that addresses the aforementioned problem from a different perspective by inducing high availability in servers and VMs. Specifically, (1) an online failure predictor is developed to anticipate failure-prone VMs based on predicted resource contention; (2) the operational status of each server is monitored with the help of a power analyser, a resource estimator, and a thermal analyser to proactively identify failures caused by server overloading and overheating; and (3) failure-prone VMs are assigned to the proposed fault-tolerance unit, composed of a decision matrix and a safe box, which triggers VM migration and handles any outage beforehand while maintaining the desired level of availability for cloud users. The proposed framework is evaluated and compared against state-of-the-art approaches through experiments on two real-world datasets. FT-ERM improves service availability by up to 34.47% and reduces VM migrations and power consumption by up to 88.6% and 62.4%, respectively, compared with operation without FT-ERM.
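To make the proactive control flow concrete, the following is a minimal, illustrative sketch of the loop the abstract describes: a predictor flags failure-prone VMs from their recent resource usage, and the fault-tolerance unit moves them into a safe box pending migration. All names (`VM`, `predict_contention`, `CONTENTION_THRESHOLD`, `SafeBox`, `fault_tolerance_step`) and the moving-average predictor are assumptions for illustration, not the paper's actual design.

```python
# Illustrative sketch of an FT-ERM-style proactive loop; all names and the
# moving-average predictor are hypothetical stand-ins, not the paper's API.

from dataclasses import dataclass, field
from typing import List

CONTENTION_THRESHOLD = 0.85  # assumed utilisation level marking a VM as failure-prone

@dataclass
class VM:
    vm_id: str
    cpu_history: List[float]  # recent CPU-utilisation samples in [0, 1]

def predict_contention(vm: VM, window: int = 3) -> float:
    """Toy online predictor: moving average of recent utilisation.
    The paper uses a learned failure predictor; this only shows the interface."""
    recent = vm.cpu_history[-window:]
    return sum(recent) / len(recent)

@dataclass
class SafeBox:
    """Holds failure-prone VMs until a healthy target server is chosen."""
    held: List[VM] = field(default_factory=list)

    def admit(self, vm: VM) -> None:
        self.held.append(vm)

def fault_tolerance_step(vms: List[VM], safe_box: SafeBox) -> None:
    # Decision matrix reduced to a single rule for illustration:
    # migrate proactively when predicted contention exceeds the threshold.
    for vm in vms:
        if predict_contention(vm) > CONTENTION_THRESHOLD:
            safe_box.admit(vm)  # stand-in for triggering a live VM migration

if __name__ == "__main__":
    box = SafeBox()
    fleet = [VM("vm-1", [0.40, 0.50, 0.60]), VM("vm-2", [0.90, 0.92, 0.95])]
    fault_tolerance_step(fleet, box)
    print([vm.vm_id for vm in box.held])  # ['vm-2']
```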