Edge computing operates between the cloud and end users and strives to provide low-latency computing services to many simultaneous users. Because edge systems often operate in uncertain environments, redundantly executing a job on multiple edge nodes can reduce latency. However, edge systems have limited computing and storage resources, so directing more resources to some computing jobs either blocks the execution of others or offloads them to the cloud, which increases latency. This paper uses the average system computing time and the blocking probability to evaluate edge system performance and analyzes the optimal resource allocation accordingly. We also propose algorithms that optimize the blocking probability and the average system time. Simulation results show that both algorithms significantly outperform the benchmark under different service-time distributions, and they illustrate how the optimal replication factor changes with the system parameters.
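As a rough illustration of the trade-off described above (not the paper's algorithms), the following sketch simulates a hypothetical blocking edge system: each arriving job is replicated on a chosen number of idle edge servers and completes when its fastest replica finishes, while jobs that find too few idle servers are blocked. All parameter names, rates, and the exponential service-time assumption are illustrative choices, not values from the paper.

```python
import random

def simulate(num_servers=8, replication=2, arrival_rate=3.0,
             service_rate=1.0, num_jobs=100_000, seed=0):
    """Estimate blocking probability and mean system time for a toy edge
    system: each accepted job runs on `replication` idle servers and is
    served by the fastest copy; jobs arriving when fewer than
    `replication` servers are idle are blocked (offloaded to the cloud)."""
    rng = random.Random(seed)
    free_at = [0.0] * num_servers          # time at which each server becomes idle
    t = 0.0
    blocked = 0
    accepted = 0
    total_time = 0.0
    for _ in range(num_jobs):
        t += rng.expovariate(arrival_rate)  # Poisson arrivals
        idle = [i for i, f in enumerate(free_at) if f <= t]
        if len(idle) < replication:
            blocked += 1                    # not enough idle servers: job is blocked
            continue
        # Replicate on `replication` idle servers; the job finishes when
        # the fastest replica does, and the other replicas are cancelled.
        samples = [rng.expovariate(service_rate) for _ in range(replication)]
        finish = t + min(samples)
        accepted += 1
        total_time += finish - t
        for i in idle[:replication]:
            free_at[i] = finish
    return blocked / num_jobs, total_time / max(accepted, 1)

if __name__ == "__main__":
    # Sweep the replication factor to see how blocking probability and
    # mean system time move in opposite directions.
    for r in (1, 2, 3):
        pb, tavg = simulate(replication=r)
        print(f"r={r}: blocking probability {pb:.3f}, mean system time {tavg:.3f}")
```

Under these assumptions, a larger replication factor shortens the time of accepted jobs (fastest-of-r service) but occupies more servers per job, raising the blocking probability, which is exactly the tension the proposed optimization algorithms are meant to resolve.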