Edge devices like NVIDIA Jetson platforms now offer several on-board accelerators -- including GPU CUDA cores, Tensor Cores, and Deep Learning Accelerators (DLAs) -- which can be exploited concurrently to boost deep neural network (DNN) inference. In this paper, we extend previous work by evaluating the performance impact of running multiple instances of the ResNet50 model concurrently across these heterogeneous components. We detail the effects of varying batch sizes and hardware combinations on throughput and latency. Our expanded analysis highlights not only the benefits of combining CUDA cores and Tensor Cores, but also the performance degradation caused by resource contention when DLAs are added to the mix. These findings, together with insights on precision constraints and workload allocation challenges, motivate further exploration of intelligent scheduling mechanisms to optimize resource utilization on edge platforms.
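As a hedged illustration of how workloads are typically assigned to a specific accelerator on Jetson, the sketch below builds a TensorRT engine from an ONNX export of ResNet50 and offloads it to a DLA core with FP16 precision and GPU fallback. The file name resnet50.onnx, the input tensor name, and the batch size are illustrative assumptions, not values taken from the paper.

```python
# Sketch: building a DLA-targeted TensorRT engine for ResNet50 on Jetson.
# Assumes TensorRT 8.x, an ONNX export at "resnet50.onnx", and an input
# tensor named "input" with shape (N, 3, 224, 224); adjust for your model.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_dla_engine(onnx_path="resnet50.onnx", dla_core=0, batch=8):
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    # The DLA only executes FP16/INT8 layers, hence the FP16 flag.
    config.set_flag(trt.BuilderFlag.FP16)
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = dla_core
    # Layers the DLA cannot run fall back to the GPU (CUDA/Tensor Cores).
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

    # Pin the batch dimension via an optimization profile
    # (the tensor name "input" is an assumption).
    profile = builder.create_optimization_profile()
    shape = (batch, 3, 224, 224)
    profile.set_shape("input", shape, shape, shape)
    config.add_optimization_profile(profile)

    return builder.build_serialized_network(network, config)

if __name__ == "__main__":
    engine_bytes = build_dla_engine()
    with open("resnet50_dla0_fp16.engine", "wb") as f:
        f.write(bytearray(engine_bytes))
```

A second engine built without the DLA settings would execute on the GPU, and launching both with separate execution contexts and CUDA streams approximates the concurrent mixed-accelerator configurations evaluated in this work.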