Deep neural networks (DNNs) are widely used in autonomous driving due to their high accuracy in perception, decision-making, and control. In safety-critical systems such as autonomous driving, executing tasks like sensing and perception in real time is vital to vehicle safety, which requires the application's execution time to be predictable. However, non-negligible time variations are observed in DNN inference. Current DNN inference studies either ignore the time-variation issue or rely on the scheduler to handle it; none of them explains the root causes of DNN inference time variation. Understanding these time variations is therefore a fundamental challenge for real-time scheduling in autonomous driving. In this work, we analyze the time variation of DNN inference at fine granularity from six perspectives: data, I/O, model, runtime, hardware, and the end-to-end perception system. From this analysis, we derive six insights into the time variations of DNN inference.
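To make the notion of inference time variation concrete, the following is a minimal measurement sketch. It uses a stand-in compute function in place of a real DNN forward pass (a hypothetical workload, not the paper's benchmark), times repeated runs after a warm-up phase, and summarizes the spread with mean, standard deviation, and an approximate 99th percentile.

```python
import statistics
import time


def run_inference(batch):
    # Stand-in for a real DNN forward pass (hypothetical workload);
    # a real measurement would invoke the deployed model instead.
    return sum(x * x for x in batch)


def measure_latency(fn, batch, warmup=10, runs=100):
    """Time repeated inferences and summarize the variation."""
    for _ in range(warmup):
        fn(batch)  # discard warm-up runs (cache, allocator, JIT effects)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(batch)
        latencies.append(time.perf_counter() - start)
    ordered = sorted(latencies)
    return {
        "mean": statistics.mean(latencies),
        "stdev": statistics.stdev(latencies),
        "p99": ordered[max(0, int(0.99 * len(ordered)) - 1)],
    }


stats = measure_latency(run_inference, list(range(1000)))
print(stats)
```

Even such a coarse harness typically shows a long latency tail; the paper's fine-grained analysis attributes such tails to specific sources (data, I/O, model, runtime, hardware, and the end-to-end system) rather than treating them as noise.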