Recognizing the surrounding environment at low latency is critical in autonomous driving. In a real-time setting, the surrounding environment keeps changing while a frame is being processed, so the scene has already changed by the time processing finishes. Current detection models cannot account for changes in the environment that occur during processing. Streaming perception has been proposed to jointly assess the latency and accuracy of real-time video perception. However, additional problems arise in real-world applications due to limited hardware resources, high temperatures, and other factors. In this study, we develop a model that reflects processing delays in real time and produces the most reasonable results under those delays. By incorporating the proposed feature queue and feature select module, the system gains the ability to forecast specific time steps without any additional computational cost. Our method is evaluated on the Argoverse-HD dataset. It achieves higher performance than current state-of-the-art methods (2022.12) in various environments when processing is delayed. The code is available at https://github.com/danjos95/DADE
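The sketch below is a minimal illustration, not the authors' implementation, of the idea described above: a feature queue buffers features from recent frames with their timestamps, and a hypothetical selection rule picks the buffered feature whose age best matches the currently observed processing delay, so the output can be aligned with the present moment at negligible extra cost. The class and method names (`FeatureQueue`, `select`) are assumptions for illustration only.

```python
from collections import deque


class FeatureQueue:
    """Buffer of (timestamp, feature) pairs from recent frames (illustrative sketch)."""

    def __init__(self, maxlen=8):
        self.buffer = deque(maxlen=maxlen)

    def push(self, timestamp, feature):
        # Store the feature together with the time its source frame was captured.
        self.buffer.append((timestamp, feature))

    def select(self, current_time, expected_delay):
        """Pick the buffered feature whose age is closest to the expected
        processing delay (hypothetical selection rule, not the paper's exact one)."""
        if not self.buffer:
            return None
        best = min(
            self.buffer,
            key=lambda item: abs((current_time - item[0]) - expected_delay),
        )
        return best[1]


# Usage example with dummy "features" (plain strings stand in for tensors).
queue = FeatureQueue(maxlen=4)
for t in range(4):
    queue.push(timestamp=0.033 * t, feature=f"feat_frame_{t}")

# Suppose processing the current frame took about 66 ms; select the feature
# whose age matches that delay so the prediction better reflects "now".
selected = queue.select(current_time=0.033 * 4, expected_delay=0.066)
print(selected)
```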