Radar sensors are crucial for the environment perception of driver assistance systems as well as autonomous vehicles. With a rising number of radar sensors and the still unregulated automotive radar frequency band, mutual interference is inevitable and must be dealt with. Algorithms and models operating on radar data are required to run the early processing steps on specialized radar sensor hardware. This specialized hardware typically has strict resource constraints, i.e., low memory capacity and low computational power. Convolutional Neural Network (CNN)-based approaches for denoising and interference mitigation yield promising results for radar processing in terms of performance. Regarding resource constraints, however, CNNs typically exceed the hardware's capacities by far. In this paper, we investigate quantization techniques for CNN-based denoising and interference mitigation of radar signals. We analyze the quantization of (i) weights and (ii) activations of different CNN-based model architectures. This quantization reduces the memory requirements for model storage and during inference. We compare models with fixed and learned bit-widths and contrast two different methodologies for training quantized CNNs, i.e., the straight-through gradient estimator and training distributions over discrete weights. We illustrate the importance of structurally small real-valued base models for quantization and show that learned bit-widths yield the smallest models. We achieve a memory reduction of around 80\% compared to the real-valued baseline. For practical reasons, however, we recommend the use of 8 bits for weights and activations, which results in models that require only 0.2 megabytes of memory.
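The straight-through gradient estimator mentioned above can be illustrated with a minimal sketch: the forward pass uses uniformly quantized weights, while the backward pass treats the quantization as an identity and applies the gradient to the real-valued shadow weights. The symmetric uniform quantizer and the toy loss below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def quantize_uniform(w, bits=8):
    # Symmetric uniform quantization to 2^bits levels; a common scheme,
    # assumed here for illustration (not necessarily the paper's quantizer).
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

# Real-valued "shadow" weights kept in full precision during training.
w = np.array([0.31, -0.72, 0.05, 0.99])
w_q = quantize_uniform(w, bits=4)  # forward pass uses quantized weights

# Toy linear loss L = sum(w_q * x), so dL/dw_q = x.
x = np.array([1.0, 2.0, -1.0, 0.5])
grad_wq = x
# Straight-through estimator: pretend d(w_q)/dw = 1, so the gradient
# flows through the non-differentiable rounding unchanged.
grad_w = grad_wq
w = w - 0.1 * grad_w  # update the real-valued shadow weights
```

After training, only the low-bit integer weights and a per-tensor scale need to be stored, which is the source of the memory reduction discussed in the abstract.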