Owing to the imaging characteristics of Synthetic Aperture Radar (SAR), SAR vehicle recognition faces the problem of extracting discriminative and robust target features from a small dataset. Deep learning has shown impressive performance on the MSTAR dataset. However, data bias in a small dataset, such as background correlation, impairs the causality of these methods, i.e., the learned discriminative features capture background differences as well as target differences. Moreover, different SAR operating conditions cause variations in target signatures and background clutter in the imaging results, yet many deep learning-based methods only verify robustness to target or background variations under the current experimental setting. In this paper, we propose a novel domain alignment framework named Hierarchical Disentanglement-Alignment Network (HDANet) to enhance the causality and robustness of the extracted features. Concisely, HDANet consists of three parts: the first part uses data augmentation to generate signature variations for domain alignment; the second part disentangles the target features through a multitask-assisted mask to prevent non-causal clutter from interfering with subsequent alignment and recognition; the third part employs a contrastive loss for domain alignment to extract robust target features and applies the SimSiam structure to mitigate the conflict between the contrastive loss and feature discrimination. The proposed method shows high robustness across MSTAR's multiple target, sensor, and environment variants. Notably, we add a new scene variant to verify robustness to target and background variations. Moreover, saliency maps and Shapley values qualitatively and quantitatively demonstrate the causality of the extracted features. Our code is available at \url{https://github.com/waterdisappear/SAR-ATR-HDANet}.
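To make the alignment idea concrete, the following is a minimal PyTorch-style sketch of the pipeline summarized above: two augmented views of an image are encoded, a learned soft mask suppresses background clutter before pooling, and a SimSiam-style projector/predictor pair is trained with a negative cosine similarity and a stop-gradient. The module names, backbone, and layer sizes here are illustrative assumptions rather than the authors' implementation; the actual code is in the linked repository.

\begin{verbatim}
# Illustrative sketch only: backbone, mask head, and dimensions are assumptions,
# not the authors' HDANet implementation (see the repository for the real code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HDANetSketch(nn.Module):
    """Toy version: mask-based disentanglement + SimSiam-style alignment."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Placeholder backbone and mask head (assumed components).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.projector = nn.Sequential(nn.Linear(32, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, feat_dim))

    def encode(self, x):
        f = self.backbone(x)
        m = self.mask_head(f)                    # soft target mask suppresses clutter
        z = self.projector(self.pool(f * m).flatten(1))
        return z, self.predictor(z)

    @staticmethod
    def align_loss(p, z):
        # Negative cosine similarity with stop-gradient on the target branch.
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    def forward(self, x1, x2):
        # x1, x2: two augmented views simulating signature variations.
        z1, p1 = self.encode(x1)
        z2, p2 = self.encode(x2)
        return 0.5 * (self.align_loss(p1, z2) + self.align_loss(p2, z1))


if __name__ == "__main__":
    model = HDANetSketch()
    view_a, view_b = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
    print(model(view_a, view_b).item())
\end{verbatim}

The stop-gradient on one branch follows the SimSiam structure referenced above, allowing the alignment objective to be optimized alongside a separate classification head without the two objectives directly competing; a full system would add that classification head and the multitask supervision of the mask.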