In real-world scenarios, it may not always be possible to collect hundreds of labeled samples per class for training deep learning-based SAR Automatic Target Recognition (ATR) models. This work specifically tackles the few-shot SAR ATR problem, where only a handful of labeled samples may be available to support the task of interest. Our approach is composed of two stages. In the first, a global representation model is trained via self-supervised learning on a large pool of diverse and unlabeled SAR data. In the second stage, the global model is used as a fixed feature extractor, and a classifier is trained to partition the feature space given the few-shot support samples while simultaneously being calibrated to detect anomalous inputs. Unlike competing approaches, which require a pristine labeled dataset for pretraining via meta-learning, our approach learns highly transferable features from unlabeled data that have little-to-no relation to the downstream task. We evaluate our method in standard and extended MSTAR operating conditions and find it to achieve high accuracy and robust out-of-distribution detection across many different few-shot settings. Our results are particularly significant because they show the merit of a global model approach to SAR ATR, which makes minimal assumptions and provides many axes for extension.
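The second stage can be pictured as follows: embeddings of the few support samples are produced by the frozen global model, a lightweight classifier partitions that embedding space, and a confidence threshold flags anomalous queries. The sketch below is only illustrative of this idea; it assumes a frozen PyTorch `encoder`, a nearest-prototype classifier over cosine similarities, and a single threshold `tau` for out-of-distribution rejection, none of which are claimed to be the paper's exact classifier or calibration procedure.

```python
# Minimal sketch of stage two: frozen feature extractor + few-shot classifier
# with a simple OOD threshold. Hypothetical names: `encoder`, `tau`.
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_prototypes(encoder, support_x, support_y, num_classes):
    """Average the frozen-encoder embeddings of the support samples per class."""
    feats = F.normalize(encoder(support_x), dim=-1)      # (N, D) unit-norm features
    protos = torch.stack(
        [feats[support_y == c].mean(0) for c in range(num_classes)]
    )
    return F.normalize(protos, dim=-1)                   # (C, D) class prototypes

@torch.no_grad()
def classify_with_ood(encoder, query_x, protos, tau=0.7):
    """Assign each query to its nearest prototype; mark it as out-of-distribution
    (label -1) when the best cosine similarity falls below the threshold `tau`."""
    feats = F.normalize(encoder(query_x), dim=-1)        # (M, D)
    sims = feats @ protos.T                              # (M, C) cosine similarities
    conf, preds = sims.max(dim=-1)
    preds[conf < tau] = -1                               # reject anomalous inputs
    return preds, conf
```

In such a setup the threshold `tau` would typically be calibrated on held-out in-distribution data (e.g., chosen for a target false-rejection rate), which mirrors the abstract's point that the classifier is calibrated for anomaly detection rather than relying on the feature extractor alone.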