Various adverse weather conditions pose a significant challenge to semantic understanding (segmentation) of street scenes in autonomous driving (AD). A common strategy is to minimize the disparity between images captured under clear and adverse weather conditions. However, this technique typically relies on a clear-weather image as a reference, which is difficult to obtain in practice. Furthermore, such methods usually target a single adverse condition and thus perform poorly when confronted with a mixture of multiple adverse weather conditions. To address these issues, we introduce a reference-free and Adverse weather-Immune scheme (called AdvImmu) that leverages the invariance of weather conditions over short periods (seconds). Specifically, AdvImmu consists of three components: a Locally Sequential Mechanism (LSM), a Globally Shuffled Mechanism (GSM), and Unfolded Regularizers (URs). LSM leverages temporal correlations between adjacent frames to enhance model performance. GSM shuffles LSM segments to prevent overfitting to temporal patterns. URs are the deep-unfolding implementation of two proposed regularizers that penalize model complexity to improve cross-weather generalization. In addition, to overcome the over-reliance on consecutive frame-wise annotations in training AdvImmu (typically unavailable in AD scenarios), we incorporate the Segment Anything Model (SAM) foundation model to assist in annotating frames, and further propose a clustering algorithm (denoted SBICAC) to overcome SAM's category-agnostic issue and generate pseudo-labels. Extensive experiments demonstrate that the proposed AdvImmu outperforms existing state-of-the-art methods by 88.56% in mean Intersection over Union (mIoU).
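The abstract does not give implementation details for LSM and GSM, so the sketch below is only an illustrative reading of the idea: keep frames temporally ordered inside short local segments, then shuffle the segments globally before training. The function name `build_shuffled_segments`, the `segment_len` parameter, and the frame IDs are all hypothetical, not from the paper.

```python
import random


def build_shuffled_segments(frames, segment_len=4, seed=0):
    """Illustrative sketch (assumed, not the authors' code):
    split a consecutive frame sequence into short, locally ordered
    segments (LSM-style), then shuffle the segment order globally
    (GSM-style) so temporal context is preserved locally without
    overfitting to one fixed long-range ordering."""
    # Keep the temporal order of frames inside each segment.
    segments = [frames[i:i + segment_len]
                for i in range(0, len(frames), segment_len)]
    # Shuffle only the order of the segments across the clip.
    rng = random.Random(seed)
    rng.shuffle(segments)
    return segments


if __name__ == "__main__":
    clip = [f"frame_{t:03d}" for t in range(12)]  # hypothetical frame IDs
    for seg in build_shuffled_segments(clip, segment_len=4):
        print(seg)
```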