Rain-by-snow weather removal is a specialized task in weather-degraded image restoration that aims to eliminate coexisting rain streaks and snow particles. In this paper, we propose RSFormer, an efficient and effective Transformer that addresses this challenge. We first explore the proximity of convolutional networks (ConvNets) and vision Transformers (ViTs) in hierarchical architectures and experimentally find that they perform comparably at intra-stage feature learning. On this basis, we utilize a Transformer-like convolution block (TCB) that replaces computationally expensive self-attention while preserving the attention characteristic of adapting to input content. We further demonstrate that cross-stage progression is critical for performance improvement, and propose a global-local self-attention sampling mechanism (GLASM) that down-/up-samples features while capturing both global and local dependencies. Finally, we synthesize two novel rain-by-snow datasets, RSCityScape and RS100K, to evaluate the proposed RSFormer. Extensive experiments verify that RSFormer achieves the best trade-off between performance and time consumption compared to other restoration methods. For instance, it outperforms Restormer with a 1.53% reduction in the number of parameters and a 15.6% reduction in inference time. Datasets, source code and pre-trained models are available at \url{https://github.com/chdwyb/RSFormer}.