All-in-one image restoration (AIR) aims to address diverse degradations within a unified model by leveraging informative degradation conditions to guide the restoration process. However, existing methods often rely on implicitly learned priors, which may entangle feature representations and hinder performance in complex or unseen scenarios. We observe that the Histogram of Oriented Gradients (HOG), a classical gradient representation, has strong discriminative capability across diverse degradations, making it a powerful and interpretable prior for AIR. Based on this insight, we propose HOGformer, a Transformer-based model that integrates learnable HOG features for degradation-aware restoration. The core of HOGformer is a Dynamic HOG-aware Self-Attention (DHOGSA) mechanism, which adaptively models long-range spatial dependencies conditioned on degradation-specific cues encoded by HOG descriptors. To further accommodate the heterogeneity of degradations in AIR, we propose a Dynamic Interaction Feed-Forward (DIFF) module that facilitates channel-spatial interactions, enabling robust feature transformation under diverse degradations. In addition, we propose a HOG loss that explicitly enhances structural fidelity and edge sharpness. Extensive experiments on a variety of benchmarks, including adverse weather and natural degradations, demonstrate that HOGformer achieves state-of-the-art performance and generalizes well to complex real-world scenarios. Code is available at https://github.com/Fire-friend/HOGformer.
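To make the HOG prior and the HOG loss mentioned above concrete, the following is a minimal, illustrative sketch of a differentiable HOG-style loss in PyTorch. It assumes single-channel inputs in [0, 1], Sobel gradients, soft orientation binning, and a fixed cell size; these choices (and the function names `hog_features` / `hog_loss`) are our own illustration and are not taken from the HOGformer implementation.

```python
# Minimal differentiable HOG-style loss sketch (not the authors' code).
import math
import torch
import torch.nn.functional as F

def hog_features(x, num_bins=9, cell_size=8):
    """Soft-binned orientation histograms over non-overlapping cells.

    x: (B, 1, H, W) grayscale tensor. Returns (B, num_bins, H/cell, W/cell).
    """
    # Sobel filters for horizontal and vertical image gradients.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    ang = torch.atan2(gy, gx) % math.pi  # unsigned orientation in [0, pi)

    # Soft (triangular) assignment of each pixel's magnitude to orientation bins.
    bin_centers = torch.arange(num_bins, device=x.device) * math.pi / num_bins
    diff = ang - bin_centers.view(1, num_bins, 1, 1)            # (B, bins, H, W)
    weights = torch.clamp(1.0 - torch.abs(diff) / (math.pi / num_bins), min=0.0)
    votes = weights * mag                                        # magnitude-weighted votes

    # Aggregate votes over cells, then L2-normalize each cell histogram.
    hist = F.avg_pool2d(votes, cell_size)                        # (B, bins, H/c, W/c)
    return hist / (hist.norm(dim=1, keepdim=True) + 1e-6)

def hog_loss(restored, target):
    """L1 distance between HOG-style descriptors of restored and target images."""
    return F.l1_loss(hog_features(restored), hog_features(target))
```

In this simplified form the orientation bins do not wrap around at pi and no block normalization is applied; a full HOG descriptor (and the learnable variant described in the paper) would refine both.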