Recently, the Vision Transformer (ViT) has shown impressive performance on high-level and low-level vision tasks. In this paper, we propose a new ViT architecture, named Hybrid Local-Global Vision Transformer (HyLoG-ViT), for single image dehazing. The HyLoG-ViT block consists of two paths, a local ViT path and a global ViT path, which capture local and global dependencies, respectively. The hybrid features are fused via convolution layers. As a result, the HyLoG-ViT reduces computational complexity and introduces locality into the network. The HyLoG-ViT blocks are then incorporated into our dehazing network, which jointly learns intrinsic image decomposition and image dehazing. Specifically, the network consists of one shared encoder and three decoders for reflectance prediction, shading prediction, and haze-free image generation. The reflectance and shading prediction tasks produce meaningful intermediate features that serve as complementary features for haze-free image generation. To aggregate these complementary features effectively, we propose a complementary features selection module (CFSM) that selects the ones useful for image dehazing. Extensive experiments on homogeneous, non-homogeneous, and nighttime dehazing tasks show that our Transformer-based dehazing network achieves comparable or even better performance than CNN-based dehazing models.
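The abstract does not specify the internals of the HyLoG-ViT block, so the following is only a minimal sketch of the idea under stated assumptions: the local path is approximated by self-attention within non-overlapping windows, the global path by self-attention over a pooled (downsampled) token grid, and the two outputs are fused by a convolution. The class name `HyLoGBlockSketch` and all hyper-parameters (window size, pooling ratio, head count) are illustrative placeholders, not the paper's actual design.

```python
import torch
import torch.nn as nn

class HyLoGBlockSketch(nn.Module):
    """Illustrative hybrid local-global Transformer block (not the paper's exact module).

    Local path: self-attention inside non-overlapping windows (captures local dependencies).
    Global path: self-attention over an average-pooled coarse grid (captures global dependencies).
    The two paths are fused with a 3x3 convolution and added back as a residual.
    """

    def __init__(self, dim=64, window=8, reduce=4, heads=4):
        super().__init__()
        # dim must be divisible by heads; H and W must be divisible by window and reduce.
        self.window = window
        self.reduce = reduce
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=3, padding=1)

    def forward(self, x):                                  # x: (B, C, H, W)
        B, C, H, W = x.shape

        # Local path: partition into w x w windows and attend within each window.
        w = self.window
        xl = x.view(B, C, H // w, w, W // w, w)
        xl = xl.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        xl, _ = self.local_attn(xl, xl, xl)
        xl = xl.reshape(B, H // w, W // w, w, w, C)
        xl = xl.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

        # Global path: attend over a pooled token grid, then upsample back.
        xg = nn.functional.avg_pool2d(x, self.reduce)
        h, wd = xg.shape[2], xg.shape[3]
        xg = xg.flatten(2).transpose(1, 2)                 # (B, h*wd, C)
        xg, _ = self.global_attn(xg, xg, xg)
        xg = xg.transpose(1, 2).reshape(B, C, h, wd)
        xg = nn.functional.interpolate(xg, size=(H, W), mode="bilinear",
                                       align_corners=False)

        # Convolutional fusion of the hybrid features plus a residual connection.
        return x + self.fuse(torch.cat([xl, xg], dim=1))


if __name__ == "__main__":
    # Quick shape check on a dummy feature map.
    feats = torch.randn(2, 64, 64, 64)
    print(HyLoGBlockSketch(dim=64)(feats).shape)           # torch.Size([2, 64, 64, 64])
```

Windowed attention keeps the quadratic cost local while the pooled global path keeps a coarse view of the whole image, which is one plausible way the hybrid design could reduce complexity yet retain global context.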