Multispectral pedestrian detection achieves better visibility in challenging conditions and is thus essential to autonomous driving, where both accuracy and computational cost are of paramount importance. Most existing approaches treat the RGB and infrared modalities equally, typically adopting two symmetrical backbones for multimodal feature extraction. This ignores the substantial differences between the modalities and makes it difficult both to reduce computational cost and to fuse crossmodal features effectively. In this work, we propose a novel and efficient framework named Wavelet-context Cooperative Network (WCCNet), which differentially extracts complementary features of the two spectra with lower computational complexity and then fuses these diverse features based on their spatially relevant crossmodal semantics. In particular, WCCNet simultaneously explores wavelet context and RGB textures within a cooperative dual-stream backbone composed of adaptive discrete wavelet transform (ADWT) layers and heavyweight neural layers. The ADWT layers extract frequency components from the infrared modality, while the neural layers handle RGB features. Since the ADWT layers are lightweight and extract complementary features, this cooperative structure not only significantly reduces computational complexity but also facilitates the subsequent crossmodal fusion. To fuse infrared and RGB features despite their significant semantic differences, we design a crossmodal rearranging fusion module (CMRF) that mitigates spatial misalignment and merges semantically complementary features within spatially related local regions, amplifying the crossmodal reciprocal information. Experimental results on the KAIST and FLIR benchmarks show that WCCNet outperforms state-of-the-art methods with considerable efficiency and competitive accuracy.
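To make the dual-stream idea concrete, the following is a minimal sketch (not the authors' code) of an asymmetric backbone in PyTorch: the infrared branch uses a fixed Haar DWT as a lightweight stand-in for the paper's adaptive DWT (ADWT) layers, the RGB branch uses ordinary convolutional blocks, and fusion is plain concatenation in place of the CMRF module. All module names, channel sizes, and input resolutions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HaarDWT(nn.Module):
    """One-level 2-D Haar DWT per input channel (LL, LH, HL, HH subbands)."""

    def __init__(self, in_channels: int):
        super().__init__()
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
        kernels = torch.stack([ll, lh, hl, hh]).unsqueeze(1)  # (4, 1, 2, 2)
        # Repeat the 4 filters for every input channel (depthwise application).
        self.register_buffer("weight", kernels.repeat(in_channels, 1, 1, 1))
        self.in_channels = in_channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output: (B, 4 * C_in, H/2, W/2) frequency components.
        return F.conv2d(x, self.weight, stride=2, groups=self.in_channels)


def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )


class DualStreamBackbone(nn.Module):
    """RGB stream: heavyweight conv block. IR stream: cheap DWT + 1x1 projection."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.rgb_stream = conv_block(3, out_channels)            # RGB texture features
        self.ir_dwt = HaarDWT(in_channels=1)                     # infrared frequency components
        self.ir_proj = nn.Conv2d(4, out_channels, kernel_size=1)
        # Placeholder fusion: concatenation + 1x1 conv; the paper's CMRF instead
        # rearranges and merges spatially related crossmodal regions.
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_stream(rgb)            # (B, C, H/2, W/2)
        f_ir = self.ir_proj(self.ir_dwt(ir))    # (B, C, H/2, W/2)
        return self.fuse(torch.cat([f_rgb, f_ir], dim=1))


if __name__ == "__main__":
    model = DualStreamBackbone()
    rgb = torch.randn(2, 3, 256, 320)   # RGB frames
    ir = torch.randn(2, 1, 256, 320)    # aligned infrared frames
    print(model(rgb, ir).shape)         # torch.Size([2, 64, 128, 160])
```

The asymmetry is the point of the sketch: the infrared branch costs only a fixed, parameter-free filter bank plus a 1x1 projection, whereas the RGB branch carries the learnable convolutional capacity, mirroring the paper's claim that differentiated per-modality processing reduces overall complexity.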