Humans learn in two complementary ways: a slow, cumulative process that builds broad, general knowledge, and a fast, on-the-fly process that captures specific experiences. Existing deep-unfolding methods for spectral compressive imaging (SCI) mirror only the slow component, relying on heavy pre-training with many unfolding stages, yet they lack the rapid adaptation needed to handle new optical configurations. As a result, they falter on out-of-distribution cameras, especially in bespoke spectral setups unseen during training; their depth also incurs heavy computation and slow inference. To bridge this gap, we introduce SlowFast-SCI, a dual-speed framework that integrates seamlessly into any deep unfolding network, within SCI systems and beyond. In the slow-learning stage, we pre-train or reuse a priors-based backbone and distill it, via imaging guidance, into a compact fast-unfolding model. In the fast-learning stage, lightweight adaptation modules embedded in each block are trained in a self-supervised manner at test time with a dual-domain loss, without retraining the backbone. To the best of our knowledge, SlowFast-SCI is the first test-time-adaptation-driven deep unfolding framework for efficient, self-adaptive spectral reconstruction. Its dual-stage design unites offline robustness with on-the-fly per-sample calibration, yielding over a 70% reduction in parameters and FLOPs, up to 5.79 dB PSNR improvement on out-of-distribution data, preserved cross-domain adaptability, and 4x faster adaptation. Moreover, its modular design integrates with any deep-unfolding network, paving the way for self-adaptive, field-deployable imaging and expanded computational imaging modalities. The models, datasets, and code are available at https://github.com/XuanLu11/SlowFast-SCI.
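The fast-learning stage described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the module names (`UnfoldBlock`, `Adapter`), the zero-initialized 1x1 adapter, and the single measurement-consistency loss (the paper uses a dual-domain loss) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight per-block module: the only part trained at test time."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel_size=1)
        nn.init.zeros_(self.conv.weight)  # start as an identity residual
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return x + self.conv(x)

class UnfoldBlock(nn.Module):
    """One unfolding stage: frozen prior network plus a trainable adapter."""
    def __init__(self, ch):
        super().__init__()
        self.prior = nn.Conv2d(ch, ch, kernel_size=3, padding=1)  # stand-in for the distilled denoiser
        self.adapter = Adapter(ch)

    def forward(self, x):
        return self.adapter(self.prior(x))

def adapt_on_sample(net, y, steps=5, lr=1e-3):
    """Self-supervised test-time adaptation on a single measurement y.

    Freezes everything except the adapters, then minimizes a simple
    consistency loss. A real forward model A(x) and the second loss
    domain from the paper are omitted for brevity.
    """
    for p in net.parameters():
        p.requires_grad = False
    adapter_params = []
    for m in net.modules():
        if isinstance(m, Adapter):
            for p in m.parameters():
                p.requires_grad = True
                adapter_params.append(p)
    opt = torch.optim.Adam(adapter_params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = net(y)
        loss = nn.functional.mse_loss(x_hat, y)  # placeholder data-consistency term
        loss.backward()
        opt.step()
    return net(y).detach()

# Usage: a 3-stage toy network adapting to one out-of-distribution sample.
net = nn.Sequential(*[UnfoldBlock(4) for _ in range(3)])
y = torch.randn(1, 4, 8, 8)
out = adapt_on_sample(net, y)
```

Because only the adapter parameters receive gradients, per-sample calibration touches a small fraction of the weights, which is what makes the adaptation lightweight relative to retraining the backbone.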