Infrared imagery enables temperature-based scene understanding using passive sensors, particularly under conditions of low visibility where traditional RGB imaging fails. Yet, developing downstream vision models for infrared applications is hindered by the scarcity of high-quality annotated data, due to the specialized expertise required for infrared annotation. While synthetic infrared image generation has the potential to accelerate model development by providing large-scale, diverse training data, training foundation-level generative diffusion models in the infrared domain has remained elusive due to limited datasets. In light of such data constraints, we explore an inference-time scaling approach using a domain-adapted CLIP-based verifier for enhanced infrared image generation quality. We adapt FLUX.1-dev, a state-of-the-art text-to-image diffusion model, to the infrared domain by finetuning it on a small sample of infrared images using parameter-efficient techniques. The trained verifier is then employed during inference to guide the diffusion sampling process toward higher quality infrared generations that better align with input text prompts. Empirically, we find that our approach leads to consistent improvements in generation quality, reducing FID scores on the KAIST Multispectral Pedestrian Detection Benchmark dataset by 10% compared to unguided baseline samples. Our results suggest that inference-time guidance offers a promising direction for bridging the domain gap in low-data infrared settings.
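The verifier-guided sampling described above can be illustrated with a minimal best-of-N sketch. This is a hypothetical toy, not the paper's implementation: the stubs `embed_text`, `sample_diffusion`, and `verifier_score` stand in for the domain-adapted CLIP text/image encoders and the finetuned FLUX.1-dev pipeline, and embeddings are modeled as small unit vectors so the script is self-contained.

```python
import numpy as np

# Hypothetical stand-ins. In the actual setting, embed_text would come from the
# domain-adapted CLIP text encoder, and sample_diffusion would run the
# parameter-efficiently finetuned FLUX.1-dev pipeline with a given seed.

def embed_text(prompt: str) -> np.ndarray:
    """Stub: deterministic pseudo-embedding of a prompt (unit vector)."""
    vec = np.frombuffer(prompt.encode(), dtype=np.uint8).astype(float)
    vec = np.resize(vec, 8)
    return vec / np.linalg.norm(vec)

def sample_diffusion(prompt: str, seed: int) -> np.ndarray:
    """Stub: one diffusion sample, represented directly by an image embedding."""
    rng = np.random.default_rng(seed)
    vec = embed_text(prompt) + 0.5 * rng.normal(size=8)
    return vec / np.linalg.norm(vec)

def verifier_score(img_emb: np.ndarray, txt_emb: np.ndarray) -> float:
    """CLIP-style score: cosine similarity of unit image/text embeddings."""
    return float(img_emb @ txt_emb)

def best_of_n(prompt: str, n: int = 8):
    """Draw n candidates with different seeds; keep the verifier's favorite."""
    txt = embed_text(prompt)
    candidates = [sample_diffusion(prompt, seed) for seed in range(n)]
    scores = [verifier_score(c, txt) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

img, score = best_of_n("an infrared image of a pedestrian at night", n=8)
```

By construction, the selected sample's verifier score can only match or exceed that of any single unguided draw; spending more inference-time compute (larger `n`) trades sampling cost for text-image alignment, which is the scaling axis the abstract refers to.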