Arbitrary-scale super-resolution (ASSR) overcomes the limitation of traditional super-resolution (SR) methods that operate only at fixed scales (e.g., 4×), enabling a single model to handle arbitrary magnification factors. Most existing ASSR approaches rely on implicit neural representations (INRs), whose regression-driven feature extraction and aggregation intrinsically limit their ability to synthesize fine details, leading to limited realism. Recent diffusion-based realistic image super-resolution (Real-ISR) models leverage powerful pre-trained diffusion priors and show impressive results at the 4× setting. We observe that they can also perform ASSR, because the diffusion prior implicitly adapts to scale by encouraging high-realism generation. However, without explicit scale control, the diffusion process cannot be properly adjusted for different magnification levels, resulting in excessive hallucination or blurry outputs, especially at ultra-high scales. To address these issues, we propose OmniScaleSR, a diffusion-based realistic arbitrary-scale SR framework designed to achieve both high fidelity and high realism. We introduce explicit, diffusion-native scale control mechanisms that work synergistically with implicit scale adaptation, enabling scale- and content-aware modulation of the diffusion process. In addition, we incorporate multi-domain fidelity enhancement designs to further improve reconstruction accuracy. Extensive experiments on bicubic-degradation benchmarks and real-world datasets show that OmniScaleSR surpasses state-of-the-art methods in both fidelity and perceptual realism, with particularly strong performance at large magnification factors. Code will be released at https://github.com/chaixinning/OmniScaleSR.
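The abstract refers to scale-aware, content-aware modulation of the diffusion process without specifying the mechanism. As one possible illustration, a FiLM-style conditioning block could inject the magnification factor into intermediate diffusion features; the sketch below is minimal and hypothetical (the module name ScaleModulation, the log-scale embedding, and all hyperparameters are assumptions, not the authors' design):

```python
import torch
import torch.nn as nn

class ScaleModulation(nn.Module):
    """Hypothetical FiLM-style block: maps a magnification factor s
    to per-channel scale/shift parameters applied to diffusion features.
    This is an illustrative sketch, not the OmniScaleSR implementation."""
    def __init__(self, channels: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, 2 * channels),  # predicts gamma and beta
        )

    def forward(self, feat: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) diffusion features; s: (B, 1) magnification factor
        # Log-embed the scale so that, e.g., 2x -> 4x and 8x -> 16x are equidistant
        gamma, beta = self.mlp(torch.log(s)).chunk(2, dim=-1)
        gamma = gamma[..., None, None]  # (B, C, 1, 1) for broadcasting
        beta = beta[..., None, None]
        return feat * (1 + gamma) + beta

# Usage: the same block handles any scale, including non-integer ones
block = ScaleModulation(channels=64)
feat = torch.randn(2, 64, 32, 32)
s = torch.tensor([[4.0], [11.3]])
out = block(feat, s)  # (2, 64, 32, 32)
```

In such a design, explicit scale conditioning would let the network damp hallucination at moderate scales and allow stronger generation at ultra-high scales, complementing the implicit adaptation the abstract attributes to the diffusion prior.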