As generative models become increasingly capable of producing high-fidelity visual content, the demand for efficient, interpretable, and editable image representations has grown substantially. Recent advances in 2D Gaussian Splatting (2DGS) offer a promising solution, providing explicit control, high interpretability, and real-time rendering (>1000 FPS). However, high-quality 2DGS typically requires per-image post-optimization. Existing methods rely on random initialization or hand-crafted heuristics (e.g., gradient maps), which are often insensitive to image complexity and lead to slow convergence (>10 s per image). More recent approaches introduce learnable networks to predict initial Gaussian configurations, but at the cost of increased computational and architectural complexity. To bridge this gap, we present Fast-2DGS, a lightweight framework for efficient Gaussian image representation. Specifically, we introduce a Deep Gaussian Prior, implemented as a conditional network that captures the spatial distribution of Gaussian primitives across images of varying complexity. In addition, we propose an attribute regression network that predicts dense Gaussian properties. Experiments demonstrate that this disentangled architecture achieves high-quality reconstruction in a single forward pass followed by minimal fine-tuning. More importantly, our approach significantly reduces computational cost without compromising visual quality, bringing 2DGS closer to industry-ready deployment.
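To make the disentangled two-network design concrete, the sketch below wires a conditional position prior to a per-primitive attribute regressor in PyTorch. Everything here is an assumption for illustration: the module names (DeepGaussianPrior, AttributeRegressor), layer sizes, Gaussian count, and the 7-value attribute layout are hypothetical, and the 2DGS rasterizer and brief fine-tuning loop are omitted; this is not the authors' implementation.

```python
# Minimal sketch of the disentangled design described in the abstract.
# All names, layer sizes, and the attribute layout are illustrative assumptions.
import torch
import torch.nn as nn

class DeepGaussianPrior(nn.Module):
    """Conditional network: predicts normalized (x, y) centers for N Gaussian
    primitives from the image, so placement can adapt to image complexity."""
    def __init__(self, n_gaussians: int = 1024):
        super().__init__()
        self.n = n_gaussians
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pos_head = nn.Linear(64, self.n * 2)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        z = self.encoder(img)                                  # (B, 64)
        return torch.sigmoid(self.pos_head(z)).view(-1, self.n, 2)

class AttributeRegressor(nn.Module):
    """Regresses dense per-Gaussian attributes (here assumed to be 2 scales,
    1 rotation, 3 color channels, 1 opacity = 7 values) from a global image
    feature concatenated with each predicted position."""
    def __init__(self, feat_dim: int = 64, attr_dim: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 128), nn.ReLU(),
            nn.Linear(128, attr_dim),
        )

    def forward(self, feat: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        feat = feat.unsqueeze(1).expand(-1, pos.shape[1], -1)  # broadcast to N
        return self.mlp(torch.cat([pos, feat], dim=-1))        # (B, N, attr_dim)

# Single forward pass: positions from the prior, attributes from the regressor.
# The resulting primitives would then be rendered by a 2DGS rasterizer and
# refined with a few fine-tuning steps (both outside this sketch).
prior, regressor = DeepGaussianPrior(), AttributeRegressor()
img = torch.rand(1, 3, 256, 256)
pos = prior(img)
attrs = regressor(prior.encoder(img), pos)
print(pos.shape, attrs.shape)  # torch.Size([1, 1024, 2]) torch.Size([1, 1024, 7])
```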