Recent research has shown that combining neural radiance fields (NeRFs) with pre-trained diffusion models holds great potential for text-to-3D generation. However, these methods often suffer from guidance collapse when rendering multi-object scenes described by relatively long sentences. Specifically, text-to-image diffusion models are inherently unconstrained, making it difficult for them to accurately associate object semantics with specific 3D structures. To address this, we propose a novel framework, dubbed CompoNeRF, that explicitly incorporates an editable 3D scene layout to provide effective guidance at the object (i.e., local) and scene (i.e., global) levels. First, we interpret the multi-object text as an editable 3D scene layout containing multiple local NeRFs, each associated with an object-specific 3D box and text prompt. Then, we introduce a composition module to calibrate the latent features from the local NeRFs, which surprisingly improves view consistency across different local NeRFs. Lastly, we apply text guidance at both the global and local levels through their corresponding views to avoid guidance ambiguity. Additionally, local NeRFs can be decomposed and cached for composing other scenes with fine-tuning. In this way, CompoNeRF allows flexible scene editing and re-composition of trained local NeRFs into new scenes by manipulating the 3D layout or text prompts. Leveraging the open-source Stable Diffusion model, CompoNeRF generates faithful and editable text-to-3D results while opening a potential direction for text-guided multi-object composition via the editable 3D scene layout. Notably, CompoNeRF achieves up to a 54% performance gain on the CLIP score metric. Code is available at https://.
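To make the layout idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of an editable 3D scene layout: each entry pairs an object-level text prompt with a 3D box and a local NeRF, and a global query point is routed to the local NeRFs whose boxes contain it. The merge rule shown (summed densities, density-weighted colors) is a common compositional-NeRF convention and stands in for the paper's learned composition module; all names and signatures here are illustrative assumptions.

```python
# Hypothetical sketch of an editable 3D scene layout with per-object local NeRFs.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np

# A local NeRF is abstracted as a function mapping world-space points (N, 3)
# to (density (N,), rgb (N, 3)); in practice it would be a trained MLP.
LocalNeRF = Callable[[np.ndarray], Tuple[np.ndarray, np.ndarray]]

@dataclass
class LayoutEntry:
    prompt: str          # object-level text prompt, e.g. "an apple"
    box_min: np.ndarray  # (3,) lower corner of the object's 3D box
    box_max: np.ndarray  # (3,) upper corner of the object's 3D box
    nerf: LocalNeRF      # per-object radiance field

def compose(points: np.ndarray, layout: List[LayoutEntry]):
    """Merge local NeRFs into one global field: sum densities and
    density-weight colors for points inside each object's box."""
    density = np.zeros(len(points))
    rgb = np.zeros((len(points), 3))
    for entry in layout:
        inside = np.all((points >= entry.box_min) & (points <= entry.box_max), axis=1)
        if not inside.any():
            continue
        sigma_i, rgb_i = entry.nerf(points[inside])
        density[inside] += sigma_i
        rgb[inside] += sigma_i[:, None] * rgb_i  # density-weighted color
    nonzero = density > 0
    rgb[nonzero] /= density[nonzero, None]       # normalize merged colors
    return density, rgb

# Editing the scene amounts to moving boxes or swapping prompts and cached
# local NeRFs, after which the composed scene can be fine-tuned.
```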