Existing methods for 3D-aware image synthesis largely depend on the 3D pose distribution pre-estimated on the training set. An inaccurate estimation may mislead the model into learning faulty geometry. This work proposes PoF3D, which frees generative radiance fields from the requirement of 3D pose priors. We first equip the generator with an efficient pose learner, which infers a pose from a latent code, to automatically approximate the underlying true pose distribution. We then task the discriminator with learning the pose distribution under the supervision of the generator, and with differentiating real from synthesized images conditioned on the predicted pose. The pose-free generator and the pose-aware discriminator are jointly trained in an adversarial manner. Extensive results on multiple datasets confirm that our approach is on par with the state of the art in both image quality and geometry quality. To the best of our knowledge, PoF3D is the first to demonstrate the feasibility of learning high-quality 3D-aware image synthesis without 3D pose priors.
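The interaction described above can be sketched as a single forward pass: the generator's pose learner infers a pose from the latent code, the image is rendered under that pose, and the discriminator both scores the image and predicts a pose that is supervised by the generator's pose. The following is a minimal, dependency-free sketch; all module internals (`pose_learner`, `generator`, `discriminator`) are hypothetical stand-ins for the actual networks, not the PoF3D implementation.

```python
import math
import random

random.seed(0)

def pose_learner(z):
    # Hypothetical: map a latent code to a camera pose (yaw, pitch).
    # In PoF3D this is a small network inside the generator.
    yaw = math.tanh(sum(z) / len(z)) * math.pi   # yaw in (-pi, pi)
    pitch = math.tanh(z[0]) * math.pi / 6        # small pitch range
    return (yaw, pitch)

def generator(z):
    # Hypothetical stand-in for the radiance-field renderer: the
    # rendered image depends on both the latent code and the pose
    # inferred from it, so no external pose prior is needed.
    pose = pose_learner(z)
    image = [math.sin(zi + pose[0]) for zi in z]
    return image, pose

def discriminator(image):
    # Hypothetical pose-aware discriminator: predicts a pose from the
    # image and scores realness conditioned on that prediction.
    pred_yaw = math.atan2(sum(image), len(image))
    pred_pose = (pred_yaw, 0.0)
    score = sum(image) / len(image)
    return score, pred_pose

# One forward pass of the adversarial setup.
z = [random.gauss(0, 1) for _ in range(8)]
fake_image, gen_pose = generator(z)
score, pred_pose = discriminator(fake_image)

# The discriminator's pose branch is supervised by the generator's
# inferred pose (one term of the joint adversarial objective).
pose_loss = (pred_pose[0] - gen_pose[0]) ** 2
print(pose_loss)
```

In actual training, this pose-supervision term would be combined with the usual adversarial real/fake losses and optimized for both networks jointly; the sketch only illustrates where the pose signal flows.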