Existing 3D-from-2D generators are typically designed for well-curated single-category datasets, where all the objects have (approximately) the same scale, 3D location, and orientation, and the camera always points to the center of the scene. This makes them inapplicable to diverse, in-the-wild datasets of non-alignable scenes rendered from arbitrary camera poses. In this work, we develop a 3D generator with Generic Priors (3DGP): a 3D synthesis framework with more general assumptions about the training data, and show that it scales to very challenging datasets, like ImageNet. Our model is based on three new ideas. First, we incorporate an inaccurate off-the-shelf depth estimator into 3D GAN training via a special depth adaptation module to handle the imprecision. Then, we create a flexible camera model and a regularization strategy for it to learn its distribution parameters during training. Finally, we extend the recent ideas of transferring knowledge from pre-trained classifiers into GANs for patch-wise trained models by employing a simple distillation-based technique on top of the discriminator. It achieves more stable training than the existing methods and speeds up the convergence by at least 40%. We explore our model on four datasets: SDIP Dogs 256x256, SDIP Elephants 256x256, LSUN Horses 256x256, and ImageNet 256x256, and demonstrate that 3DGP outperforms the recent state-of-the-art in terms of both texture and geometry quality. Code and visualizations: https://snap-research.github.io/3dgp.
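The distillation idea mentioned above — transferring knowledge from a pre-trained classifier into the discriminator — can be illustrated with a minimal sketch. Here the discriminator carries an extra head that predicts the feature vector a frozen classifier produces for the same real image, and a simple L2 loss pulls the two together. All names (`distillation_loss`, the toy feature shapes) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distillation_loss(disc_feats: np.ndarray, teacher_feats: np.ndarray) -> float:
    """Mean squared L2 distance between the discriminator's predicted features
    and the frozen pre-trained classifier's features for the same images.
    Minimizing this term distills the classifier's knowledge into the
    discriminator (toy version; the actual loss in the paper may differ)."""
    diff = disc_feats - teacher_feats
    return float(np.mean(np.sum(diff * diff, axis=1)))

# Toy example: a batch of 4 images with 8-dimensional features.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))                   # frozen classifier features
student = teacher + 0.1 * rng.normal(size=(4, 8))   # discriminator's prediction
loss = distillation_loss(student, teacher)          # small, non-negative
```

In practice such a term is added to the usual adversarial loss, so the discriminator is trained on both objectives at once.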