We address the task of converting a floorplan and a set of associated photos of a residence into a textured 3D mesh model, a task which we call Plan2Scene. Our system 1) lifts a floorplan image to a 3D mesh model; 2) synthesizes surface textures based on the input photos; and 3) infers textures for unobserved surfaces using a graph neural network architecture. To train and evaluate our system we create indoor surface texture datasets, and augment a dataset of floorplans and photos from prior work with rectified surface crops and additional annotations. Our approach handles the challenge of producing tileable textures for dominant surfaces such as floors, walls, and ceilings from a sparse set of unaligned photos that only partially cover the residence. Qualitative and quantitative evaluations show that our system produces realistic 3D interior models, outperforming baseline approaches on a suite of texture quality metrics and as measured by a holistic user study.
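To make the three-stage pipeline described above concrete, the following is a minimal, purely illustrative Python sketch of the data flow. Every name in it (`lift_floorplan`, `synthesize_observed_textures`, `propagate_unobserved_textures`, `Surface`, `SceneMesh`, `plan2scene`) is a hypothetical placeholder, not the authors' actual API, and the bodies are toy stand-ins for the learned components.

```python
from dataclasses import dataclass, field
from typing import Optional

# All names below are hypothetical placeholders sketching the three-stage
# pipeline from the abstract; they are not the authors' actual code.

@dataclass
class Surface:
    kind: str                       # "floor", "wall", or "ceiling"
    room_id: int
    texture: Optional[str] = None   # stand-in for a tileable texture

@dataclass
class SceneMesh:
    surfaces: list = field(default_factory=list)

def lift_floorplan(floorplan_image) -> SceneMesh:
    """Stage 1 (toy stand-in): produce an untextured 3D mesh with one floor,
    wall, and ceiling surface per room parsed from the floorplan image."""
    num_rooms = 2  # pretend the floorplan parser found two rooms
    surfaces = [Surface(kind, r) for r in range(num_rooms)
                for kind in ("floor", "wall", "ceiling")]
    return SceneMesh(surfaces)

def synthesize_observed_textures(mesh: SceneMesh, photos: dict) -> SceneMesh:
    """Stage 2 (toy stand-in): attach a synthesized tileable texture to every
    surface that appears in at least one rectified photo crop."""
    for s in mesh.surfaces:
        if (s.room_id, s.kind) in photos:
            s.texture = f"texture_from({photos[(s.room_id, s.kind)]})"
    return mesh

def propagate_unobserved_textures(mesh: SceneMesh) -> SceneMesh:
    """Stage 3 (toy stand-in): the paper uses a graph neural network over the
    room/surface graph; here we merely copy a texture from an observed surface
    of the same kind to illustrate the direction of information flow."""
    by_kind = {s.kind: s.texture for s in mesh.surfaces if s.texture}
    for s in mesh.surfaces:
        if s.texture is None:
            s.texture = by_kind.get(s.kind)
    return mesh

def plan2scene(floorplan_image, photos: dict) -> SceneMesh:
    mesh = lift_floorplan(floorplan_image)
    mesh = synthesize_observed_textures(mesh, photos)
    return propagate_unobserved_textures(mesh)

if __name__ == "__main__":
    # Photos cover only the floor and wall of room 0; room 1 is unobserved.
    photos = {(0, "floor"): "photo_01.jpg", (0, "wall"): "photo_02.jpg"}
    for s in plan2scene("floorplan.png", photos).surfaces:
        print(s.room_id, s.kind, s.texture)
```

In this sketch the unobserved surfaces of room 1 end up with textures copied from room 0, which loosely mirrors how the paper's graph neural network propagates appearance information from observed to unobserved surfaces.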