We address the problem of creating consistent mesh texture maps from scene captures that lack color calibration. We find that the method used to aggregate the multiple views is crucial for producing spatially consistent meshes without explicitly optimizing for spatial consistency. We compute a color prior from the cross-correlation of the faces observable in each view with the per-view face colors, identifying an optimal color for each face. We then use this color in a re-weighting ratio applied to the best-view texture, selected as in prior mesh-texturing work, to create a spatially consistent texture map. Although our method does not explicitly enforce spatial consistency, our results are qualitatively more consistent than those of other state-of-the-art techniques while being computationally more efficient. We evaluate on prior datasets and additionally on Matterport3D, showing qualitative improvements.
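The abstract does not give the exact formulation of the per-face color prior or the re-weighting ratio, so the following is only a minimal sketch of the general idea, assuming each observing view contributes a mean RGB color per face and that re-weighting rescales the best-view texels toward the aggregated prior. The function names `face_color_prior` and `reweight_best_view` are hypothetical, not from the paper.

```python
import numpy as np

def face_color_prior(observed_colors, weights=None):
    """Aggregate per-view RGB observations of one face into a prior color.

    observed_colors: (V, 3) array, one mean RGB per observing view.
    weights: optional (V,) per-view weights (e.g. viewing-angle cosine);
             uniform if omitted.
    """
    observed_colors = np.asarray(observed_colors, dtype=float)
    if weights is None:
        weights = np.ones(len(observed_colors))
    weights = np.asarray(weights, dtype=float)
    # Weighted mean over all views that see this face.
    return (weights[:, None] * observed_colors).sum(axis=0) / weights.sum()

def reweight_best_view(best_view_texels, best_view_mean, prior, eps=1e-6):
    """Rescale best-view texels so their mean color matches the prior.

    best_view_texels: (N, 3) texel colors sampled from the best view.
    best_view_mean:   (3,) mean color of those texels.
    prior:            (3,) per-face color prior from face_color_prior.
    """
    # Per-channel ratio of the prior to the best-view mean color.
    ratio = prior / np.maximum(best_view_mean, eps)
    return np.clip(best_view_texels * ratio, 0.0, 1.0)
```

Under this sketch, the prior acts as a photometric anchor shared by neighboring faces, so rescaling each face's best-view texels toward it smooths exposure differences between views without any explicit spatial-consistency optimization.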