Structure-from-Motion (SfM) recovers scene structure from a collection of images and is a fundamental problem in computer vision. For unordered Internet images, SfM is very slow due to the lack of prior knowledge about image overlap. For sequential images, the large overlap between adjacent frames enables a variety of acceleration strategies, but these strategies apply only to sequential data. To further improve reconstruction efficiency and bridge the gap between the strategies for these two kinds of data, this paper presents an efficient covisibility-based incremental SfM. Unlike previous methods, we exploit covisibility and registration dependency to describe image connections in a way that is suitable for any kind of data. Based on this general image connection, we propose a unified framework to efficiently reconstruct sequential images, unordered images, and mixtures of the two. Experiments on unordered images and mixed data verify the effectiveness of the proposed method, which is three times faster than the state of the art in feature matching and an order of magnitude faster in reconstruction, without sacrificing accuracy. The source code is publicly available at https://github.com/openxrlab/xrsfm.