We introduce Amazon-Berkeley Objects (ABO), a new large-scale dataset of product images and 3D models corresponding to real household objects. We use this realistic, object-centric 3D dataset to measure the domain gap for single-view 3D reconstruction networks trained on synthetic objects. We also use multi-view images from ABO to measure the robustness of state-of-the-art metric learning approaches to different camera viewpoints. Finally, leveraging the physically-based rendering materials in ABO, we perform single- and multi-view material estimation for a variety of complex, real-world geometries. The full dataset is available for download at https://amazon-berkeley-objects.s3.amazonaws.com/index.html.