We propose a new approach, Synthetic Optimized Layout with Instance Detection (SOLID), to pretrain object detectors with synthetic images. Our "SOLID" approach consists of two main components: (1) generating synthetic images using a collection of unlabelled 3D models with optimized scene arrangement; (2) pretraining an object detector on an "instance detection" task: given a query image depicting an object, detect all instances of the exact same object in a target image. Our approach does not need any semantic labels for pretraining and allows the use of arbitrary, diverse 3D models. Experiments on COCO show that with optimized data generation and a proper pretraining task, synthetic data can be highly effective for pretraining object detectors. In particular, pretraining on rendered images achieves performance competitive with pretraining on real images while using significantly less compute. Code is available at https://github.com/princeton-vl/SOLID.
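To make the instance-detection pretraining signal concrete, below is a minimal sketch of how one training example could be assembled. It assumes the renderer emits per-instance bounding boxes and 3D-model ids alongside each synthetic image; the names `RenderedScene` and `make_training_pair` are illustrative and are not the repository's actual API.

```python
# Hypothetical sketch (not the authors' released code) of the instance-detection
# pretraining signal: given a query identifying one rendered 3D model, every box
# in the target scene drawn from the same model is a positive. Box annotations
# come for free from the renderer, so no semantic (class) labels are needed.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # xyxy pixel coordinates

@dataclass
class RenderedScene:
    image_path: str        # path to the rendered target image
    boxes: List[Box]       # one box per rendered object instance
    model_ids: List[int]   # which 3D model produced each instance

def make_training_pair(scene: RenderedScene, query_model_id: int):
    """Build one (target image, query, positive boxes) pretraining example.

    Boxes whose 3D model matches the query are positives; the detector is
    trained to localize them, matching instances rather than semantic classes.
    """
    positives = [b for b, m in zip(scene.boxes, scene.model_ids)
                 if m == query_model_id]
    return scene.image_path, query_model_id, positives
```

Because positives are defined by exact model identity rather than category, arbitrary unlabelled 3D models can serve as pretraining data, which is the property the abstract highlights.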