Prevalent state-of-the-art instance segmentation methods fall into a query-based scheme, in which instance masks are derived by querying image features with a set of instance-aware embeddings. In this work, we devise a new training framework that boosts query-based models through discriminative query embedding learning. It explores two essential properties of the relation between queries and instances, namely dataset-level uniqueness and transformation equivariance. First, our algorithm uses the queries to retrieve the corresponding instances from the whole training dataset, instead of only searching within individual scenes. As querying instances across scenes is more challenging, the segmenters are forced to learn more discriminative queries for effective instance separation. Second, our algorithm encourages both image (instance) representations and queries to be equivariant against geometric transformations, leading to more robust instance-query matching. On top of four famous query-based models (i.e., CondInst, SOLOv2, SOTR, and Mask2Former), our training algorithm provides significant performance gains (e.g., +1.6--3.2 AP) on the COCO dataset. In addition, our algorithm improves the performance of SOLOv2 by 2.7 AP on the LVISv1 dataset.
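The abstract leaves the exact objectives to the paper body; purely as an illustrative sketch (the symbols $q_i$, $k_i^{+}$, $\mathcal{N}$, and $\tau$ below are our assumptions, not the paper's notation), a dataset-level uniqueness term of this kind is commonly realized as an InfoNCE-style contrastive loss, in which each query must retrieve its matched instance embedding against negatives pooled from other training images:
\[
\mathcal{L}_{\mathrm{uni}} = -\log \frac{\exp\!\left(q_i^{\top} k_i^{+} / \tau\right)}{\exp\!\left(q_i^{\top} k_i^{+} / \tau\right) + \sum_{k^{-} \in \mathcal{N}} \exp\!\left(q_i^{\top} k^{-} / \tau\right)},
\]
where $q_i$ is a query embedding, $k_i^{+}$ its matched instance embedding, $\mathcal{N}$ a set of instance embeddings drawn from other images in the dataset, and $\tau$ a temperature. Under the same reading, transformation equivariance asks that, for a geometric transformation $T$, the queries produced for $T(I)$ still match the correspondingly transformed instances of image $I$.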