End-to-end paradigms significantly improve the accuracy of various deep-learning-based computer vision models. To this end, tasks like object detection have been upgraded by replacing non-end-to-end components, for example removing non-maximum suppression by training with a set loss based on bipartite matching. However, such an upgrade is not applicable to instance segmentation, due to its significantly higher output dimensions compared to object detection. In this paper, we propose an instance segmentation Transformer, termed ISTR, which is the first end-to-end framework of its kind. ISTR predicts low-dimensional mask embeddings and matches them with ground-truth mask embeddings for the set loss. In addition, ISTR conducts detection and segmentation concurrently with a recurrent refinement strategy, which provides a new way to achieve instance segmentation compared with the existing top-down and bottom-up frameworks. Benefiting from the proposed end-to-end mechanism, ISTR demonstrates state-of-the-art performance even with approximation-based suboptimal mask embeddings. Specifically, ISTR obtains 46.8/38.6 box/mask AP with ResNet50-FPN and 48.1/39.9 box/mask AP with ResNet101-FPN on the MS COCO dataset. Quantitative and qualitative results reveal the promising potential of ISTR as a solid baseline for instance-level recognition. Code has been made available at: https://github.com/hujiecpp/ISTR.
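To make the matching step concrete, the following is a minimal sketch of how low-dimensional predicted mask embeddings could be assigned to ground-truth embeddings with the Hungarian algorithm before computing a set loss. The function name and the cosine-distance cost are illustrative assumptions for this sketch; the actual ISTR matching cost also combines classification and box terms.

```python
# Minimal sketch: bipartite matching between predicted and ground-truth
# mask embeddings. The cost below uses only cosine distance between
# embeddings; ISTR's full matching cost additionally includes class and
# box terms (illustrative assumption, not the paper's exact formulation).
import torch
from scipy.optimize import linear_sum_assignment


def match_mask_embeddings(pred_emb: torch.Tensor, gt_emb: torch.Tensor):
    """Hungarian matching on an embedding-distance cost.

    pred_emb: (N, D) predicted mask embeddings for N queries.
    gt_emb:   (M, D) ground-truth mask embeddings for M instances.
    Returns index arrays (pred_idx, gt_idx) of the optimal assignment.
    """
    pred = torch.nn.functional.normalize(pred_emb, dim=-1)
    gt = torch.nn.functional.normalize(gt_emb, dim=-1)
    cost = 1.0 - pred @ gt.t()  # (N, M) cosine distance matrix
    pred_idx, gt_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return pred_idx, gt_idx
```

In a set-loss setup of this kind, only the matched prediction/ground-truth pairs contribute to the training loss (e.g., a regression loss between matched embeddings plus classification and box losses), which removes the need for post-processing such as non-maximum suppression.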