Object detection in Ultra High-Resolution (UHR) images has long been a challenging problem in computer vision due to the varying scales of the targeted objects. When it comes to barcode detection, resizing UHR input images to smaller sizes often leads to the loss of pertinent information, while processing them directly is highly inefficient and computationally expensive. In this paper, we propose using semantic segmentation to achieve fast and accurate detection of barcodes of various scales in UHR images. Our pipeline involves a modified Region Proposal Network (RPN) on images of size greater than 10k$\times$10k and a newly proposed Y-Net segmentation network, followed by a post-processing workflow for fitting a bounding box around each segmented barcode mask. The end-to-end system has a latency of 16 milliseconds, which is $2.5\times$ faster than YOLOv4 and $5.9\times$ faster than Mask R-CNN. In terms of accuracy, our method outperforms YOLOv4 and Mask R-CNN by 5.5% and 47.1% $mAP$, respectively, on a synthetic dataset. We have made the generated synthetic barcode dataset and its code available at http://www.github.com/viplabB/SBD/.
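To make the mask-to-box post-processing step concrete, below is a minimal sketch of how a bounding box can be fitted around each segmented barcode mask. It assumes the segmentation network outputs a binary mask (1 = barcode pixel) and uses OpenCV's connected-component and rotated-rectangle routines; the function and variable names are illustrative and not taken from the paper's released code.

```python
# A hedged sketch of the post-processing workflow: fit a (possibly rotated)
# bounding box around each connected barcode region in a binary mask.
import cv2
import numpy as np

def masks_to_boxes(binary_mask: np.ndarray) -> list:
    """Return one set of four corner points per segmented barcode mask."""
    # Label each connected blob of barcode pixels separately.
    num_labels, labels = cv2.connectedComponents(binary_mask.astype(np.uint8))
    boxes = []
    for label in range(1, num_labels):  # label 0 is the background
        component = (labels == label).astype(np.uint8)
        contours, _ = cv2.findContours(component, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            # minAreaRect yields a rotated rectangle ((cx, cy), (w, h), angle);
            # boxPoints converts it to its four corner coordinates.
            rect = cv2.minAreaRect(contour)
            boxes.append(cv2.boxPoints(rect))
    return boxes
```

Using a rotated rectangle rather than an axis-aligned one preserves tight boxes for barcodes segmented at arbitrary orientations; an axis-aligned variant would simply replace `minAreaRect`/`boxPoints` with `cv2.boundingRect` on each contour.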