Recent research advances in salient object detection (SOD) can largely be attributed to ever-stronger multi-scale feature representations empowered by deep learning. Existing SOD deep models extract multi-scale features via off-the-shelf encoders and combine them via various elaborate decoders. However, the kernel sizes in this commonly used pipeline are usually "fixed". In our new experiments, we have observed that small kernel sizes are preferable in scenarios containing tiny salient objects, whereas large kernel sizes perform better on images with large salient objects. Inspired by this observation, we advocate "dynamic" scale routing as a brand-new idea in this paper, which results in a generic plug-in that can directly fit into existing feature backbones. This paper's key technical innovations are two-fold. First, instead of using vanilla convolutions with fixed kernel sizes in the encoder, we propose the dynamic pyramid convolution (DPConv), which dynamically selects the best-suited kernel sizes w.r.t. the given input. Second, we provide a self-adaptive bidirectional decoder design to best accommodate the DPConv-based encoder. The most significant highlight is its capability of routing between feature scales and collecting them dynamically, making the inference process scale-aware. As a result, the proposed approach advances the current SOTA performance. Both the code and dataset are publicly available at https://github.com/wuzhenyubuaa/DPNet.
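To make the idea of input-dependent kernel-size selection concrete, below is a minimal PyTorch sketch of a DPConv-style layer: parallel convolution branches with different kernel sizes plus a lightweight router (global pooling, a small MLP, and a softmax) that softly weights the branches per input. The branch sizes, the router design, and all names (e.g., `DynamicKernelConv`) are illustrative assumptions rather than the authors' implementation; the official code is in the repository linked above.

```python
# Illustrative sketch only: a soft, input-conditioned selection among kernel
# sizes, in the spirit of DPConv. Not the paper's actual DPConv module.
import torch
import torch.nn as nn


class DynamicKernelConv(nn.Module):
    """Softly selects among parallel convolutions with different kernel sizes."""

    def __init__(self, in_channels, out_channels, kernel_sizes=(3, 5, 7), reduction=4):
        super().__init__()
        # One branch per candidate kernel size; padding keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, k, padding=k // 2)
            for k in kernel_sizes
        ])
        # Lightweight router: global context -> per-branch weights (softmax).
        hidden = max(in_channels // reduction, 8)
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, len(kernel_sizes)),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        # Stack branch outputs: (B, num_branches, C_out, H, W).
        feats = torch.stack([branch(x) for branch in self.branches], dim=1)
        # Per-sample branch weights, reshaped for broadcasting.
        weights = self.router(x).view(x.size(0), -1, 1, 1, 1)
        return (weights * feats).sum(dim=1)


if __name__ == "__main__":
    layer = DynamicKernelConv(in_channels=64, out_channels=64)
    x = torch.randn(2, 64, 56, 56)
    print(layer(x).shape)  # torch.Size([2, 64, 56, 56])
```

In this sketch the routing is "soft" (a weighted sum over branches); a hard top-1 selection would be another plausible realization of dynamic scale routing.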