Detecting transparent objects in natural scenes is challenging due to the low contrast in texture, brightness, and color. Recent deep-learning-based works reveal that it is effective to leverage boundaries for transparent object detection (TOD). However, these methods usually suffer from a boundary-related imbalance problem, leading to limited generalization capability. Specifically, boundaries in the background that share the same characteristics as the boundaries of transparent objects, yet are far fewer in number, usually hurt performance. To overcome this boundary-related imbalance problem, we propose a novel content-dependent data augmentation method termed FakeMix. Since collecting such troublesome background boundaries is hard without corresponding annotations, we instead synthesize them by pasting the boundaries of transparent objects from other samples into the current image during training, which adjusts the data space and improves the generalization of the models. Furthermore, we present AdaptiveASPP, an enhanced version of ASPP that can capture multi-scale and cross-modality features dynamically. Extensive experiments demonstrate that our methods clearly outperform state-of-the-art methods. We also show that our approach transfers well to related tasks in which models face similar difficulties, such as mirror detection, glass detection, and camouflaged object detection. Code will be made publicly available.
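To make the augmentation idea above concrete, the snippet below is a minimal sketch of pasting another sample's transparent-object boundary pixels into the current training image while leaving the current image's labels unchanged, so the pasted boundaries act as "fake" background boundaries. It assumes same-sized images and binary boundary masks; the function name `fakemix_paste` and the `alpha` blending parameter are hypothetical and not taken from the paper.

```python
import numpy as np

def fakemix_paste(image, boundary_mask, donor_image, donor_boundary_mask, alpha=1.0):
    """Sketch of a FakeMix-style augmentation (hypothetical helper, not the authors' code).

    image:               (H, W, 3) current training image
    boundary_mask:       (H, W) binary boundary label of the current image
    donor_image:         (H, W, 3) image providing transparent-object boundaries
    donor_boundary_mask: (H, W) binary boundary mask of the donor's transparent objects
    """
    # Pixels lying on the donor's transparent-object boundary.
    paste = donor_boundary_mask.astype(bool)

    augmented = image.copy()
    # Blend donor boundary pixels into the current image; alpha=1.0 is a hard paste.
    augmented[paste] = (alpha * donor_image[paste]
                        + (1.0 - alpha) * image[paste]).astype(image.dtype)

    # The current sample's labels (segmentation and boundary maps) stay unchanged,
    # so the network must learn to ignore these transplanted "fake" boundaries.
    return augmented, boundary_mask
```

In this sketch the adjustment of the data space comes entirely from the image side: the network sees boundary-like structures in the background that carry no positive label, which is the imbalance the abstract describes.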