Existing fiducial marker solutions are designed for efficient detection and decoding; however, their ability to stand out in natural environments is difficult to infer from relatively limited analysis. Moreover, degraded performance in challenging image capture scenarios, such as poor exposure, motion blur, and off-axis viewing, highlights their limitations. E2ETag introduces an end-to-end trainable method for designing fiducial markers and a complementary detector. By incorporating back-propagatable marker augmentation and superimposition into training, the method learns to generate markers that can be detected and classified in challenging real-world environments by a fully convolutional detector network. Results demonstrate that E2ETag outperforms existing methods under ideal conditions and performs far better in the presence of motion blur, contrast fluctuations, noise, and off-axis viewing angles. Source code and trained models are available at https://github.com/jbpeace/E2ETag.
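To make the end-to-end idea concrete, below is a minimal PyTorch sketch, not the authors' released implementation, of training a learnable marker jointly with a detector: the marker pixels are free parameters, and because the warp, degradation, and superimposition steps are all differentiable, the detection loss back-propagates into the marker itself. All sizes, augmentation parameters, and the toy detector are hypothetical, and a single marker with a presence heatmap stands in for the paper's multi-class marker set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG, MRK = 128, 32  # hypothetical scene and marker resolutions

# Marker pixels are free parameters, optimized jointly with the detector.
marker = nn.Parameter(torch.rand(1, 3, MRK, MRK))

detector = nn.Sequential(            # tiny fully convolutional detector
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),             # per-pixel marker-presence logits
)

def render(background):
    """Differentiably warp, degrade, and superimpose the marker."""
    b = background.shape[0]
    # Random affine warp approximates scale change and off-axis viewing;
    # every op below is differentiable, so gradients reach the marker.
    theta = torch.zeros(b, 2, 3)
    theta[:, 0, 0] = theta[:, 1, 1] = 2.5        # shrink marker in scene
    theta[:, :, :2] += 0.3 * torch.randn(b, 2, 2)  # rotation/shear jitter
    theta[:, :, 2] = torch.rand(b, 2) - 0.5        # translation jitter
    grid = F.affine_grid(theta, (b, 3, IMG, IMG), align_corners=False)
    patch = F.interpolate(marker.expand(b, -1, -1, -1), size=IMG,
                          mode='bilinear', align_corners=False)
    warped = F.grid_sample(patch, grid, align_corners=False)
    mask = F.grid_sample(torch.ones_like(patch), grid, align_corners=False)
    # Photometric degradation: contrast jitter plus sensor-style noise.
    warped = warped * (0.6 + 0.8 * torch.rand(b, 1, 1, 1))
    warped = warped + 0.05 * torch.randn_like(warped)
    scene = background * (1 - mask) + warped.clamp(0, 1) * mask
    return scene, mask

opt = torch.optim.Adam([marker, *detector.parameters()], lr=1e-3)
for step in range(100):
    background = torch.rand(8, 3, IMG, IMG)  # stand-in for real photos
    scene, mask = render(background)
    logits = detector(scene)
    # The detector must light up exactly where the marker landed.
    loss = F.binary_cross_entropy_with_logits(logits, mask[:, :1])
    opt.zero_grad(); loss.backward(); opt.step()
```

In this toy setup the marker evolves toward patterns the detector can separate from backgrounds despite the injected blur-like resampling, contrast jitter, and noise; the full method additionally classifies among many marker IDs and trains against real background imagery.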