This paper presents our approach to intercepting a faster intruder UAV, inspired by the MBZIRC 2020 Challenge 1. By utilizing a priori knowledge of the shape of the intruder's trajectory, we can calculate an interception point. Target tracking is based on image processing by a YOLOv3 Tiny convolutional neural network, combined with depth calculation using a gimbal-mounted ZED Mini stereo camera. We use RGB and depth data from the camera, devising a noise-reducing histogram filter to extract the target's 3D position. The obtained 3D measurements of the target's position are used to calculate the position, orientation, and size of a figure-eight-shaped trajectory, which we approximate with a Bernoulli lemniscate. Once the approximation is deemed sufficiently precise, as measured by the distance between observations and the estimate, we calculate an interception point that positions the interceptor UAV directly on the intruder's path. Our method, significantly improved based on the experience gathered during the MBZIRC competition, has been validated in simulation and through field experiments. The results confirm that our visual-perception module can extract information describing the intruder UAV's motion with precision sufficient to support interception planning. In a majority of our simulated encounters, we can track and intercept a target that moves 30% faster than the interceptor. Corresponding tests in an unstructured environment yielded 9 out of 12 successful interceptions.