The ability to detect whether an object is 2D or 3D is extremely important in autonomous driving, since a detection error can have life-threatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Methods proposed to distinguish between 2D and 3D objects (e.g., liveness detection methods) are not suitable for autonomous driving, because they are object-dependent or do not consider the constraints associated with autonomous driving (e.g., the need for real-time decision-making while the vehicle is moving). In this paper, we present EyeDAS, a novel few-shot learning-based method aimed at securing an object detector (OD) against the threat posed by the stereoblindness syndrome (i.e., the inability to distinguish between 2D and 3D objects). We evaluate EyeDAS's real-time performance using 2,000 objects extracted from seven YouTube video recordings of street views taken by a dash cam from the driver's seat perspective. When applying EyeDAS to seven state-of-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). We also show that EyeDAS outperforms the baseline method and achieves an AUC of over 0.999 and a TPR of 1.0 with an FPR of 0.024.
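To make the reported metrics concrete, the sketch below shows one way the TPR, FPR, and AUC figures could be computed from per-object decisions, treating 3D as the positive class (so the 2D misclassification rate corresponds to the FPR and the 3D misclassification rate to 1 - TPR). This is a minimal illustration under those assumptions, not the authors' evaluation code; the function and variable names are hypothetical.

```python
# Hypothetical metric computation for a 2D-vs-3D classifier such as EyeDAS.
# Convention assumed here: label 1 = real 3D object, label 0 = 2D object
# (e.g., a picture of a person on a billboard).
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    """Compute TPR, FPR, and AUC from ground-truth labels and 3D-confidence scores."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)

    tp = np.sum((y_pred == 1) & (y_true == 1))  # 3D objects correctly accepted
    fn = np.sum((y_pred == 0) & (y_true == 1))  # 3D objects wrongly rejected
    fp = np.sum((y_pred == 1) & (y_true == 0))  # 2D objects wrongly accepted
    tn = np.sum((y_pred == 0) & (y_true == 0))  # 2D objects correctly rejected

    tpr = tp / (tp + fn)  # 1.0 in the paper's result (3D misclassification rate of 0%)
    fpr = fp / (fp + tn)  # 0.024 in the paper's result (2D misclassification rate of 2.4%)
    auc = roc_auc_score(y_true, y_score)  # threshold-independent, reported as > 0.999
    return tpr, fpr, auc
```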