Can we see it all? Do we know it all? These questions probe our contemporary capacity to solve problems. Recent studies have explored several object detection models; however, most fail to meet demands for objectness and predictive accuracy, especially in developing and under-developed countries. At the same time, several global security threats have necessitated the development of efficient approaches to tackle these issues. This paper proposes an object detection model for cyber-physical systems known as Smart Surveillance Systems (3s). The research presents a two-phase approach that leverages the YOLO v3 deep learning architecture for real-time visual object detection. A transfer learning approach was implemented to reduce training time and computing resources. The model was trained on the MS COCO dataset, which contains 328,000 annotated image instances. Deep learning techniques such as pre-processing, data pipelining, and detection were implemented to improve efficiency. Compared with other novel research models, the proposed model performed exceedingly well in detecting WILD objects in surveillance footage, recording an accuracy of 99.71% and an improved mAP of 61.5.
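The transfer-learning step described above can be sketched in PyTorch. This is a minimal, illustrative sketch only: the layer shapes below are toy stand-ins, not the actual YOLO v3 (Darknet-53) architecture, and the five output channels are a hypothetical per-cell prediction layout.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained backbone (YOLO v3 actually uses Darknet-53).
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
)
# New detection head to be fine-tuned on the target data (illustrative shape:
# e.g. 4 box coordinates + 1 objectness score per cell).
head = nn.Conv2d(32, 5, kernel_size=1)

# Transfer learning: freeze the pretrained backbone, train only the new head.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)  # only the head's parameters remain trainable
```

Freezing the backbone is what cuts training time and compute: gradients are computed and stored only for the small head, while the pretrained features are reused as-is.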