The proposed shopping assistant model, SANIP, helps blind persons detect hand-held objects and receive voice feedback on the information retrieved from the detected and recognized objects. The model consists of three Python modules: custom object detection, text detection, and barcode detection. For detection of hand-held objects, we created our own custom dataset comprising daily goods such as Parle-G, Tide, and Lays. In addition, we collected images of carts and exit signs, since it is essential for any shopper to use a cart and to notice the exit sign in case of emergency. For the other two modules, the retrieved text and barcode information is converted to speech and relayed to the blind person. When evaluated on the objects it was trained on, the model detected and recognized the desired targets with good accuracy and precision.
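To illustrate the text-and-barcode relay described above, the following is a minimal sketch of such a pipeline in Python. The library choices (OpenCV, pyzbar, pytesseract, pyttsx3) are assumptions made for illustration only; the abstract does not name the system's actual dependencies or detection models.

```python
# Minimal sketch of a text/barcode-to-speech relay, assuming OpenCV,
# pyzbar, pytesseract, and pyttsx3. These libraries are illustrative
# stand-ins; the paper does not specify its exact implementation.
import cv2
import pytesseract
import pyttsx3
from pyzbar import pyzbar


def speak(text: str) -> None:
    """Relay retrieved information to the user as speech."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


def relay_frame_info(frame) -> None:
    """Decode barcodes and printed text in one frame and speak the result."""
    # Barcode detection: pyzbar returns decoded payloads directly.
    for barcode in pyzbar.decode(frame):
        speak("Barcode detected: " + barcode.data.decode("utf-8"))

    # Text detection: OCR the grayscale frame with Tesseract.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()
    if text:
        speak(text)


if __name__ == "__main__":
    capture = cv2.VideoCapture(0)  # default camera as the video source
    ok, frame = capture.read()
    if ok:
        relay_frame_info(frame)
    capture.release()
```

In a deployed assistant, a loop over camera frames would feed each frame through the custom object detector first and fall back to the text and barcode paths as sketched here, with all outputs routed through the same speech channel.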