In this paper, we address an issue that the visually impaired commonly face while crossing intersections and propose a solution in the form of a mobile application. The application uses a deep convolutional neural network model, LytNetV2, to output necessary information that the visually impaired may lack when without human companions or guide dogs. A prototype of the application runs on iOS devices running version 11 or above. It is designed for comprehensiveness, concision, accuracy, and computational efficiency by delivering in real time the two most important pieces of information required to cross the road: pedestrian traffic light color and direction. Furthermore, it is specifically aimed at supporting those facing financial burden, as the solution takes the form of a free mobile application. Through the modification and use of key principles from MobileNetV3, such as depthwise separable convolutions and squeeze-and-excite layers, the deep neural network model achieves a classification accuracy of 96% and an average angle error of 6.15 degrees while running at 16.34 frames per second. Additionally, the model is trained as an image classifier, allowing for a faster and more accurate model. The network outperforms other methods, such as object detection and non-deep-learning algorithms, in both accuracy and thoroughness. The information is delivered through both auditory signals and vibrations, and the application has been tested with seven visually impaired users, receiving above-satisfactory responses.
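The efficiency gain from the depthwise separable convolutions mentioned above comes from factoring a standard convolution into a per-channel spatial convolution followed by a 1×1 pointwise convolution. A minimal sketch of the parameter-count arithmetic (the layer sizes below are illustrative, not taken from LytNetV2):

```python
def standard_conv_params(k, c_in, c_out):
    # Standard convolution: every output channel mixes all input channels
    # with its own k x k kernel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel.
    # Pointwise step: a 1x1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

# Illustrative example: a 3x3 layer with 32 input and 64 output channels.
std = standard_conv_params(3, 32, 64)        # 18432 weights
sep = depthwise_separable_params(3, 32, 64)  # 2336 weights
print(std, sep, round(std / sep, 1))
```

For this hypothetical layer, the separable form uses roughly 8× fewer weights, which is the kind of saving that lets the model sustain real-time frame rates on a phone.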