In this article, gesture recognition and speech recognition applications are implemented on embedded systems with Tiny Machine Learning (TinyML). The target device features a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. Gesture recognition provides an innovative approach to nonverbal communication and has wide applications in human-computer interaction and sign language. In the hand gesture recognition implementation, a TinyML model is trained and deployed with the Edge Impulse framework, and based on the hand movements, an Arduino Nano 33 BLE device with a 6-axis IMU determines the direction in which the hand moves. Speech is a mode of communication. Speech recognition is the process by which spoken statements or commands are understood by a computer, which reacts accordingly; its main aim is to achieve communication between man and machine. In the speech recognition implementation, a TinyML model is trained and deployed with the Edge Impulse framework, and based on the keyword pronounced by a human, an Arduino Nano 33 BLE device with a built-in microphone makes an RGB LED glow red, green, or blue according to the keyword. The results of each application are obtained and listed in the results section, together with an analysis of those results.
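The keyword-to-LED mapping described above can be illustrated with a minimal Arduino (C++) sketch. This is only an illustrative outline, not the paper's implementation: classify_keyword() is a hypothetical placeholder standing in for the inference call of the deployed Edge Impulse model, and the sketch assumes the LEDR/LEDG/LEDB pin macros provided by the Arduino Nano 33 BLE board package, whose on-board RGB LED is driven active-LOW.

```cpp
#include <Arduino.h>
#include <string.h>

// Hypothetical placeholder: in the real application this would capture a short
// window of audio from the built-in microphone, run the deployed Edge Impulse
// classifier, and return the label with the highest confidence.
const char* classify_keyword() {
  return "red";  // stub value for illustration only
}

void setup() {
  pinMode(LEDR, OUTPUT);
  pinMode(LEDG, OUTPUT);
  pinMode(LEDB, OUTPUT);
  // Turn all three channels off (HIGH = off, since the RGB LED is active-LOW).
  digitalWrite(LEDR, HIGH);
  digitalWrite(LEDG, HIGH);
  digitalWrite(LEDB, HIGH);
}

void loop() {
  const char* keyword = classify_keyword();

  // Light exactly one channel according to the recognized keyword;
  // any other label leaves the LED off.
  digitalWrite(LEDR, strcmp(keyword, "red")   == 0 ? LOW : HIGH);
  digitalWrite(LEDG, strcmp(keyword, "green") == 0 ? LOW : HIGH);
  digitalWrite(LEDB, strcmp(keyword, "blue")  == 0 ? LOW : HIGH);

  delay(100);
}
```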