Human Activity Recognition (HAR) is an active research topic with applications in medical support, sports, fitness, social networking, human-computer interfaces, elderly care, entertainment, and surveillance, among others. Traditionally, HAR has relied on computer vision methods, which suffer from several drawbacks such as privacy concerns, sensitivity to environmental factors, limited mobility, higher operating costs, and occlusion. A recent trend is the use of sensors, particularly inertial sensors, whose data offer several advantages over traditional computer vision algorithms. The limitations of vision-based approaches are well documented in the literature, which also reports Deep Neural Network (DNN) and Machine Learning (ML) approaches for activity classification from sensor data. In this work, we examine and analyze different Machine Learning and Deep Learning approaches for Human Activity Recognition using smartphone inertial sensor data in order to identify which approach is best suited for this application.
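To make the kind of comparison described above concrete, the following is a minimal, hedged sketch of a classical ML baseline for sensor-based HAR: fixed-length windows of smartphone accelerometer readings are summarized with simple statistical features and classified with a scikit-learn model. The sampling rate, window length, feature set, and synthetic data are illustrative assumptions, not the approach evaluated in this work.

```python
# Illustrative HAR baseline (assumed: 50 Hz tri-axial accelerometer, 128-sample
# windows, one activity label per window). Synthetic data stands in for a real
# dataset such as UCI HAR; this is a sketch, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(windows):
    """windows: (n_windows, 128, 3) array of x/y/z accelerometer samples.
    Returns simple per-axis statistics commonly used as HAR features."""
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)  # shape: (n_windows, 12)

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(1000, 128, 3))   # placeholder sensor windows
y = rng.integers(0, 6, size=1000)         # e.g. 6 activity classes

X = extract_features(X_raw)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Deep Learning approaches differ mainly in that the windowed raw signals are fed directly to a network (e.g. a 1D CNN or LSTM), replacing the hand-crafted feature step above.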