The way the body is positioned and moved in the workplace can cause pain and physical harm. Ergonomists therefore perform ergonomic risk assessments based on direct visual observation of the workplace or on photos and videos taken there. The workers in these images are often not captured under ideal conditions: parts of a worker's body may lie outside the camera's field of view or be hidden by objects or by self-occlusion, which is the main problem in 2D human posture recognition. Predicting the position of body parts that are not visible in the image is difficult, and purely geometric methods are not well suited to this task. We therefore created a dataset of synthetic images of a 3D human model, focused specifically on painful postures, together with photographs of real people taken from different viewpoints. Each image was captured with predefined joint angles for the 3D model or the human subject, and the set includes images in which some body parts are not visible. Because the joint angles are known in advance, we could study this case by converting the input images into a sequence of joint connections between predefined body parts and extracting the desired joint angle with a convolutional neural network. On the test dataset we obtained a root mean square error (RMSE) of 12.89 and a mean absolute error (MAE) of 4.7.
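As a rough illustration of the kind of regression described above, the sketch below pairs a small convolutional network with the RMSE and MAE metrics reported on the test set. The architecture, input encoding (a single-channel map of joint connections), single-angle output, and all names such as JointAngleCNN are assumptions made for illustration only, not the authors' implementation.

```python
# Minimal sketch of joint-angle regression with a CNN, assuming the input
# is an image-like tensor encoding 2D joint connections and the target is
# one joint angle in degrees. All sizes and names here are hypothetical.
import torch
import torch.nn as nn

class JointAngleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted joint angle (degrees)
        )

    def forward(self, x):
        return self.regressor(self.features(x))

def rmse(pred, target):
    # Root mean square error, as reported in the abstract.
    return torch.sqrt(torch.mean((pred - target) ** 2))

def mae(pred, target):
    # Mean absolute error, as reported in the abstract.
    return torch.mean(torch.abs(pred - target))

if __name__ == "__main__":
    model = JointAngleCNN()
    # Dummy batch: 4 single-channel 64x64 "joint connection" maps.
    x = torch.randn(4, 1, 64, 64)
    y = torch.rand(4, 1) * 180.0  # ground-truth angles in [0, 180] degrees
    pred = model(x)
    print("RMSE:", rmse(pred, y).item(), "MAE:", mae(pred, y).item())
```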