Automatic human pose annotation for loose-fitting clothes

Published in 2019 16th International Conference on Machine Vision Applications (MVA)

Recommended citation: Matsumoto, T., Shimosato, K., Maeda, T., Murakami, T., Murakoso, K., Mino, K., & Ukita, N. (2019, May). Automatic human pose annotation for loose-fitting clothes. In 2019 16th International Conference on Machine Vision Applications (MVA) (pp. 1-6). IEEE.

This paper proposes a framework for automatically annotating human body keypoints in images for training 2D pose estimation models. While considerable community effort has produced a huge number of pose-annotated images, these datasets mainly cover people wearing common clothes, whose body keypoints are relatively easy to annotate. This paper, in contrast, focuses on annotating people wearing loose-fitting clothes that occlude many body keypoints. To annotate such people automatically and correctly, we transfer the 3D coordinates of the keypoints observed without loose-fitting clothes, which can be captured by a motion capture system (MoCap). These 3D keypoints are projected onto an image in which the body pose under the loose-fitting clothes is similar to the one captured by the MoCap. Pose similarity between bodies with and without loose-fitting clothes is evaluated using the 3D geometric configurations of MoCap markers that remain visible even under loose-fitting clothes. Experimental results validate the effectiveness of the proposed framework for human pose estimation.
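As a rough, self-contained illustration of the two steps the abstract describes (not the authors' implementation), the Python sketch below projects MoCap 3D keypoints into an image with a standard pinhole camera model and scores pose similarity from pairwise distances between markers that remain visible under loose-fitting clothes. The function names, camera parameters, and the distance-based similarity metric are all assumptions made for illustration.

```python
import numpy as np

def project_keypoints(X, K, R, t):
    """Project Nx3 world-space MoCap keypoints to Nx2 pixel coordinates.

    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Assumes a standard pinhole camera model (no lens distortion).
    """
    Xc = R @ X.T + t.reshape(3, 1)   # camera-space coordinates, shape 3xN
    uv = K @ Xc                      # homogeneous pixel coordinates, 3xN
    return (uv[:2] / uv[2]).T        # perspective divide -> Nx2 pixels

def pose_similarity(markers_a, markers_b):
    """Compare two Mx3 sets of corresponding visible-marker positions.

    Scores similarity from pairwise inter-marker distances (an assumed
    metric); higher values mean more similar poses.
    """
    def pairwise_distances(m):
        diff = m[:, None, :] - m[None, :, :]   # MxMx3 displacement tensor
        return np.linalg.norm(diff, axis=-1)   # MxM distance matrix
    return -np.abs(pairwise_distances(markers_a)
                   - pairwise_distances(markers_b)).mean()
```

Because pairwise inter-marker distances are invariant to the rigid transform between the MoCap and camera coordinate frames, a metric of this kind can compare marker configurations without first aligning the two bodies.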

Download paper here