Labelling Motion Capture Markers Using Dynamic Graph Convolutional Neural Networks

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Author: Jacob Stuart; [2022]


Abstract: This thesis concerns labelling unlabelled motion capture (mocap) data using a Dynamic Graph Convolutional Neural Network (DGCNN) [46]. The most common type of mocap system, passive optical motion capture, records the 3D positions of multiple reflective markers using multiple infrared cameras with overlapping fields of view. To make the recorded points useful, each point must be assigned a label uniquely identifying what it was attached to in the real world. For human subjects, the correspondence between recorded markers and the human body is usually established by having the subject perform a calibration pose at the beginning of the recording. This thesis investigates whether this labelling process can instead be performed by a DGCNN, an architecture originally devised for general point cloud segmentation and classification. To evaluate this, a DGCNN was implemented and trained on synthetically generated mocap data. When applied to non-synthetic mocap data released as part of the state-of-the-art transformer-based SOMA [15] auto-labelling system, the DGCNN correctly labelled 99% of all recorded points, only slightly below the 100% state-of-the-art performance on the same dataset.
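The core building block of a DGCNN is the EdgeConv operation: for every point, a k-nearest-neighbour graph is rebuilt from the current features, an edge feature is computed from each point together with its offset to each neighbour, and the results are max-pooled over the neighbourhood. The sketch below is a minimal NumPy illustration of this idea on a toy "frame" of mocap markers; the array shapes, weight matrix `w`, and single ReLU layer standing in for the shared MLP are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def knn(points, k):
    # Pairwise squared distances; return each point's k nearest neighbours
    # (excluding the point itself, whose distance is forced to infinity).
    d = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def edge_conv(points, weights, k=4):
    """One EdgeConv layer: features of [x_i, x_j - x_i] max-pooled over neighbours."""
    idx = knn(points, k)                                  # (N, k) neighbour indices
    centers = np.repeat(points[:, None, :], k, axis=1)    # (N, k, 3) point x_i repeated
    neighbours = points[idx]                              # (N, k, 3) neighbours x_j
    # Edge feature per (point, neighbour) pair: the point and its offset to the neighbour.
    edge_feats = np.concatenate([centers, neighbours - centers], axis=-1)  # (N, k, 6)
    h = np.maximum(edge_feats @ weights, 0.0)             # shared MLP (one ReLU layer here)
    return h.max(axis=1)                                  # max over neighbours -> (N, out)

# Toy mocap frame: 5 markers in 3D, random layer weights (illustrative only).
rng = np.random.default_rng(0)
markers = rng.normal(size=(5, 3))
w = rng.normal(size=(6, 8))
features = edge_conv(markers, w, k=3)
print(features.shape)  # (5, 8): one 8-dimensional feature per marker
```

In a full DGCNN several such layers are stacked, with the k-NN graph recomputed in feature space at every layer, and a final per-point classifier head would map each marker's feature vector to a label.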
