Emotion recognition through facial expressions with Kinect

This is a Bachelor's thesis from KTH/School of Computer Science and Communication (CSC)

Authors: Simon Geries; Julius Bladh; [2014]


Abstract: Facial expressions are a part of our body language that helps us clarify verbal communication between humans. We use our facial expressions every day, both consciously and unconsciously, to express emotions and attitudes depending on the situation. The hypothesis of the study is framed by the question: given a facial expression, how well can Microsoft Kinect, as an input method, determine a person's emotion with the two algorithms Naive Bayes and Sequential Minimal Optimization? The emotions are limited to happy, sad, surprised and disgusted. With the help of Kinect, a person's facial data, both coordinates of the face and parameterized data, were saved and used for machine learning. Two field studies were conducted, with 30 and 31 participants respectively, who were instructed to simulate each of the facial expressions mentioned above. In the first field study, coordinates of specific parts of the face were saved, while in the second field study, parameterized data of the face was saved. All data was fed into the machine learning software Weka and processed with the two algorithms Naive Bayes and Sequential Minimal Optimization. The best result came from field study 2, where both the Naive Bayes algorithm and Sequential Minimal Optimization achieved a success rate of 56.45%. The conclusion of this report is that there is evidence supporting the hypothesis that Kinect, with the help of both algorithms, can recognize facial expressions to a certain extent. However, for the study outcome to be successful, a larger sample of data is required for the learning process than what was used in this report's survey.
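
As an illustration of the kind of evaluation described above, the sketch below shows how facial-feature data exported from Kinect could be run through Weka's Java API with the two classifiers mentioned in the abstract. The file name facial_expressions.arff, the 10-fold cross-validation setup, and default classifier settings are assumptions made for the example, not details taken from the thesis itself.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class FacialExpressionClassification {
    public static void main(String[] args) throws Exception {
        // Load the exported facial-feature data (hypothetical file name).
        // Each instance holds Kinect face features plus a class label
        // such as happy, sad, surprised or disgusted.
        Instances data = DataSource.read("facial_expressions.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Evaluate Naive Bayes with 10-fold cross-validation.
        NaiveBayes nb = new NaiveBayes();
        Evaluation nbEval = new Evaluation(data);
        nbEval.crossValidateModel(nb, data, 10, new Random(1));
        System.out.printf("Naive Bayes accuracy: %.2f%%%n", nbEval.pctCorrect());

        // Evaluate SMO (a support vector machine trained with
        // Sequential Minimal Optimization) the same way.
        SMO smo = new SMO();
        Evaluation smoEval = new Evaluation(data);
        smoEval.crossValidateModel(smo, data, 10, new Random(1));
        System.out.printf("SMO accuracy: %.2f%%%n", smoEval.pctCorrect());
    }
}

Both classifiers are evaluated on the same data split so their success rates can be compared directly, mirroring the side-by-side comparison reported in the abstract.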
