Human Grasp Synthesis with Deep Learning

This is a thesis for a professional degree at the advanced (second-cycle) level from KTH/Robotics, Perception and Learning (RPL).

Abstract: The human hand is one of the most complex organs of the human body. Because it enables us to grasp objects in many different ways, it played a crucial role in the rise of the human species. Controlling hands the way a human does is a key step towards friendly human-robot interaction and realistic virtual human simulation. Grasp generation has mostly been studied with the goal of producing physically stable grasps. This thesis addresses a different aspect: how to generate realistic, natural-looking grasps that resemble human grasps. To simplify the problem, the wrist position is assumed to be known and only the finger pose is generated. Since the realism of a grasp is hard to capture in equations, data-driven machine learning techniques are used. This thesis investigates the application of deep neural networks to the grasp generation problem. Two object shape representations (point clouds and multi-view depth images) and multiple network architectures are evaluated on a human grasping dataset collected in a virtual reality environment. The generated grasps are highly realistic and human-like. Although the fingers sometimes penetrate the object surface, the overall finger poses around the grasped objects are similar to the collected human data, and this performance extends to object categories not seen during training. This work validates the effectiveness of a data-driven deep learning approach to human-like grasp synthesis. I believe the realism objective investigated in this thesis can be combined with existing mechanical stability criteria to achieve grasp generation that is both natural-looking and reliable.
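The point-cloud variant described in the abstract can be sketched as a PointNet-style regressor: a shared per-point feature extractor, an order-invariant max pool that summarizes object shape, and an MLP head that outputs finger joint angles. Everything in the sketch below (layer sizes, a hypothetical 20-joint hand model, random untrained weights) is an illustrative assumption, not the thesis's actual architecture:

```python
import numpy as np

def pointnet_grasp_forward(points, W1, W2, W3):
    """Map an object point cloud to finger joint angles.

    A minimal sketch of a PointNet-style regressor; W1..W3 are
    illustrative, untrained weight matrices, and the 20-DoF hand
    model is an assumption for this example.
    """
    # Shared per-point MLP: the same weights are applied to every (x, y, z) point
    h = np.maximum(points @ W1, 0.0)            # (N, 64) ReLU features
    # Max pooling over points gives a global shape descriptor that is
    # invariant to the ordering of the input points
    g = h.max(axis=0)                           # (64,)
    # Regress a finger pose from the global shape feature
    z = np.maximum(g @ W2, 0.0)                 # (32,) hidden layer
    joint_angles = np.tanh(z @ W3) * np.pi / 2  # (20,) angles in [-pi/2, pi/2]
    return joint_angles

rng = np.random.default_rng(0)
points = rng.normal(size=(512, 3))              # toy object point cloud
W1 = rng.normal(scale=0.1, size=(3, 64))
W2 = rng.normal(scale=0.1, size=(64, 32))
W3 = rng.normal(scale=0.1, size=(32, 20))
angles = pointnet_grasp_forward(points, W1, W2, W3)
print(angles.shape)  # one angle per assumed hand joint
```

In a trained system the weights would be fitted by regression against the recorded human finger poses; the max pool is the design choice that lets the network accept point clouds of varying size and ordering.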
