Deep Reinforcement Learning for Complete Coverage Path Planning in Unknown Environments

This is a Master's thesis from KTH / School of Electrical Engineering and Computer Science (EECS)

Author: Omar Boufous [2020]

Abstract: Mobile robots must operate autonomously, often in unknown and unstructured environments. To achieve this objective, a robot must be able to correctly perceive its environment, plan its path, and move around safely, without human supervision. Navigation from an initial position to a target location has long been a challenging problem in robotics. This work examines the particular navigation task of complete coverage planning in outdoor environments. A motion planner based on Deep Reinforcement Learning is proposed, in which a Deep Q-Network (DQN) is trained to learn a control policy that approximates the optimal coverage strategy, using a dynamic map of the environment. In addition to this path planning algorithm, a computer vision system is presented that captures the images of a stereo camera mounted on the robot, detects obstacles, and updates the workspace map. Simulation results show that the algorithm generalizes well to different types of environments. After multiple training sequences of the Reinforcement Learning agent, the virtual mobile robot is able to cover the whole space with a coverage rate of over 80% on average, starting from varying initial positions, while avoiding obstacles by relying on local sensory information. The experiments also demonstrate that the DQN agent performed the coverage task better than a human operator.
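The abstract describes training a Q-learning agent to maximize coverage of a mapped workspace while penalizing collisions. As a rough illustration of that idea, the sketch below uses tabular Q-learning in place of the thesis's deep network, on a hypothetical toy grid-coverage environment; the environment, reward values, and hyperparameters are illustrative assumptions, not the actual setup used in the thesis.

```python
import random

class CoverageGrid:
    """Toy grid-world for coverage: the agent must visit every free cell.
    State = (position, frozenset of visited cells). Reward: +1 for
    covering a new cell, -0.1 for revisiting, -1 for hitting a wall
    or obstacle (the agent then stays in place)."""
    MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}

    def __init__(self, size=4, obstacles=frozenset()):
        self.size = size
        self.free = {(r, c) for r in range(size)
                     for c in range(size)} - obstacles

    def reset(self, start=(0, 0)):
        self.pos, self.visited = start, frozenset([start])
        return (self.pos, self.visited)

    def step(self, action):
        dr, dc = self.MOVES[action]
        nxt = (self.pos[0] + dr, self.pos[1] + dc)
        if nxt not in self.free:                 # wall or obstacle
            return (self.pos, self.visited), -1.0, False
        reward = 1.0 if nxt not in self.visited else -0.1
        self.pos = nxt
        self.visited = self.visited | {nxt}
        done = self.visited == self.free         # full coverage reached
        return (self.pos, self.visited), reward, done

def train(env, episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, max_steps=200):
    """Epsilon-greedy tabular Q-learning (a stand-in for the DQN)."""
    Q = {}
    for _ in range(episodes):
        state = env.reset()
        for _ in range(max_steps):
            qs = Q.setdefault(state, [0.0] * 4)
            if random.random() < eps:
                a = random.randrange(4)          # explore
            else:
                a = max(range(4), key=qs.__getitem__)  # exploit
            nxt, r, done = env.step(a)
            nq = Q.setdefault(nxt, [0.0] * 4)
            # Standard Q-learning update toward the bootstrapped target.
            qs[a] += alpha * (r + gamma * max(nq) * (not done) - qs[a])
            state = nxt
            if done:
                break
    return Q

def coverage_rate(env, Q, start=(0, 0), max_steps=100):
    """Greedy rollout; returns the fraction of free cells covered."""
    state = env.reset(start)
    for _ in range(max_steps):
        qs = Q.get(state, [0.0] * 4)
        state, _, done = env.step(max(range(4), key=qs.__getitem__))
        if done:
            break
    return len(env.visited) / len(env.free)
```

On this toy problem the visited set is part of the state, so the table stays small; the thesis's DQN plays the same role as the Q table here but generalizes over map observations instead of enumerating states.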
