Visual SLAM using sparse maps based on feature points

This is a Master's thesis from Högskolan i Halmstad / Halmstad Embedded and Intelligent Systems Research (EIS)

Abstract: Visual Simultaneous Localisation And Mapping (SLAM) is a useful tool for creating 3D environments from feature points. Such visual systems could be very valuable in autonomous vehicles for improving localisation, since cameras are fairly cheap sensors capable of gathering large amounts of data. More efficient algorithms are still needed to better interpret the most valuable information. This thesis analyses how much a feature-based map can be reduced without losing significant accuracy during localisation. Semantic segmentation produced by a deep neural network is used to classify the features used to create the map, and the map is reduced by removing certain classes. The results show that feature-based maps can be significantly reduced without losing accuracy. The use of classes gave promising results: large numbers of features were removed, yet the system could still localise accurately. Removing some classes gave the same or even better results in certain weather conditions compared to localisation with a full-scale map.
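
The following is a minimal sketch, not the thesis authors' implementation, of the map-reduction idea described above: each feature point in the sparse map is assumed to carry a semantic class label obtained from a segmentation network, and the map is reduced by discarding points belonging to selected classes. The MapPoint structure, reduce_map function, and class names are hypothetical illustrations.

# Sketch of reducing a feature-point map by semantic class.
# All names and labels here are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Set
import numpy as np

@dataclass
class MapPoint:
    position: np.ndarray      # 3D position of the feature point
    descriptor: np.ndarray    # feature descriptor (e.g. ORB)
    semantic_class: str       # class label assigned via semantic segmentation

def reduce_map(points: List[MapPoint], classes_to_remove: Set[str]) -> List[MapPoint]:
    # Keep only the points whose semantic class is not in the removal set.
    return [p for p in points if p.semantic_class not in classes_to_remove]

# Example usage (hypothetical class labels):
# reduced_map = reduce_map(full_map, {"vegetation", "sky", "vehicle"})

Localisation would then run against reduced_map instead of the full map, which is the trade-off the thesis evaluates: how much of the map can be removed per class before localisation accuracy degrades.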
