Image and RADAR fusion for autonomous vehicles

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: Robust detection, localization, and tracking of objects are essential for autonomous driving. In recent years, computer vision based on camera sensors has largely driven development, but 3D localization from images alone remains challenging. Range sensors such as LiDAR or RADAR are used to measure depth, each with its own advantages and drawbacks. The main idea of the project is to combine camera images with RADAR detections in order to estimate depth for the objects appearing in the images. Fusion strategies can give a more detailed description of the environment by utilizing both the 3D localization capabilities of range sensors and the higher spatial resolution of image data. The approach is to project 3D detections from the RADAR onto the image plane; this requires tight synchronization between the sensors and a calibrated projection of the RADAR data onto the corresponding image.
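To illustrate the projection step the abstract describes, below is a minimal sketch of projecting 3D RADAR detections onto the image plane with a pinhole camera model. It assumes a calibrated camera with intrinsic matrix K and a known rotation R and translation t from the radar frame to the camera frame; the function name, calibration values, and sample points are illustrative, not taken from the thesis.

```python
import numpy as np

def project_radar_to_image(points_radar, K, R, t):
    """Project 3D RADAR detections (N x 3, radar frame) onto the image plane.

    K : (3, 3) camera intrinsic matrix
    R : (3, 3) rotation from the radar frame to the camera frame
    t : (3,)   translation from the radar frame to the camera frame
    Returns (N, 2) pixel coordinates and a boolean mask marking points
    that lie in front of the camera.
    """
    # Transform points from the radar frame into the camera frame.
    points_cam = points_radar @ R.T + t
    # Keep only points with positive depth along the optical (z) axis.
    in_front = points_cam[:, 2] > 0
    # Perspective projection: apply intrinsics, then divide by depth.
    uv_h = points_cam @ K.T
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    return uv, in_front

# Example usage with assumed (illustrative) calibration values.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                  # assume radar and camera axes are aligned
t = np.array([0.0, 0.2, 0.5])  # assumed radar-to-camera offset in metres

radar_points = np.array([[ 2.0, 0.0, 15.0],   # x, y, z in the radar frame
                         [-1.5, 0.1, 30.0]])
pixels, valid = project_radar_to_image(radar_points, K, R, t)
print(pixels[valid])  # pixel coordinates of detections in front of the camera
```

In practice the extrinsics (R, t) come from sensor calibration, and the timestamps of the RADAR scan and the camera frame must be aligned before projection, which is the synchronization requirement noted in the abstract.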
