Robust Multi-Modal Fusion for 3D Object Detection: Using multiple sensors of different types to robustly detect, classify, and position objects in three dimensions.

This is a Master's thesis from KTH / School of Electrical Engineering and Computer Science (EECS)

Abstract: The computer vision task of 3D object detection is fundamental to autonomous driving perception systems. These vehicles typically feature a multitude of sensors, such as cameras, radars, and light detection and ranging (LiDAR) sensors. One neural network approach that makes use of these sensor modalities is a multi-modal 3D object detection network with a fusion step that combines information from multiple data streams to jointly predict bounding boxes for detected objects. How this step should be performed, however, remains largely an open question, as the literature on the topic is still young. Thus, the question arises: how can information from different sensors be combined to perform 3D object detection for a real-world application, such as a mobile delivery robot with robustness requirements, and how should a fusion step be performed as part of a larger multi-modal fusion network? This work explores state-of-the-art multi-modal fusion models by testing them with sub-optimal sensor data augmentations to quantify robustness, including LiDAR point cloud subsampling and low-resolution LiDAR data. Sensor-to-sensor misalignments caused by poor calibration, decalibration, or spatio-temporal mis-synchronization are also simulated, and a set of fusion steps is compared and evaluated. Three novel fusion steps are proposed, of which the best-performing is a convolution fusion with an encoder-decoder and a squeeze-and-excitation block. The results indicate that early and late fusion methods are sensitive to sub-optimal LiDAR sensor conditions, and thus not suitable for an application with robust detection requirements; deep-fusion-based models are preferred instead. Furthermore, a bird's-eye-view fusion model is demonstrated not to be overly sensitive to small sensor-to-sensor misalignments, and the proposed fusion step with an encoder-decoder structure and a squeeze-and-excitation block is shown to further limit misalignment-related performance deficits. Introducing sensor misalignment as a training augmentation is also shown to alleviate performance loss and help the fusion step generalize under heavy misalignment.
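
The best-performing fusion step described above combines a convolutional encoder-decoder with a squeeze-and-excitation block. The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of such a fusion step applied to bird's-eye-view feature maps; the module names (ConvFusionSE, SqueezeExcitation), layer sizes, and the placement of the SE block at the bottleneck are illustrative assumptions, not the author's actual architecture.

```python
# Hypothetical sketch (not the thesis code): a convolutional fusion step that
# concatenates camera and LiDAR bird's-eye-view feature maps, passes them
# through a small encoder-decoder, and re-weights channels with a
# squeeze-and-excitation (SE) block. All layer sizes are illustrative.
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel re-weighting via global average pooling and two FC layers."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class ConvFusionSE(nn.Module):
    """Encoder-decoder convolution fusion with an SE block on the bottleneck."""
    def __init__(self, cam_channels: int, lidar_channels: int, out_channels: int):
        super().__init__()
        in_channels = cam_channels + lidar_channels
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.se = SqueezeExcitation(out_channels)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(out_channels, out_channels, 4, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([cam_bev, lidar_bev], dim=1)  # fuse along channels
        return self.decoder(self.se(self.encoder(fused)))


if __name__ == "__main__":
    cam = torch.randn(2, 64, 128, 128)    # camera features in BEV
    lidar = torch.randn(2, 64, 128, 128)  # LiDAR features in BEV
    out = ConvFusionSE(64, 64, 128)(cam, lidar)
    print(out.shape)  # torch.Size([2, 128, 128, 128])
```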
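
Likewise, the robustness augmentations mentioned in the abstract, LiDAR point cloud subsampling and simulated sensor-to-sensor misalignment, can be sketched as simple data transforms. The functions below (subsample_points, random_misalignment) and their noise magnitudes are illustrative assumptions and do not reproduce the thesis's experimental setup.

```python
# Hypothetical sketch (not the thesis code): two robustness augmentations.
# subsample_points randomly drops LiDAR points; random_misalignment perturbs
# the LiDAR-to-camera extrinsic transform with a small random rotation and
# translation, which can also be applied on the fly as a training augmentation.
import numpy as np


def subsample_points(points: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Randomly keep a fraction of the LiDAR points (N x D array)."""
    n_keep = int(len(points) * keep_ratio)
    idx = np.random.choice(len(points), size=n_keep, replace=False)
    return points[idx]


def random_misalignment(extrinsic: np.ndarray,
                        max_rot_deg: float = 2.0,
                        max_trans_m: float = 0.10) -> np.ndarray:
    """Return a perturbed copy of a 4x4 extrinsic (rotation + translation noise)."""
    angles = np.deg2rad(np.random.uniform(-max_rot_deg, max_rot_deg, size=3))
    cx, cy, cz = np.cos(angles)
    sx, sy, sz = np.sin(angles)
    # Small rotations about the x, y, and z axes composed into one matrix.
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    noise = np.eye(4)
    noise[:3, :3] = rz @ ry @ rx
    noise[:3, 3] = np.random.uniform(-max_trans_m, max_trans_m, size=3)
    return noise @ extrinsic
```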
