Radar Detection Using Deep Learning

This is a Master's thesis from Lunds universitet/Matematik LTH

Abstract: This thesis aims to reproduce and improve on a paper about dynamic road user detection in 2D bird's-eye-view radar point clouds in the context of autonomous driving. We choose RadarScenes, a recent large public dataset, to train and test deep neural networks. We adopt the two best-performing approaches: the image-based object detector with grid mappings and the semantic-segmentation-based clustering approach, with YOLO v3 and PointNet++ as the respective deep networks. We implement a radar-based version of DBSCAN to extract instance clusters (objects). For both approaches, various preprocessing techniques are implemented, such as a velocity skew function, upsampling, and data augmentations including rotation and flipping. We also adapt the evaluation metrics IoU, mAP, and F1-score to point clusters so that the outputs of the two approaches are directly comparable. The reproduction of both approaches achieves performance comparable to the original paper, confirming its finding that the image-based detector clearly outperforms the semantic-segmentation-based clustering approach. We further improve the metrics by introducing variations in the DBSCAN pipeline. In an ablation study of the YOLO approach, horizontal flipping of the point cloud emerges as the best data augmentation operation. A corresponding ablation study of the PointNet++/DBSCAN pipeline shows that randomly jittering the points while taking the radial velocity of the radar reflections into account yields the best model, improving on the baseline in specific cases. Finally, we investigate the effect of time accumulation on the APs of all classes and find that the low AP of the pedestrian class is the performance bottleneck; simply accumulating over a longer period cannot significantly improve it.
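
The abstract does not detail the radar-based variation of DBSCAN; a minimal sketch in Python, assuming the cluster distance combines BEV position with a weighted radial-velocity coordinate (the weight v_weight and the column layout [x, y, v_r] are illustrative assumptions, not the thesis's exact formulation):

    import numpy as np
    from sklearn.cluster import DBSCAN

    def radar_dbscan(points, eps=1.5, min_samples=3, v_weight=0.5):
        # points: (N, 3) array of [x, y, v_r] -- BEV position plus
        # radial (Doppler) velocity of each radar detection.
        features = points.copy().astype(float)
        # Scale the Doppler coordinate so that Euclidean distance in
        # this 3D feature space mixes spatial proximity with velocity
        # similarity: nearby detections moving at clearly different
        # radial velocities end up in separate clusters.
        features[:, 2] *= v_weight
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
        return labels  # -1 marks noise; nonnegative integers are cluster ids

Weighting the velocity axis is the usual motivation for velocity-aware clustering of radar reflections: a single eps can then separate, for example, a pedestrian walking past a parked car.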
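As an illustration of the augmentation found optimal for the YOLO approach, a minimal sketch of horizontal flipping for a BEV radar point cloud; the column layout [x, y, vx, vy] is an assumed convention, and the key point is that the lateral velocity component must be mirrored together with the lateral position:

    import numpy as np

    def flip_horizontal(points):
        # points: (N, 4) array of [x, y, vx, vy] with x pointing
        # forward and y to the left (assumed convention).
        flipped = points.copy()
        flipped[:, 1] *= -1.0  # mirror lateral position
        flipped[:, 3] *= -1.0  # mirror lateral velocity component
        return flipped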
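Because the detector outputs boxes while the segmentation pipeline outputs point clusters, the metrics have to be defined on point sets for the two to be compared. A minimal sketch of point-wise cluster IoU together with an F1 computation (the greedy one-to-one matching is an assumption, not necessarily the exact procedure used in the thesis):

    def cluster_iou(pred_idx, gt_idx):
        # Point-wise IoU between two clusters given as point-index sets.
        pred, gt = set(pred_idx), set(gt_idx)
        union = len(pred | gt)
        return len(pred & gt) / union if union else 0.0

    def cluster_f1(pred_clusters, gt_clusters, iou_thresh=0.5):
        # Greedily match each ground-truth cluster to the unmatched
        # predicted cluster with the highest IoU; a match at or above
        # iou_thresh counts as a true positive.
        matched, tp = set(), 0
        for gt in gt_clusters:
            best, best_iou = None, iou_thresh
            for i, pred in enumerate(pred_clusters):
                if i not in matched:
                    iou = cluster_iou(pred, gt)
                    if iou >= best_iou:
                        best, best_iou = i, iou
            if best is not None:
                matched.add(best)
                tp += 1
        fp = len(pred_clusters) - tp
        fn = len(gt_clusters) - tp
        return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0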
