Edge Machine Learning for Wildlife Conservation: A part of the Ngulia project

This is a Master's thesis from Linköping University / Automatic Control (Reglerteknik)

Abstract: The prominence of Edge Machine Learning is increasing swiftly as the performance of microcontrollers continues to improve. By deploying object detection and classification models on edge devices with camera sensors, it becomes possible to locate and identify objects in their vicinity. This technology finds valuable applications in wildlife conservation, particularly in camera traps used in African sanctuaries, and specifically in the Ngulia sanctuary, to monitor endangered species and provide early warnings of potential intruders. When an animal crosses the path of an edge device equipped with a camera sensor, an image is captured, and the animal's presence and identity are subsequently determined. The performance of three distinct object detection models is evaluated: SSD MobileNetV2, FOMO MobileNetV2, and YOLOv5. Furthermore, the compatibility of these models with three different microcontrollers is explored: the ESP32 TimerCam from M5Stack, the Sony Spresense, and the LILYGO T-Camera S3 ESP32-S. The deployment of Over-The-Air updates to edge devices stationed in remote areas is presented, illustrating how an edge device, initially deployed with a model, can collect field data and be iteratively updated through an active learning pipeline. The three microcontrollers are also evaluated in conjunction with their respective camera sensors. A contribution of this work is a successful field deployment of a LILYGO T-Camera S3 ESP32-S running the FOMO MobileNetV2 model. The data captured by this setup feeds an active learning pipeline that iteratively retrains the FOMO MobileNetV2 model and updates the LILYGO T-Camera S3 ESP32-S with new firmware through Over-The-Air updates.
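The abstract names Over-The-Air updates as the mechanism for pushing retrained models to devices in the field. As a minimal sketch of how this pattern typically looks on an ESP32-class board such as the LILYGO T-Camera S3, the snippet below uses the ESP32 Arduino core's HTTPUpdate API; the Wi-Fi credentials and firmware URL are hypothetical placeholders, and the thesis does not specify which OTA transport it actually uses.

    // Minimal OTA sketch for an ESP32-class board (Arduino framework).
    // Assumes a firmware image served over HTTP; all network details
    // below are hypothetical placeholders, not values from the thesis.
    #include <WiFi.h>
    #include <HTTPUpdate.h>

    const char* WIFI_SSID    = "field-gateway";                        // hypothetical
    const char* WIFI_PASS    = "secret";                               // hypothetical
    const char* FIRMWARE_URL = "http://192.168.4.1/ngulia/firmware.bin"; // hypothetical

    void setup() {
      Serial.begin(115200);
      WiFi.begin(WIFI_SSID, WIFI_PASS);
      while (WiFi.status() != WL_CONNECTED) {
        delay(500);  // wait until the device has joined the network
      }

      WiFiClient client;
      // Download the new firmware image and flash it; on success the
      // device reboots into the updated build.
      t_httpUpdate_return ret = httpUpdate.update(client, FIRMWARE_URL);
      switch (ret) {
        case HTTP_UPDATE_FAILED:
          Serial.printf("OTA failed (%d): %s\n",
                        httpUpdate.getLastError(),
                        httpUpdate.getLastErrorString().c_str());
          break;
        case HTTP_UPDATE_NO_UPDATES:
          Serial.println("No new firmware available");
          break;
        case HTTP_UPDATE_OK:
          Serial.println("Update OK");  // normally not reached; device restarts
          break;
      }
    }

    void loop() {}

On success the update call flashes the downloaded image and restarts the device, which is what allows a retrained FOMO MobileNetV2 model, compiled into new firmware by the active learning pipeline, to replace the old one without physical access to the camera trap.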
