Knowledge distillation for anomaly detection

This is a Master's thesis from Uppsala University / Department of Information Technology

Author: Nils Gustav Erik Pettersson; [2023]


Abstract: Systems and methodologies for time series anomaly detection hold the potential to provide timely detection of faults and issues in a wide variety of technical systems. Ideally, such systems identify deviations from normal behavior even before any problems manifest, enabling proactive maintenance. However, identifying anomalous patterns is often difficult. In recent years, deep learning models have shown promise in this domain, but their effectiveness often demands substantial computational resources. This complicates deployment on edge devices, such as vehicles, which have constrained computational capabilities. This project explores the utility of knowledge distillation (KD) for compressing variational autoencoders (VAEs), a class of deep learning models used for anomaly detection, with the aim of achieving compression without substantial performance penalties. Through experiments on both publicly available datasets and real-world vehicle data, we demonstrate the effectiveness of KD, achieving a compression ratio of four without loss of model performance. We also examine the conditions necessary for successful application of KD. The insights from this research are relevant to industries that rely on edge-device anomaly detection, including the transportation, telecommunications, and healthcare sectors.
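The abstract gives no implementation details. As a minimal sketch of the distillation idea it describes, a small student model is trained to mimic a larger teacher's reconstructions so that the compact student can compute anomaly scores on a resource-constrained device. The sketch below uses toy linear autoencoders in NumPy purely for illustration (the thesis works with VAEs, and all data, sizes, and names here are assumptions, not the author's setup); the teacher/student bottleneck widths are chosen so the student has four times fewer parameters, echoing the reported compression ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "normal" time-series windows (hypothetical data; the
# thesis evaluates on public benchmarks and real vehicle data instead).
X = rng.normal(size=(256, 16))

def init_ae(d_in, d_hidden, rng):
    # One-hidden-layer linear autoencoder: encoder W1, decoder W2.
    return [rng.normal(scale=0.1, size=(d_in, d_hidden)),
            rng.normal(scale=0.1, size=(d_hidden, d_in))]

def reconstruct(params, X):
    W1, W2 = params
    return X @ W1 @ W2

# Teacher with a wide bottleneck; student with 4x fewer parameters.
teacher = init_ae(16, 8, rng)
student = init_ae(16, 2, rng)

# In practice the teacher would be trained first; its random weights
# here merely stand in for a trained model's behavior.
targets = reconstruct(teacher, X)          # teacher's soft targets

def distill_loss(params):
    # Distillation objective: match the teacher's reconstructions,
    # not the raw inputs.
    return float(np.mean((reconstruct(params, X) - targets) ** 2))

loss0 = distill_loss(student)
lr = 0.01
for _ in range(500):                       # plain gradient descent
    W1, W2 = student
    H = X @ W1
    err = H @ W2 - targets                 # residual vs. teacher output
    grad_W2 = H.T @ err / len(X)
    grad_W1 = X.T @ (err @ W2.T) / len(X)
    student = [W1 - lr * grad_W1, W2 - lr * grad_W2]
loss_final = distill_loss(student)

# At deployment, the anomaly score is the per-window reconstruction
# error of the compact student model.
def score(params, x):
    return np.mean((reconstruct(params, x) - x) ** 2, axis=-1)
```

A real application of the method would replace the linear maps with the encoder/decoder networks of a trained VAE and the squared-error target with the VAE's training objective; the structure of the loop, fitting the small model to the large model's outputs, is the part this sketch is meant to convey.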
