Neural Network-based Anomaly Detection Models and Interpretability Methods for Multivariate Time Series Data

This is a Master's thesis from Stockholms universitet / Institutionen för data- och systemvetenskap

Abstract: Anomaly detection plays a crucial role in domains such as transportation, cybersecurity, and industrial monitoring, where the timely identification of unusual patterns or outliers is of utmost importance. Traditional statistical techniques have limitations in handling complex, high-dimensional data, which motivates the use of deep learning approaches. This project proposes designing and implementing deep neural networks tailored explicitly to multivariate time series data from sensors incorporated in vehicles, in order to effectively capture intricate temporal dependencies and interactions among variables. As the project is conducted in collaboration with Scania, Sweden, the models are trained on datasets encompassing various vehicle sensor data. Different deep learning architectures, including Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), are explored and compared to identify the most suitable model for anomaly detection on the specified time series data; the CNN was found to perform well on the data used in this study. Furthermore, interpretability techniques are incorporated into the developed models to enhance their transparency and provide insights into the reasons behind detected anomalies. Interpretability is crucial in real-world applications to facilitate trust, understanding, and decision-making. Both model-agnostic and model-specific interpretability methods were employed to highlight the relevant features and contribute to the interpretability of the anomaly detection models. The performance of the proposed models is evaluated on test datasets containing anomalies, and comparisons are made against existing anomaly detection methods to demonstrate their effectiveness. Evaluation metrics such as precision, recall, false positive rate, F1 score, and composite F1 score are employed to assess the detection accuracy and robustness of the anomaly detection models.
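The abstract names precision, recall, false positive rate, and F1 score as evaluation metrics. As the thesis code itself is not published here, the following is only an illustrative sketch of how these quantities are typically computed from binary anomaly labels (assuming 1 marks an anomalous time step):

```python
def detection_metrics(y_true, y_pred):
    """Compute precision, recall, false positive rate and F1 score
    from binary anomaly labels (1 = anomaly, 0 = normal).

    Illustrative only; the thesis's own evaluation code is not shown
    in the abstract, and a composite F1 would additionally account
    for event-level detection.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, fpr, f1
```

A point-wise F1 like this can overstate performance on long anomalous segments, which is one motivation for the composite F1 score the abstract also mentions.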
To evaluate the interpretability method, the Kolmogorov-Smirnov test is applied to counterfactual examples. The outcomes of this research project contribute to developing advanced anomaly detection techniques that can effectively analyse multivariate time series data collected from sensors incorporated in vehicles. Incorporating interpretability techniques provides valuable insights into the detected anomalies, enabling better decision-making and improved trust in the deployed models. These advancements can potentially enhance anomaly detection systems across various domains, leading to more reliable and secure operations.
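The two-sample Kolmogorov-Smirnov test mentioned above compares the empirical distributions of two samples. A minimal sketch of how it might be applied to a single sensor feature in original versus counterfactual examples, using `scipy.stats.ks_2samp` and entirely synthetic values (the thesis's actual features and data are not shown here):

```python
from scipy.stats import ks_2samp

# Hypothetical values of one sensor feature: original anomalous
# windows vs. their counterfactual (anomaly-removed) versions.
original = [0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.18, 0.22]
counterfactual = [0.90, 1.10, 0.95, 1.20, 1.05, 1.00, 0.98, 1.10]

# The KS statistic is the maximum distance between the two
# empirical CDFs; a small p-value indicates the counterfactual
# shifted this feature's distribution significantly.
stat, p_value = ks_2samp(original, counterfactual)
```

Here the two synthetic samples do not overlap at all, so the KS statistic reaches its maximum of 1.0; in practice, a large statistic (low p-value) per feature would flag that feature as relevant to the detected anomaly.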
