Are Distributed Representations in Neural Networks More Robust Against Malicious Fooling Attacks?

This is a Master's thesis from Högskolan Dalarna/Institutionen för information och teknik

Abstract: In recent years, a plethora of data from sources such as IoT devices, social websites, healthcare, and business has revolutionized the digital world. To make effective use of this data for analysis, prediction, and the automation of applications, the demand for machine learning and artificial intelligence has grown steadily. With their growing capabilities, neural networks are now used in real-time applications such as medical diagnosis, weather forecasting, speech and facial recognition, and stock-market prediction. Despite the undoubted processing and intelligence capabilities of neural networks, key challenges must still be addressed before they can be deployed effectively in such applications. One of these challenges is their vulnerability to fooling: making a network misclassify by inducing very small changes in its inputs. How information is distributed across the network may be a predictor of fooling, so the role of information distribution in fooling robustness is investigated here. Specifically, we use dropout, a well-known regularization technique, to induce more distributed representations, and test network robustness to fooling attacks generated with the Fast Gradient Sign Method (FGSM). The findings show that information smearedness is a better predictor of robustness to fooling than dropout.
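To make the two techniques named in the abstract concrete, the following is a minimal sketch in PyTorch of a dropout-regularized classifier and a one-step FGSM attack, x_adv = x + ε · sign(∇x L). The network architecture, layer sizes, dropout rate p, and epsilon are illustrative assumptions, not values taken from the thesis.

import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small classifier; Dropout encourages more distributed representations."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # hypothetical rate; active only during training
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: perturb x along the sign of the input gradient."""
    model.eval()  # disable dropout when crafting/evaluating the attack
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # single-step perturbation in input space
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid pixel range

# Usage: compare clean vs. adversarial accuracy on random data (illustrative only)
model = MLP()
x, y = torch.rand(8, 784), torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
clean_acc = (model(x).argmax(1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean: {clean_acc:.2f}  adversarial: {adv_acc:.2f}")

In an experiment of the kind the abstract describes, one would train networks at several dropout rates and measure how much epsilon is needed to fool each, relating that to how distributed ("smeared") the learned representations are.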
