Privacy leaks from deep linear networks: Information leak via shared gradients in federated learning systems

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: The field of Artificial Intelligence (AI) has always faced two major challenges. The first is that data remains scattered across its owners and cannot be pooled for more efficient use. The second is that data privacy and security must be continuously strengthened. Federated learning was proposed as an emerging machine learning scheme to address both points. The idea of federated learning is to train a neural network collaboratively under the coordination of a server: each user receives the current network weights and sends back parameter updates (gradients) computed on their own data. Because the input data remains on-device and only the parameter gradients are shared, this scheme is widely considered effective at preserving data privacy. Prior attacks have also fostered a false sense of security, since they succeed only in contrived settings, even when recovering a single image. Our research focuses on attacks on shared gradients, showing experimentally that private training data can be recovered from publicly shared gradients. We conduct experiments on both linear and convolutional deep networks; the results show that our attack poses a threat to data privacy, and that this threat is independent of the specific network architecture. The method presented in this thesis only demonstrates that recovering user data from shared gradients is feasible; it cannot be used as a large-scale attack to harvest private data. The goal is to spark further research on federated learning, and on gradient security in particular. We also briefly discuss possible defense strategies against our attack. Different defenses have their own advantages and disadvantages in terms of privacy protection. Data pre-processing and network-architecture adjustments therefore merit further research, so that model training can achieve stronger privacy protection while maintaining high accuracy.
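The abstract summarizes the attack only at a high level. A minimal sketch of the underlying idea, gradient matching in the spirit of "Deep Leakage from Gradients" (Zhu et al., 2019), may make it concrete: the attacker optimizes dummy inputs and labels so that the gradient they induce matches the gradient the victim shared. This PyTorch toy is illustrative only and not the thesis' actual method or experimental setup; the single linear layer, the 32-feature/10-class dimensions, and the L-BFGS settings are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a victim model in federated learning: one linear
# layer mapping 32 features to 10 classes. Sizes are illustrative.
model = nn.Linear(32, 10)

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against soft (probability) targets, so the attacker
    # can optimize an unknown label jointly with the input.
    return -(soft_targets * torch.log_softmax(logits, dim=-1)).sum()

# --- Victim side: the gradient that would be shared with the server. ---
x_true = torch.randn(1, 32)                      # private training example
y_true = torch.zeros(1, 10); y_true[0, 3] = 1.0  # private one-hot label
loss = soft_cross_entropy(model(x_true), y_true)
shared_grads = [g.detach() for g in
                torch.autograd.grad(loss, model.parameters())]

# --- Attacker side: fit dummy data whose gradient matches the shared one. ---
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # logits of a soft label
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = soft_cross_entropy(model(x_dummy),
                                    torch.softmax(y_dummy, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # The attack objective: squared distance between gradient sets.
    grad_diff = sum(((dg - sg) ** 2).sum()
                    for dg, sg in zip(dummy_grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    optimizer.step(closure)

print("input reconstruction error:", (x_dummy - x_true).norm().item())
print("recovered label:", torch.softmax(y_dummy, dim=-1).argmax().item())
```

On this toy problem the reconstruction succeeds almost exactly, and the reason is instructive: for a single linear layer with cross-entropy loss, each row of the weight gradient is a scaled copy of the input, so the shared gradient effectively encodes the private example.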
