Differential privacy and machine learning: Calculating sensitivity with generated data sets

This is an M1 thesis from KTH/Data- och elektroteknik

Abstract: Privacy has never been more important to maintain in today's information society. Companies and organizations collect large amounts of data about their users. This information is considered valuable because of its statistical uses, which provide insight into areas such as medicine, economics, and behavioural patterns among individuals. A technique called differential privacy has been developed to ensure that the privacy of individuals is maintained, making it possible to produce useful statistics while the individual's privacy is preserved. However, the disadvantage of differential privacy is the magnitude of the randomized noise applied to the data in order to hide the individual. This research examined whether the usability of the privatized result can be improved by using machine learning to generate a data set on which the noise can be based. The purpose of the generated data set is to provide a local representation of the underlying data set that is safe to use when calculating the magnitude of the randomized noise. The results of this research show that this approach is currently not a feasible solution, but they demonstrate possible directions for further research into improving the usability of differential privacy. The research indicates that limiting the noise to a lower bound calculated from the underlying data set might be enough to meet all privacy requirements. Furthermore, the accuracy of the machine learning algorithm and its impact on the usability of the noise was not fully investigated and could be of interest in future studies.
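The abstract does not include the thesis's actual implementation, but the idea it describes can be illustrated with the standard Laplace mechanism, where the noise scale is calibrated to sensitivity divided by the privacy budget epsilon. The sketch below is a minimal, hypothetical example: the functions `local_sensitivity` and `laplace_mechanism`, the `generated_data` array standing in for the machine-learning-generated data set, and the chosen values for epsilon and the records are all assumptions for illustration, not the thesis's method, and calibrating to the local sensitivity of a surrogate data set does not by itself give a formal differential-privacy guarantee (the abstract itself reports the approach was not feasible).

```python
import numpy as np

def local_sensitivity(data, query):
    """Estimate the sensitivity of `query` on `data` by removing
    each record in turn and taking the largest change in output."""
    full = query(data)
    return max(
        abs(full - query(np.delete(data, i)))
        for i in range(len(data))
    )

def laplace_mechanism(data, query, epsilon, sensitivity):
    """Answer `query` on `data` with Laplace noise scaled to
    sensitivity / epsilon, the usual calibration in
    epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return query(data) + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical data: `generated_data` plays the role of the
# machine-learning-generated local representation from the abstract.
real_data = np.array([52.0, 48.5, 61.2, 75.0, 49.9])
generated_data = np.array([51.0, 50.2, 60.0, 70.3, 52.1])  # assumed model output

# Base the noise magnitude on the generated set, then query the real one.
sens = local_sensitivity(generated_data, np.mean)
print(laplace_mechanism(real_data, np.mean, epsilon=0.5, sensitivity=sens))
```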
