Regularization Methods in Neural Networks

This is a Bachelor's thesis from Uppsala universitet/Statistiska institutionen

Abstract: Overfitting is a common problem in neural networks. This report uses a simple neural network to run simulations relevant to the field of image recognition. Four common regularization methods for dealing with overfitting are evaluated: L1, L2, Early stopping, and Dropout. Each method is first tested on the MNIST data set and then on the CIFAR-10 data set, and all are compared against a baseline with no regularization at sample sizes ranging from 500 to 50 000 images. The simulations show that all four methods exhibit recurring patterns throughout the study and that Dropout consistently outperforms the other three methods as well as the baseline.
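For concreteness, the sketch below shows one way the four compared methods could be configured for a simple MNIST classifier in Keras. This is not the thesis's actual code: the layer sizes, penalty strength (1e-4), dropout rate (0.5), and early-stopping patience (3) are all illustrative assumptions.

```python
# A minimal sketch (assumed setup, not the thesis's code) of the four
# regularization methods compared in the report: L1, L2, Dropout, and
# Early stopping, applied to a simple dense network for MNIST.
import tensorflow as tf
from tensorflow.keras import layers, regularizers, callbacks

def build_model(method=None):
    """Return a small classifier for 28x28 MNIST images.

    method: None (baseline), "l1", "l2", or "dropout".
    """
    reg = None
    if method == "l1":
        reg = regularizers.l1(1e-4)   # assumed penalty strength
    elif method == "l2":
        reg = regularizers.l2(1e-4)   # assumed penalty strength

    model = tf.keras.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(128, activation="relu", kernel_regularizer=reg),
    ])
    if method == "dropout":
        model.add(layers.Dropout(0.5))  # assumed dropout rate
    model.add(layers.Dense(10, activation="softmax"))

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping acts during training rather than in the architecture:
# halt when validation loss stops improving and keep the best weights.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True)

# Example run: dropout model on MNIST with a held-out validation split.
(x_tr, y_tr), _ = tf.keras.datasets.mnist.load_data()
model = build_model("dropout")
model.fit(x_tr / 255.0, y_tr, validation_split=0.1, epochs=20,
          callbacks=[early_stop])
```

Under this setup, comparing methods at different sample sizes would amount to training `build_model` on subsets of the data (e.g. 500 to 50 000 images) with and without `early_stop`.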
