Search: "L2-regularization"
Showing results 1 - 5 of 9 theses containing the word L2-regularization.
1. Elastic Net Regression for Prosthesis Control in Short Residual Limb Amputees: Performance and Generalizability
Master's thesis, Lunds universitet/Avdelningen för Biomedicinsk teknik. Abstract: This Master's thesis in Biomedical Engineering investigates the performance and generalizability of linear regression models in the context of prosthesis control for short residual limb amputees. The thesis uses intramuscular electromyography data and employs a regression technique called Elastic Net Regression, which combines L1 and L2 regularization, to predict 1-DOF isometric force outputs from the fingers and wrist.
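A minimal sketch of the Elastic Net technique the abstract describes, using scikit-learn on hypothetical synthetic data (the thesis's actual EMG recordings and force targets are not available here):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                    # stand-in for EMG feature vectors
w_true = np.array([1.5, -2.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=200)       # stand-in for a 1-DOF force output

# l1_ratio blends the two penalties: 1.0 is pure L1 (lasso), 0.0 is pure L2 (ridge)
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print(model.coef_)                                # sparse-ish, shrunken estimates
```

The L1 component drives irrelevant coefficients toward exactly zero, while the L2 component stabilizes the remaining estimates when features are correlated.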
2. Investigation of Facial Age Estimation using Deep Learning
Master's thesis, Uppsala universitet/Institutionen för informationsteknologi. Abstract: Age estimation from facial images has drawn increasing attention in the past few years. This thesis project performs age group classification of facial images acquired in in-the-wild conditions using deep convolutional neural network techniques.
3. Investigating Relations between Regularization and Weight Initialization in Artificial Neural Networks
Bachelor's thesis, Lunds universitet/Beräkningsbiologi och biologisk fysik. Abstract: L2 regularization is a common method used to prevent overtraining in artificial neural networks. However, an issue with this method is that the regularization strength has to be properly adjusted for it to work as intended. This value is usually found by trial and error, which can take some time, especially for larger networks.
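The trial-and-error tuning the abstract mentions is often automated as a grid search over candidate strengths. A minimal sketch with scikit-learn, using an assumed toy dataset and a logistic-regression stand-in (scikit-learn's `C` is the inverse of the L2 strength):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Hypothetical toy classification data, not any thesis dataset
X, y = make_classification(n_samples=300, random_state=0)

# Each candidate C is evaluated by 5-fold cross-validation
grid = GridSearchCV(
    LogisticRegression(penalty="l2", max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

Each extra candidate multiplies training cost by the number of folds, which is exactly why this search becomes expensive for large networks.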
4. Optimizing L2-regularization for Binary Classification Tasks
Bachelor's thesis, Lunds universitet/Beräkningsbiologi och biologisk fysik. Abstract: An Artificial Neural Network (ANN) is a type of machine learning algorithm with widespread usage. When training an ANN, there is a risk that it gets overtrained and cannot solve the task for new data. Methods to prevent this, such as L2-regularization, introduce hyperparameters that are time-consuming to optimize.
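To make the L2 hyperparameter concrete: for a binary classifier, the strength `lam` simply scales a penalty term added to the loss gradient. A minimal NumPy sketch on assumed toy data (a plain logistic model rather than a full ANN):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(float)     # toy binary labels

lam = 0.1          # L2 regularization strength: the hyperparameter to optimize
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
    # cross-entropy gradient plus the L2 term lam * w
    grad = X.T @ (p - y) / len(y) + lam * w
    w -= 0.5 * grad
print(w)
```

Larger `lam` shrinks the weights toward zero, trading training fit for generalization; finding the right trade-off is the optimization problem the thesis studies.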
5. Prediction of appropriate L2 regularization strengths through Bayesian formalism
Bachelor's thesis, Lunds universitet/Beräkningsbiologi och biologisk fysik; Lunds universitet/Institutionen för astronomi och teoretisk fysik. Abstract: This paper proposes and investigates a Bayesian relation between optimal L2 regularization strengths and the number of training patterns and hidden nodes used for an artificial neural network. The results support the proposed dependence on the number of training patterns, while the dependence on hidden architecture was less clear.