Sökning: "L2-regularization"

Showing results 1 - 5 of 9 theses containing the word L2-regularization.

  1. Elastic Net Regression for Prosthesis Control in Short Residual Limb Amputees: Performance and Generalizability

    Master's thesis, Lunds universitet/Avdelningen för Biomedicinsk teknik

    Author: Oskar Berg; [2023]
    Keywords: Neuroengineering; Statistics; Biomedical Signal Processing; Technology and Engineering;

    Abstract: This Master's thesis in Biomedical Engineering investigates the performance and generalizability of linear regression models in the context of prosthesis control for short residual limb amputees. It uses intramuscular electromyography data and employs a regression technique called Elastic Net Regression - a technique that combines L1 and L2 regularization - to predict 1-DOF isometric force outputs from the fingers and wrist.
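
    The abstract describes Elastic Net as a combination of L1 and L2 penalties. The following minimal sketch illustrates that combination using scikit-learn's ElasticNet on synthetic data; the feature matrix and target below are hypothetical stand-ins, not the thesis's intramuscular EMG recordings or force measurements.

      # Illustrative Elastic Net fit: l1_ratio mixes the L1 and L2 penalties
      # (1.0 -> pure lasso, 0.0 -> pure ridge). All data below are synthetic.
      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 16))                # hypothetical feature matrix
      w_true = np.zeros(16)
      w_true[:4] = 1.0                              # only a few informative features
      y = X @ w_true + 0.1 * rng.normal(size=200)   # hypothetical 1-DOF force target

      model = ElasticNet(alpha=0.1, l1_ratio=0.5)   # alpha scales the combined penalty
      model.fit(X, y)
      print(model.coef_)                            # sparse-ish weights from the L1 part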

  2. Investigation of Facial Age Estimation using Deep Learning

    Master's thesis, Uppsala universitet/Institutionen för informationsteknologi

    Author: Lufei Ye; [2022]
    Keywords:

    Abstract: Age estimation from facial images has drawn increasing attention in the past few years. This thesis project performs age group classification of facial images acquired in in-the-wild conditions using deep convolutional neural network techniques.
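
    As a rough illustration of the kind of model the abstract mentions, the sketch below builds a small convolutional classifier that maps a face crop to scores over age groups. The architecture, input size, and number of age groups (8) are assumptions for illustration, not the network used in the thesis.

      # Minimal CNN sketch for age-group classification (assumed 64x64 RGB crops,
      # 8 age groups). Purely illustrative; not the thesis's architecture.
      import torch
      from torch import nn

      n_age_groups = 8  # assumed number of classes

      model = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(),
          nn.Linear(32 * 16 * 16, n_age_groups),   # assumes 64x64 input images
      )

      dummy_faces = torch.randn(4, 3, 64, 64)      # batch of hypothetical face crops
      logits = model(dummy_faces)                  # one score per age group
      print(logits.shape)                          # torch.Size([4, 8])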

  3. Investigating Relations between Regularization and Weight Initialization in Artificial Neural Networks

    Bachelor's thesis, Lunds universitet/Beräkningsbiologi och biologisk fysik (undergoing reorganization)

    Author: Rasmus Sjöö; [2022]
    Keywords: Artificial Neural Networks; L1 Regularization; L2 Regularization; Loss Function; Maximum Likelihood; Regularization Strength; Synthetic Data Generation; Weight Initialization; Physics and Astronomy;

    Abstract: L2 regularization is a common method used to prevent overtraining in artificial neural networks. However, an issue with this method is that the regularization strength has to be properly adjusted for it to work as intended. This value is usually found by trial and error, which can take some time, especially for larger networks.
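
    To make the role of the regularization strength concrete, the sketch below shows how lambda enters the loss and its gradient. A plain linear model in numpy stands in for the thesis's neural networks, and the data are synthetic.

      # How the L2 strength (lam) enters a regularized loss and its gradient;
      # a linear model stands in for an ANN, data are synthetic.
      import numpy as np

      def l2_regularized_mse(w, X, y, lam):
          residual = X @ w - y
          data_loss = 0.5 * np.mean(residual ** 2)
          penalty = 0.5 * lam * np.sum(w ** 2)          # L2 penalty on the weights
          grad = X.T @ residual / len(y) + lam * w      # gradient of data loss + penalty
          return data_loss + penalty, grad

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 5))
      y = X @ np.ones(5) + 0.1 * rng.normal(size=100)
      w = np.zeros(5)
      for _ in range(500):                              # plain gradient descent
          loss, grad = l2_regularized_mse(w, X, y, lam=0.1)
          w -= 0.1 * grad
      print(w)                                          # weights shrunk towards zero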

  4. Optimizing L2-regularization for Binary Classification Tasks

    Bachelor's thesis, Lunds universitet/Beräkningsbiologi och biologisk fysik (undergoing reorganization)

    Author: Oskar Bolinder; [2022]
    Keywords: Machine learning; Overtraining; L2-regularization; Model selection; Binary classification; Physics and Astronomy;

    Abstract: An Artificial Neural Network (ANN) is a type of machine learning algorithm with widespread usage. When training an ANN, there is a risk that it becomes overtrained and cannot solve the task for new data. Methods to prevent this, such as L2-regularization, introduce hyperparameters that are time-consuming to optimize.
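
    The kind of hyperparameter search the abstract calls time-consuming can be sketched as cross-validation over candidate L2 strengths on a binary classification task. In the sketch below, scikit-learn's logistic regression stands in for the thesis's ANN, the data are synthetic, and the candidate grid is arbitrary; in scikit-learn the parameter C is the inverse of the L2 strength.

      # Cross-validated search over L2 strengths for a binary classifier.
      # Logistic regression is a stand-in for an ANN; data are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(2)
      X = rng.normal(size=(300, 10))
      y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=300) > 0).astype(int)

      # C = 1 / lambda, so small C means strong L2 regularization.
      search = GridSearchCV(
          LogisticRegression(penalty="l2", max_iter=1000),
          param_grid={"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
          cv=5,
      )
      search.fit(X, y)
      print(search.best_params_)   # the L2 strength that generalized best in CV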

  5. Prediction of appropriate L2 regularization strengths through Bayesian formalism

    Bachelor's thesis, Lunds universitet/Beräkningsbiologi och biologisk fysik (undergoing reorganization); Lunds universitet/Institutionen för astronomi och teoretisk fysik (undergoing reorganization)

    Author: Alexander Degener; [2022]
    Keywords: Machine learning; Artificial Neural Network; L2 regularization strength; Bayesian formalism; Classification tasks; Physics and Astronomy;

    Abstract: This paper proposes and investigates a Bayesian relation between optimal L2 regularization strengths and the number of training patterns and hidden nodes used for an artificial neural network. The results support the proposed dependence on the number of training patterns, while the dependence on the hidden architecture was less clear.
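
    For background on why a Bayesian argument can suggest an L2 strength at all, the sketch below shows the textbook correspondence: with a Gaussian prior on the weights and Gaussian observation noise, the MAP estimate of a linear model equals ridge regression with lambda = noise variance / prior variance. This is the standard relation only, not the thesis's specific proposed dependence on training patterns and hidden nodes; the variances and data below are made up for illustration.

      # MAP estimate under a Gaussian weight prior = ridge regression with
      # lambda = noise_var / prior_var. Illustrative values and synthetic data.
      import numpy as np

      rng = np.random.default_rng(3)
      n_patterns, n_features = 50, 8
      X = rng.normal(size=(n_patterns, n_features))
      w_true = rng.normal(size=n_features)
      noise_var, prior_var = 0.25, 1.0
      y = X @ w_true + np.sqrt(noise_var) * rng.normal(size=n_patterns)

      lam = noise_var / prior_var      # effective L2 strength implied by the prior
      w_map = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
      print(lam, w_map)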