Search: "embedding models"

Showing results 21 - 25 of 106 theses containing the words embedding models.

  21. Towards topology-aware Variational Auto-Encoders: from InvMap-VAE to Witness Simplicial VAE

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Aniss Aiman Medbouhi; [2022]
    Keywords: Variational Auto-Encoder; Nonlinear dimensionality reduction; Generative model; Inverse projection; Computational topology; Algorithmic topology; Topological Data Analysis; Data visualisation; Unsupervised representation learning; Topological machine learning; Betti number; Simplicial complex; Witness complex; Simplicial map; Simplicial regularization;

    Abstract: Variational Auto-Encoders (VAEs) are among the best-known deep generative models. After showing that standard VAEs may not preserve the topology, that is, the shape of the data, between the input and the latent space, we tried to modify them so that the topology is preserved. READ MORE
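
    The abstract above contrasts topology-aware variants with the standard VAE. As a point of reference only, here is a minimal sketch of a standard VAE in PyTorch; it is an assumed baseline, not the InvMap-VAE or Witness Simplicial VAE developed in the thesis, and all layer sizes are illustrative.

        # Minimal standard VAE sketch (PyTorch). Assumed baseline only; the
        # topology-preserving terms of InvMap-VAE and Witness Simplicial VAE
        # are not reproduced here.
        import torch
        import torch.nn as nn

        class VAE(nn.Module):
            def __init__(self, in_dim=784, hidden=256, latent=2):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
                self.mu = nn.Linear(hidden, latent)      # mean of q(z|x)
                self.logvar = nn.Linear(hidden, latent)  # log-variance of q(z|x)
                self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                         nn.Linear(hidden, in_dim), nn.Sigmoid())

            def forward(self, x):
                h = self.enc(x)
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
                return self.dec(z), mu, logvar

        def elbo_loss(x, x_hat, mu, logvar):
            # Reconstruction term plus KL divergence to the standard normal prior;
            # nothing in this objective constrains the shape (topology) of the latent space.
            rec = nn.functional.binary_cross_entropy(x_hat, x, reduction='sum')
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kl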

  22. Attribute Embedding for Variational Auto-Encoders: Regularization derived from triplet loss

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Anton E. L. Dahlin; [2022]
    Keywords: Variational Auto-Encoder; Triplet Loss; Contrastive Loss; Generative Models; Metric Learning; Latent Space; Attribute Manipulation;

    Abstract: Techniques for imposing structure on the latent space of neural networks have seen much development in recent years. Clustering techniques used for classification have been applied with great success, and with this work we hope to bridge the gap between contrastive losses and generative models. READ MORE
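
    As a rough illustration of the idea in the abstract, the sketch below adds a triplet term to a VAE objective so that latent codes of samples sharing an attribute are pulled together. The weighting, margin, and the way triplets are formed are assumptions; the thesis's exact regularizer may differ.

        # Hypothetical sketch: a triplet term added on top of a VAE loss (PyTorch).
        import torch.nn as nn

        triplet = nn.TripletMarginLoss(margin=1.0)

        def regularized_loss(elbo, z_anchor, z_positive, z_negative, weight=0.1):
            # z_anchor and z_positive are latent codes of samples sharing an attribute;
            # z_negative comes from a sample with a different attribute. The triplet
            # term pushes same-attribute codes together in latent space.
            return elbo + weight * triplet(z_anchor, z_positive, z_negative)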

  23. Evaluation of the performance of machine learning techniques for email classification

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Isabella Tapper; [2022]
    Keywords: Natural Language Processing; Text Representations; Email Classification; Text Classification;

    Abstract: Manual categorization of an email inbox can often become time-consuming, so many attempts have been made to use machine learning for this task. One essential Natural Language Processing (NLP) task is text classification, which is challenging because an NLP engine is not a native speaker of any human language. READ MORE
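
    The abstract mentions machine learning techniques and text representations for email classification without naming them. One common baseline, shown purely as an illustration (the categories and pipeline are assumptions, not the thesis's setup), is a TF-IDF representation fed to a linear classifier with scikit-learn:

        # Illustrative email-classification baseline: TF-IDF features + logistic regression.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        emails = ["Invoice attached for your last order", "Team meeting moved to 3 pm"]
        labels = ["billing", "internal"]  # hypothetical categories

        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(emails, labels)
        print(clf.predict(["Please find the invoice for March"]))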

  24. Sentence Embeddings and Automatic Classification of Menu Items

    Bachelor's thesis, Uppsala University/Department of Information Technology

    Author: Aljaz Kovac; [2022]
    Keywords: ;

    Abstract: Caspeco AB is a company in Uppsala that specializes in providing IT solutions to the hospitality industry. Its customers (restaurants, pubs, etc.) classify their menu items freely, which often leads to inconsistent and unreliable classifications. READ MORE
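
    To make the idea in the title concrete, a sentence-embedding classifier for menu items could look like the sketch below. The pretrained encoder, menu items, and category labels are assumptions for illustration; the thesis's actual models and category scheme are not given in this snippet.

        # Illustrative sketch: embed menu items with a pretrained sentence encoder,
        # then train a simple classifier on the embeddings.
        from sentence_transformers import SentenceTransformer
        from sklearn.linear_model import LogisticRegression

        encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model

        items = ["Margherita pizza", "Espresso", "IPA 40 cl", "Caesar sallad"]
        categories = ["food", "drink", "drink", "food"]  # hypothetical labels

        X = encoder.encode(items)  # one embedding vector per menu item
        clf = LogisticRegression().fit(X, categories)
        print(clf.predict(encoder.encode(["Cappuccino"])))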

  25. Will Svenska Akademiens Ordlista Improve Swedish Word Embeddings?

    Master's thesis, Uppsala University/Department of Statistics

    Author: Ellen Ahlberg; [2022]
    Keywords: word embedding; natural language processing; NLP;

    Abstract: Unsupervised word embedding methods are frequently used in natural language processing applications. However, these unsupervised methods overlook known lexical relations that could be valuable for capturing accurate semantic relations between words. This thesis explores whether Swedish word embeddings can benefit from prior linguistic knowledge. READ MORE
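
    The abstract does not say how the lexical information from Svenska Akademiens Ordlista is injected, so the following is only a generic sketch of one standard technique, retrofitting (Faruqui et al., 2015), in which each pretrained vector is pulled toward the vectors of its lexicon neighbours; whether the thesis uses this or another method is an open assumption here.

        # Generic retrofitting sketch: nudge pretrained word vectors toward their
        # lexicon neighbours (simplified, uniform weights).
        import numpy as np

        def retrofit(vectors, lexicon, iterations=10, alpha=1.0, beta=1.0):
            """vectors: dict word -> np.ndarray; lexicon: dict word -> list of related words."""
            new = {w: v.copy() for w, v in vectors.items()}
            for _ in range(iterations):
                for word, neighbours in lexicon.items():
                    neighbours = [n for n in neighbours if n in new]
                    if word not in new or not neighbours:
                        continue
                    # Weighted average of the original vector and its current neighbours.
                    total = alpha * vectors[word] + beta * sum(new[n] for n in neighbours)
                    new[word] = total / (alpha + beta * len(neighbours))
            return new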