Sökning: "Auto-Encoder"

Showing results 1 - 5 of 25 theses containing the word Auto-Encoder.

  1. Anomaly Detection of Time Series Caused by International Revenue Share Fraud: Additive Model and Autoencoder Applications

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Lingxiao Wang; [2023]
    Keywords: Fraud detection; Anomaly detection; Machine learning;

    Abstract: In this paper, we compare the performance of two methods for detecting fraud attempts in data provided by Sinch (formerly CLX Communications, a telecommunications and cloud communications platform-as-a-service (PaaS) company). We treat the problem as finding anomalies in a time series signal, where we ignore the duration of individual calls and other features and consider only the total volume of calls in a given period. READ MORE
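
    The autoencoder approach named in the title can be illustrated with a minimal PyTorch sketch (not the thesis's actual model; the class name WindowAutoencoder, the window length, and all layer sizes are assumptions): a small dense autoencoder is trained to reconstruct fixed-length windows of call volumes, and windows with unusually high reconstruction error are flagged as candidate anomalies.

    import torch
    import torch.nn as nn

    WINDOW = 24  # assumed window length: 24 call-volume observations

    class WindowAutoencoder(nn.Module):
        def __init__(self, window=WINDOW, latent=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(window, 16), nn.ReLU(),
                                         nn.Linear(16, latent))
            self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(),
                                         nn.Linear(16, window))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_scores(model, windows):
        # Mean squared reconstruction error per window.
        with torch.no_grad():
            recon = model(windows)
        return ((windows - recon) ** 2).mean(dim=1)

    # Train on windows assumed to contain mostly normal traffic.
    model = WindowAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    normal_windows = torch.rand(256, WINDOW)  # placeholder data
    for _ in range(100):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(normal_windows), normal_windows)
        loss.backward()
        optimizer.step()

    # Windows scoring far above the training distribution are flagged.
    scores = anomaly_scores(model, normal_windows)
    threshold = scores.mean() + 3 * scores.std()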

  2. Prediction and Analysis of 5G beyond Radio Access Network

    Master's thesis, Uppsala University/Department of Information Technology

    Authors: Gaurav Singh; Shreyansh Singh; [2023]
    Keywords: LSTM; RNN; AE-LSTM; Deep Learning; Machine Learning; network traffic flow; forecasting; quality-of-service;

    Abstract: Network traffic forecasting estimates future network traffic based on historical traffic observations. It has a wide range of applications, and substantial attention has been devoted to this research area. READ MORE
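
    As a rough illustration of LSTM-based traffic forecasting (a plain LSTM, not the AE-LSTM developed in the thesis; the class name TrafficForecaster, the hidden size, and the sequence length are assumptions), the sketch below maps a history of traffic-volume observations to a one-step-ahead prediction:

    import torch
    import torch.nn as nn

    class TrafficForecaster(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, history):
            # history: (batch, seq_len, 1) past traffic volumes
            out, _ = self.lstm(history)
            return self.head(out[:, -1, :])  # predict the next observation

    model = TrafficForecaster()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    history = torch.rand(64, 48, 1)  # placeholder: 48 past observations per sample
    target = torch.rand(64, 1)       # placeholder: next observation
    for _ in range(50):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(history), target)
        loss.backward()
        optimizer.step()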

  3. Towards topology-aware Variational Auto-Encoders: from InvMap-VAE to Witness Simplicial VAE

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Aniss Aiman Medbouhi; [2022]
    Keywords: Variational Auto-Encoder; Nonlinear dimensionality reduction; Generative model; Inverse projection; Computational topology; Algorithmic topology; Topological Data Analysis; Data visualisation; Unsupervised representation learning; Topological machine learning; Betti number; Simplicial complex; Witness complex; Simplicial map; Simplicial regularization;

    Abstract: Variational Auto-Encoders (VAEs) are among the best-known deep generative models. After showing that standard VAEs may not preserve the topology, that is, the shape of the data, between the input space and the latent space, we tried to modify them so that the topology is preserved. READ MORE
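
    For orientation, a minimal sketch of the standard VAE objective (reconstruction term plus KL term) is given below; the topology-aware variants studied in the thesis (InvMap-VAE, Witness Simplicial VAE) add further regularization on top of a loss of this general form, which is not reproduced here. The class name VAE, the layer sizes, and the use of a squared-error reconstruction term are illustrative assumptions.

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, dim=784, latent=2):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, latent)
            self.logvar = nn.Linear(128, latent)
            self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: sample z = mu + sigma * eps.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z), mu, logvar

    def elbo_loss(x, recon, mu, logvar):
        # Negative ELBO: reconstruction error plus KL divergence to N(0, I).
        rec = nn.functional.mse_loss(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl

    vae = VAE()
    x = torch.rand(16, 784)  # placeholder batch
    recon, mu, logvar = vae(x)
    loss = elbo_loss(x, recon, mu, logvar)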

  4. VTG-Fusion: A GAN-ViT-Based Infrared and Visible Image Fusion Method

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Geng Jiaqi; [2022]
    Keywords: Infrared and visible image fusion; deep learning; Generative adversarial network (GAN); Transformer;

    Abstract: Infrared and visible image fusion aims to generate a single image that combines texture details from the visible images with highlighted objects from the infrared images. It is widely used in object recognition and object detection. READ MORE
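
    A minimal sketch of the general fusion idea, not the VTG-Fusion architecture: a small convolutional generator fuses a single-channel infrared image with a single-channel visible image, trained with a simple loss that preserves infrared intensities and visible-image gradients (texture). The GAN discriminator and ViT components of the thesis are omitted, and the class name FusionGenerator and all layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class FusionGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, ir, vis):
            # Concatenate the two modalities along the channel dimension.
            return self.net(torch.cat([ir, vis], dim=1))

    def gradients(img):
        # Simple finite-difference image gradients.
        dx = img[:, :, :, 1:] - img[:, :, :, :-1]
        dy = img[:, :, 1:, :] - img[:, :, :-1, :]
        return dx, dy

    def fusion_loss(fused, ir, vis, alpha=0.5):
        intensity = nn.functional.l1_loss(fused, ir)  # keep highlighted (hot) objects
        fdx, fdy = gradients(fused)
        vdx, vdy = gradients(vis)
        texture = nn.functional.l1_loss(fdx, vdx) + nn.functional.l1_loss(fdy, vdy)
        return intensity + alpha * texture

    gen = FusionGenerator()
    ir = torch.rand(4, 1, 64, 64)   # placeholder infrared batch
    vis = torch.rand(4, 1, 64, 64)  # placeholder visible batch
    loss = fusion_loss(gen(ir, vis), ir, vis)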

  5. Attribute Embedding for Variational Auto-Encoders: Regularization derived from triplet loss

    Master's thesis, KTH/School of Electrical Engineering and Computer Science (EECS)

    Author: Anton E. L. Dahlin; [2022]
    Keywords: Variational Auto-Encoder; Triplet Loss; Contrastive Loss; Generative Models; Metric Learning; Latent Space; Attribute Manipulation;

    Abstract: Techniques for imposing structure on the latent space of neural networks have seen much development in recent years. Clustering techniques used for classification have been applied with great success, and with this work we hope to bridge the gap between contrastive losses and generative models. READ MORE
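
    A minimal sketch of the general idea of triplet-loss regularization on a VAE latent space, not the thesis's exact formulation: a triplet margin term computed on latent means is added to the usual VAE objective, pulling together examples that share an attribute and pushing apart examples that differ. The function name regularized_vae_loss, the weighting, and the triplet sampling are assumptions.

    import torch
    import torch.nn as nn

    triplet = nn.TripletMarginLoss(margin=1.0)

    def regularized_vae_loss(vae_loss, mu_anchor, mu_positive, mu_negative, weight=0.1):
        # mu_anchor and mu_positive come from examples sharing an attribute value;
        # mu_negative comes from an example with a different value.
        return vae_loss + weight * triplet(mu_anchor, mu_positive, mu_negative)

    # Example with placeholder latent means of dimension 8:
    mu_a, mu_p, mu_n = (torch.randn(32, 8) for _ in range(3))
    total = regularized_vae_loss(torch.tensor(0.0), mu_a, mu_p, mu_n)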