Sökning: "Adversarial example"

Showing results 1 - 5 of 19 theses containing the words Adversarial example.

  1. Robust Neural Receiver in Wireless Communication: Defense against Adversarial Attacks

    Master's thesis, Linköpings universitet/Kommunikationssystem

    Author: Alice Nicklasson Cedbro; [2023]
    Keywords: Wireless communication; Neural receiver; Robust neural receiver; Adversarial machine learning; Fast Gradient Sign Method; Adversarial training

    Abstract: In the field of wireless communication systems, interest in machine learning has increased in recent years. Adversarial machine learning covers attack and defense methods for machine learning components.
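
    The keywords of this entry name the Fast Gradient Sign Method (FGSM) and adversarial training. For orientation, a minimal FGSM sketch is given below; it assumes a PyTorch-style classifier trained with a cross-entropy loss, and all names (model, fgsm_example, epsilon) are illustrative rather than taken from the thesis.

        import torch
        import torch.nn.functional as F

        def fgsm_example(model, x, y, epsilon=0.03):
            # Fast Gradient Sign Method: take one step of size epsilon in the
            # direction of the sign of the loss gradient with respect to the input.
            x = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            x_adv = x + epsilon * x.grad.sign()
            # Clamp to the valid input range (assumed here to be [0, 1]).
            return x_adv.clamp(0.0, 1.0).detach()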

  2. GAN-Based Counterfactual Explanation on Images

    One-year master's thesis, Stockholms universitet/Institutionen för data- och systemvetenskap

    Author: Ning Wang; [2023]
    Keywords: Machine Learning; Counterfactual Explanation; GAN; DCGAN

    Abstract: Machine learning models are widely used across industries. However, their black-box nature limits users' understanding of and trust in their inner workings, which makes model interpretability critical.

  3. Image generation through feature extraction and learning using a deep learning approach

    Master's thesis, Linnéuniversitetet/Institutionen för datavetenskap och medieteknik (DM)

    Author: Tibo Bruneel; [2023]
    Keywords: Deep Learning; Neural Networks; Deep Generative Learning; Variational Autoencoders; Generative Adversarial Networks; Flow-based Models; Triplet Image Generation; Triplet Loss; Tree Log End Generation; Forestry Application

    Abstract: With the introduction of stronger generative artificial intelligence (AI) models, image generation has advanced considerably in recent years. The ability to generate non-existent images that closely resemble real-world images is of interest for many use cases.

  4. Analyzing the Negative Log-Likelihood Loss in Generative Modeling

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Aleix Espuña I Fontcuberta; [2022]
    Keywords: Generative modeling; Normalizing flows; Generative Adversarial Networks; Maximum-Likelihood Estimation; Real Non-Volume Preserving flow; Fréchet Inception Distance; Misspecification

    Abstract: Maximum-Likelihood Estimation (MLE) is a classic model-fitting method from probability theory. However, it has been argued repeatedly that MLE is inappropriate for synthesis applications, since its priorities are at odds with important principles of human perception.
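
    For reference, the maximum-likelihood objective discussed in this abstract is equivalent to minimizing the average negative log-likelihood of the training data; a standard formulation (generic notation, not taken from the thesis) is:

        \hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \frac{1}{N}\sum_{i=1}^{N} \log p_{\theta}(x_i)
                                    = \arg\min_{\theta} \Big( -\frac{1}{N}\sum_{i=1}^{N} \log p_{\theta}(x_i) \Big)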

  5. Improving the Robustness of Deep Neural Networks against Adversarial Examples via Adversarial Training with Maximal Coding Rate Reduction

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Hsiang-Yu Chu; [2022]
    Keywords: Machine learning; Deep neural networks; Loss function; Adversarial example; Adversarial attack; Adversarial training

    Abstract: Deep learning is currently one of the most active scientific topics. Deep convolutional networks can solve a variety of complex image-processing tasks. However, adversarial attacks have been shown to be able to fool deep learning models.
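
    This entry combines adversarial training with a Maximal Coding Rate Reduction objective. As a rough point of reference only, a standard adversarial-training step with an ordinary cross-entropy loss (not the thesis's Maximal Coding Rate Reduction loss) is sketched below; it reuses the hypothetical fgsm_example function from the sketch under entry 1, and all names are illustrative.

        import torch.nn.functional as F

        def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
            # Train on FGSM-perturbed inputs; fgsm_example is the sketch from entry 1.
            x_adv = fgsm_example(model, x, y, epsilon)
            optimizer.zero_grad()
            # Plain cross-entropy here; the thesis instead studies a Maximal
            # Coding Rate Reduction objective, which is not reproduced in this sketch.
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()
            return loss.item()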