Latent Space Growing of Generative Adversarial Networks

This is a Master's thesis from Lund University/Mathematics LTH

Abstract: This thesis presents a system, built on the Generative Adversarial Network (GAN) framework, that focuses on learning interpretable representations of data. The system learns representations that are ordered with respect to the saliency of the attributes, in a completely unsupervised manner. The training strategy expands the latent space dimension while adding capacity to the model in a controlled way. This builds on the intuition that the most salient attributes are the easiest to learn first. Empirical results on the Swiss roll dataset show that the representation is structured with respect to the saliency of the attributes when the latent space is grown progressively on a very simple GAN architecture. Experiments with a more complex system, trained on the CelebA dataset, scale the idea to a more interesting use case. Latent space interpolations show that our model successfully structures the latent space with respect to the saliency of the attributes, while also generating images that look at least as realistic as those of state-of-the-art methods, in less training time.
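The core idea of growing the latent space in a controlled way can be sketched with a toy linear "generator": when a new latent dimension is added, the corresponding input weights can be initialized to zero so that all previously learned samples are reproduced exactly, and the new dimension only gains influence through further training. This is a minimal illustrative sketch, not the thesis' actual scheme; the function name and zero-initialization choice are assumptions for illustration.

```python
import numpy as np

def grow_latent(W, new_dims=1):
    """Append zero-initialized input columns to a generator weight matrix.

    Zero columns mean the grown generator produces the same output as
    before for any old latent code padded with zeros, so existing
    structure is preserved while capacity is added.
    """
    d_out, _ = W.shape
    new_cols = np.zeros((d_out, new_dims))
    return np.concatenate([W, new_cols], axis=1)

# Toy linear "generator" mapping a 2-D latent code to 2-D data.
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
W2 = grow_latent(W)           # now maps a 3-D latent code to 2-D data

z_old = np.array([0.5, -1.0])
z_new = np.concatenate([z_old, [0.0]])  # new coordinate starts at zero
assert np.allclose(W @ z_old, W2 @ z_new)  # old samples are unchanged
print(W2.shape)  # → (2, 3)
```

In a real GAN the same trick would apply to the first layer of the generator (and, symmetrically, capacity would be added to the discriminator), with training then continuing so the new dimension picks up the next most salient attribute.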
