Introducing Sparsity into the Current Landscape of Disentangled Representation Learning

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Author: Elias Ågeby; [2021]


Abstract: In many scenarios it is natural to assume that data are generated from a set of latent factors. For high-dimensional data, there may be only a few degrees of variability that are essential to its generation. These degrees of variability are not always directly interpretable, but they are often highly descriptive. The desideratum of disentangled representation learning is to learn a representation that aligns with such latent factors. A disentangled representation exhibits task-agnostic properties and is hence useful for a wide variety of downstream tasks. In this work we survey the current state of disentangled representation learning. We review recent advances in the field by discussing the definition of disentanglement, comparing state-of-the-art methods, and contrasting quantitative metrics. Further, we present the β-SVAE, which imposes a sparsity constraint on disentangled representation learning by modifying the prior distribution of a Variational Autoencoder. The β-SVAE achieves higher sparsity than current state-of-the-art methods while remaining disentangled.
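For concreteness, the following is a minimal sketch of the general recipe the abstract describes: a Variational Autoencoder objective whose KL term is weighted by β and evaluated against a sparsity-inducing prior. The abstract does not specify the β-SVAE's actual prior or objective; the Laplace prior, the single-sample Monte Carlo KL estimate, and all names below (beta_svae_loss, prior_scale) are illustrative assumptions, not the thesis's formulation.

    import torch
    import torch.nn.functional as F

    def beta_svae_loss(x, x_recon, mu, log_var, beta=4.0, prior_scale=1.0):
        # Reparameterised sample from the Gaussian posterior q(z|x).
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)

        # Reconstruction term (Gaussian decoder likelihood up to a constant).
        recon = F.mse_loss(x_recon, x, reduction="sum")

        # Monte Carlo estimate of KL(q(z|x) || p(z)): the KL between a Gaussian
        # posterior and a Laplace prior has no closed form, so use one sample.
        log_q = torch.distributions.Normal(mu, std).log_prob(z).sum()
        log_p = torch.distributions.Laplace(0.0, prior_scale).log_prob(z).sum()
        kl = log_q - log_p

        # beta > 1 strengthens the pull towards the sparsity-inducing prior,
        # in the spirit of the beta-VAE trade-off.
        return recon + beta * kl

Minimising this loss trades reconstruction fidelity against closeness to the heavy-tailed prior; a larger β pushes more latent dimensions towards zero, which is one way a sparsity constraint of this kind can be realised.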
