Sketch to 3D Model using Generative Query Networks

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Author: Max Nihlén Ramström; [2019]


Abstract: For digital artists and animators, translating an idea from a rough sketch into a 3D model is a time-consuming process that requires a plethora of different software. This work presents a generative model that, by observing sketched 2D images, can directly generate images of 3D models from arbitrary viewpoints. The model is based on Generative Query Networks, and two generative models were tested for producing new images: the first a Variational Autoencoder and the second a Generative Adversarial Network. The model learns to produce new images from any queried viewpoint, allowing it to perform so-called mental rotation of an object as if a 3D model had been generated. To train the model, a paired dataset was created containing images of 3D models, the viewpoint from which each image was captured, and corresponding sketch versions. It was found that the Variational Autoencoder could create plausible images from as little as a single sketch, while the Generative Adversarial Network failed to correctly condition on the given sketches.
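The GQN-style data flow described above can be sketched in a few lines: context (sketch, viewpoint) pairs are encoded and summed into one scene representation, which a generator then conditions on together with a query viewpoint to render the object from a new angle. This is a minimal, hypothetical illustration only; all shapes, the random linear layers, and the function names are assumptions, and the actual thesis model uses convolutional networks with a VAE or GAN generator.

```python
import numpy as np

# Hypothetical dimensions (not from the thesis): 64x64 grayscale sketches,
# 7-D viewpoint vectors (camera position + orientation), 128-D representation.
rng = np.random.default_rng(0)
IMG, VIEW, REP = 64 * 64, 7, 128

# Stand-in "representation network": a random linear map + tanh that
# encodes each (sketch, viewpoint) context pair into a feature vector.
W_enc = rng.normal(0, 0.01, (IMG + VIEW, REP))

def encode(sketch, viewpoint):
    x = np.concatenate([sketch.ravel(), viewpoint])
    return np.tanh(x @ W_enc)

# GQN aggregates an arbitrary number of context pairs by summation,
# producing a single order-invariant scene representation.
def scene_representation(context):
    return sum(encode(s, v) for s, v in context)

# Stand-in "generator": conditions on the scene representation and a
# queried viewpoint to produce an image from that new angle
# (the "mental rotation" described in the abstract).
W_dec = rng.normal(0, 0.01, (REP + VIEW, IMG))

def render(representation, query_view):
    x = np.concatenate([representation, query_view])
    return (x @ W_dec).reshape(64, 64)

# As in the thesis's VAE result, a single sketch can serve as context.
sketch = rng.random((64, 64))
context = [(sketch, rng.normal(size=VIEW))]
r = scene_representation(context)
img = render(r, rng.normal(size=VIEW))
print(img.shape)  # (64, 64)
```

Because the context is aggregated by summation, the representation is invariant to the order of the observed sketches, and adding more context views simply refines the same fixed-size vector.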
