Generating Synthetic Training Data with Stable Diffusion

This is a Master's thesis from Linköpings universitet/Institutionen för datavetenskap (Linköping University, Department of Computer and Information Science)

Abstract: The use of image classification across industries has grown significantly in recent years. There are, however, challenges concerning the data used to train such models: training data is often difficult and expensive to obtain, and image data in particular can raise additional problems such as privacy concerns. In recent years, synthetic image generation models such as Stable Diffusion have improved significantly. From a textual description alone, Stable Diffusion can generate a wide variety of photorealistic images. Beyond textual descriptions, conditioning models such as ControlNet have made it possible to supply additional grounding information, such as canny edge and segmentation images. This thesis investigates whether synthetic images generated by Stable Diffusion can be used effectively to train an image classifier. To find the most effective method for generating training data, multiple conditioning methods are investigated and evaluated. The results show that high-quality training data can be generated using several conditioning techniques. The best-performing method used canny-edge-grounded images to augment already existing data. Extending two classes with additional synthetic data generated by this method achieved the highest average F1-score increase, 0.85 percentage points, compared with a baseline trained solely on real images.
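The abstract does not include the thesis' implementation, but the canny-edge-grounded generation it describes can be sketched with the Hugging Face diffusers library. This is a minimal illustration, not the thesis' actual pipeline: the model IDs, file names, prompt, and Canny thresholds below are assumptions chosen for the example.

    # Sketch: generate a synthetic training image grounded on the canny
    # edges of a real image, using Stable Diffusion + ControlNet.
    # Model IDs, paths, prompt, and thresholds are illustrative assumptions.
    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Load a ControlNet trained on canny-edge conditioning, plus a base
    # Stable Diffusion pipeline that accepts it.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Extract canny edges from a real training image; the edge map is the
    # grounding information that constrains the generated image's structure.
    real = np.array(Image.open("real_sample.png").convert("RGB"))
    gray = cv2.cvtColor(real, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # hypothetical thresholds
    edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

    # Generate a photorealistic variant that keeps the real image's layout;
    # the prompt is a placeholder for a class-specific description.
    synthetic = pipe(
        prompt="a photorealistic photo of a <class name> object",
        image=edge_image,
        num_inference_steps=30,
    ).images[0]
    synthetic.save("synthetic_sample.png")

Grounding on the real image's edge map is what makes this augmentation label-preserving: the synthetic image varies in texture and appearance while keeping the object structure, so it can plausibly extend an existing class rather than drift away from it.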
