Deblurring of Cell Images Using Generative Adversarial Networks

This is a Master's thesis from Lund University / Mathematics LTH

Abstract: The digital microscopes that CellaVision produces today use a mechanical autofocus when capturing cell images. This requires the camera objective to be vertically positioned with a precision of 0.4 µm for the objects in the images to be considered properly focused. Because of the small distances involved, the system is sensitive to vibrations, and errors due to mechanical imperfections are hard to avoid. To replace the mechanical autofocus in the system, it would be desirable to digitally transform unfocused images so that they appear focused after capture. In this thesis we investigate whether it is possible to transform an unfocused cell image into a sharp one using generative adversarial networks (GANs). A data set of 10 786 sharp images was collected together with blurry images captured at different distances from the optimal focus, using CellaVision's systems. By implementing and comparing three previously published GANs - pix2pix, PAN and DeblurGAN - we found that pix2pix performed best for our problem. Building on pix2pix, we added losses and changed the network structure to improve the results. To make the network faster, we also changed the architecture of the generator network. We ended up with three different GANs, one of which met the time requirement of transforming a 360×360 image in 70 ms on a CPU. All three final networks managed to sharpen the blurry images to some extent, but not always to the desired focus. The best of the networks successfully transformed around 90% of images captured in the interval −0.8 to 0.8 µm from the optimal focus. The fastest network had the poorest performance of the three, but still managed to successfully transform 78% of the blurry images from the same interval.
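The pix2pix framework the thesis builds on trains its generator with a conditional adversarial loss plus a weighted L1 reconstruction term against the sharp target image. A minimal NumPy sketch of that combined generator loss follows; the toy values and the λ = 100 weighting are illustrative assumptions (λ = 100 is the value used in the original pix2pix paper, not necessarily in this thesis):

```python
import numpy as np

def pix2pix_generator_loss(d_fake, g_out, target, lam=100.0):
    """Generator loss in the pix2pix style: an adversarial term that
    pushes the discriminator's score on generated images toward 1,
    plus a lambda-weighted pixel-wise L1 term toward the sharp target."""
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))   # BCE against the "real" label 1
    l1 = np.mean(np.abs(g_out - target))   # mean absolute pixel error
    return adv + lam * l1

# Toy 2x2 "images" with made-up values (not data from the thesis).
d_fake = np.array([0.5])                    # discriminator score on the fake
g_out = np.array([[0.20, 0.40], [0.60, 0.80]])   # generator output
target = np.array([[0.25, 0.40], [0.60, 0.80]])  # sharp ground truth
loss = pix2pix_generator_loss(d_fake, g_out, target)
```

With these numbers the adversarial term is −ln(0.5) ≈ 0.693 and the weighted L1 term is 100 × 0.0125 = 1.25, so the total loss is about 1.943; in practice the L1 weight dominates early training and drives the output toward the sharp reference.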
