Virtual Staining of Blood Cells using Point Light Source Illumination and Deep Learning

This is a Master's thesis from Lunds universitet/Matematik LTH

Abstract: Blood tests are a cornerstone of modern medicine, and blood samples are almost always stained with chemical colorization methods before being analyzed, whether computationally or manually. Staining makes parts of blood cells discernible that would be invisible in unstained blood. However, the process is difficult to perform correctly and time-consuming, typically requires a trained professional, and may damage or alter cells. There is also no global standard for staining, so different labs use different methods and professionals are trained on different stains. In this thesis, I investigate the possibility of avoiding chemical staining altogether by digitally transforming an image of an unstained blood cell into its stained counterpart using deep neural networks and point light source illumination. The problem is thought to be ill-posed, since an image of a stained cell contains considerably more information than an image of an unstained one. To counteract this and provide additional information to the neural networks, a programmable LED array is used as the lighting device in a digital microscope, and each image of an unstained blood cell is accompanied by several extra images, each taken while the cell is lit by a single LED. This yields an input with considerably more information than traditional lighting alone, and is key to the performance of the presented models. Virtual staining offers several advantages over chemical staining: it 1) avoids potentially damaging cells through an invasive process, 2) saves time and resources, and 3) allows concurrent staining of different cells with different virtual stains. A complete dataset of 29,999 white blood cells was designed and collected during this thesis. To the best of my knowledge, this is the first time a setup of this kind has been used for virtual staining.
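The multi-LED acquisition described above can be sketched as simple channel stacking: one conventional image plus one image per individual LED, combined into a single multi-channel network input. This is a minimal numpy sketch; the number of LEDs and the crop size are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical dimensions: a 64x64 crop and 8 single-LED exposures
# (the actual LED count and image size in the thesis may differ).
H, W, N_LEDS = 64, 64, 8

# One image under full illumination, plus one per individual LED.
full_light = np.random.rand(H, W)
led_images = [np.random.rand(H, W) for _ in range(N_LEDS)]

# Stack along a channel axis to form the network input: each LED
# contributes an extra channel of angle-dependent information.
net_input = np.stack([full_light, *led_images], axis=0)
print(net_input.shape)  # (9, 64, 64)
```

Compared with a single brightfield image, each channel encodes how the cell scatters light arriving from one direction, which is the extra information the abstract credits for the models' performance.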
The investigated neural networks use architectures inspired by ESRGAN, combining Residual-in-Residual blocks with loss functions ranging from pixel-wise to perceptual losses. Furthermore, a generative adversarial network (GAN) is investigated to evaluate whether adversarial training improves performance. The results cover models trained with four different loss functions, one of them a GAN. Variations of these models are also tested with different network depths and dataset sizes. The presented methods show promising results and provide a strong proof of concept for virtual staining. However, in their current state the results are too unreliable, and the networks too slow, to deploy in a practical system.
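The Residual-in-Residual idea from ESRGAN can be illustrated structurally without any deep-learning framework: residual connections are nested, with each level adding a scaled transform of its input back onto that input. The sketch below is purely structural; the inner transform is a hypothetical stand-in for ESRGAN's dense convolutional blocks, and the scaling factor 0.2 is ESRGAN's residual scaling.

```python
import numpy as np

def residual(f, beta=0.2):
    """Wrap a transform f in a scaled skip connection: x + beta * f(x)."""
    return lambda x: x + beta * f(x)

# Hypothetical linear stand-in for a dense convolutional block.
inner = lambda x: 0.5 * x

# Residual-in-Residual: an outer skip connection around a chain of
# three residually-wrapped inner blocks, mirroring ESRGAN's RRDB layout.
chain = lambda x: residual(inner)(residual(inner)(residual(inner)(x)))
rrdb = residual(chain)

y = rrdb(np.ones((4, 4)))  # every element equals 1 + 0.2 * 1.1**3
```

The nesting means gradients can flow through the skip connections at both levels, which is what makes very deep generators of this kind trainable.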
