Machine Learning Algorithm for Classification of Breast Ultrasound Images

This is a Master's thesis from Lunds universitet/Matematik LTH

Authors: Jennie Karlsson; Jennifer Ramkull; [2021]

Keywords: Technology and Engineering;

Abstract: Breast cancer is the most common type of cancer worldwide and its incidence is increasing. Women in low- and middle-income countries have a high mortality-to-incidence ratio, mainly due to a lack of resources and organized health care. A cheap and reliable breast diagnostic tool could enable earlier diagnosis in low-resource countries and contribute to a reduction in breast cancer mortality. We suggest that a point-of-care ultrasound device combined with machine learning (ML) could be a viable solution for accessible breast diagnostics. The aim of the thesis was to develop an ML algorithm, using three different convolutional neural network (CNN) approaches, to classify breast ultrasound images as malignant or benign: (a) a simple CNN, (b) transfer learning using the pre-trained convolutional bases InceptionV3, ResNet50V2, VGG19 and Xception, and (c) eleven networks based on combinations of the four transfer networks in (b), so-called deep feature networks. The data consisted of two breast ultrasound image data sets: (1) an Egyptian data set collected by Cairo University at Baheya Hospital, consisting of 487 benign images and 210 malignant images, which was divided into 80% training, 10% validation and 10% test; and (2) a Swedish data set collected at the Unilabs Mammography Unit at Skåne University Hospital, consisting of 13 benign images and 264 malignant images, which was used only to evaluate the models. All networks were evaluated using AUC, sensitivity, specificity and weighted accuracy. For the networks obtained from transfer learning, Gradient-weighted Class Activation Mapping (Grad-CAM) was performed to generate heatmaps indicating which parts of the image contributed to the network's decision. The best result was achieved by the deep feature combination of InceptionV3, Xception and VGG19, with an AUC of 0.93 and a sensitivity of 95.65%. The results show that the deep feature combinations have the best performance. Possible future improvements include expanding the data set by collecting more images.
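The abstract does not include implementation details, but the "deep feature" approach it describes can be illustrated with a minimal sketch: frozen ImageNet-pre-trained bases produce feature vectors that are concatenated and fed to a small classification head for the benign/malignant decision. This is not the authors' code; the input size, pooling, dropout and optimizer choices below are assumptions for illustration only.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3, Xception, VGG19

IMG_SHAPE = (224, 224, 3)  # assumed input size; the thesis may use another resolution

def frozen_features(base_cls, inputs):
    """Instantiate a pre-trained base, freeze its weights, and return pooled features."""
    base = base_cls(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
    base.trainable = False                      # transfer learning: keep ImageNet weights fixed
    x = base(inputs, training=False)
    return layers.GlobalAveragePooling2D()(x)   # collapse feature maps to one vector per image

inputs = layers.Input(shape=IMG_SHAPE)
# Per-base preprocessing (each application has its own preprocess_input) is omitted for brevity.
features = [frozen_features(cls, inputs) for cls in (InceptionV3, Xception, VGG19)]
x = layers.Concatenate()(features)              # "deep feature" combination of several bases
x = layers.Dropout(0.5)(x)                      # assumed regularization
outputs = layers.Dense(1, activation="sigmoid")(x)  # benign vs. malignant probability

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()

According to the abstract, it is exactly this kind of combination (InceptionV3 + Xception + VGG19 features) that gave the best result, so concatenating complementary pre-trained feature extractors, rather than relying on a single base, is the key design choice the sketch is meant to convey.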
