Explaining Mortality Prediction With Logistic Regression

This is a Bachelor's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: Explainability is a key component in building trust in computer-calculated predictions when they are applied in areas that affect individual people. This bachelor thesis report focuses on explaining the decision-making process of the machine learning method Logistic Regression when predicting mortality. The aim is to present theoretical information about the predictive model as well as an explainable interpretation of its behavior when applied to the clinical MIMIC-III database. The project found a significant difference between particular features with respect to each feature's individual impact on the classification. The feature with the greatest impact was the Glasgow Coma Scale value, demonstrated by the fact that a good classifier could be constructed using only that feature and one other. An important conclusion from this study is that feature selection deserves strong attention early in the implementation process. In this specific case, when medical artificial intelligence is implemented, medical expertise is desirable in order to make a good feature selection.
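The kind of per-feature impact analysis the abstract describes can be sketched with scikit-learn: after fitting a logistic regression on standardized inputs, the magnitude of each coefficient serves as a rough measure of that feature's influence on the prediction. This is a minimal illustration on synthetic data (MIMIC-III requires credentialed access), and the feature names are hypothetical stand-ins, not the thesis's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for clinical features; names are illustrative only.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # already standardized (zero mean, unit variance)
feature_names = ["gcs", "age", "heart_rate"]

# Simulate mortality driven mostly by the first feature, mimicking the
# thesis finding that the Glasgow Coma Scale value dominated.
logits = -2.0 * X[:, 0] + 0.5 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# With standardized inputs, |coefficient| gives a comparable
# per-feature impact measure.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

On this synthetic data the first coefficient comes out with the largest magnitude, mirroring how a single dominant feature can be identified from the fitted model.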
