Explainability of Machine Learning in Forecasts for Retail Sales Campaigns

This is a Bachelor's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Authors: Alexander Olsson; Robin Zamojsky; [2022]


Abstract: Explainable artificial intelligence is a growing field that is becoming increasingly important as machine learning becomes more integrated into businesses and various functions of society. A rising concern is the lack of transparency in the decision-making processes of many machine learning models. The objective of the study was to evaluate to what extent white-box decision tree models could provide explainability for a black-box stacking ensemble model used for demand forecasting in the food retail industry. Two approaches were investigated: ante-hoc, in which a white-box model substitutes for the black box, and post-hoc, in which a white-box surrogate complements the underlying black box. Model performance was measured using mean absolute error (MAE) and mean absolute percentage error (MAPE), while Shapley values represented feature importance. Furthermore, the study aimed to identify aspects important for facilitating trust in forecasts through interviews with business stakeholders. The results showed that the best-performing white-box model in each approach scored unsatisfactorily: 84 MAE and 107% MAPE for the ante-hoc approach, and 40 MAE and 25% MAPE for the post-hoc approach. Additionally, no white-box model successfully replicated the feature importance of the underlying black-box model in the post-hoc approach. In conclusion, all white-box models were deemed inadequate as either substitutes for or surrogates of the black-box model. Finally, the interviews indicated that the decision tree model is applicable as an explanatory tool for black-box models, and the identified trust aspects were: communication, language, and technical vocabulary; congruence between stakeholders; and forecast variables and underlying assumptions.
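To make the post-hoc setup concrete, the following is a minimal sketch (not the authors' code) of the idea described in the abstract: a black-box stacking ensemble is trained on the data, a white-box decision tree is then fitted as a surrogate to the ensemble's predictions rather than to the ground truth, and the surrogate is scored with MAE and MAPE. The data, features, base estimators, and hyperparameters below are all illustrative assumptions; the thesis's actual retail-campaign data and model configuration are not reproduced here.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

    # Synthetic stand-in for campaign sales data (hypothetical features).
    X, y = make_regression(n_samples=2000, n_features=8, noise=10.0, random_state=0)
    y = y - y.min() + 1.0  # shift targets positive so MAPE is well defined
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Black box: a stacking ensemble, as in the thesis (base learners assumed).
    black_box = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)),
                    ("gb", GradientBoostingRegressor(random_state=0))],
        final_estimator=Ridge(),
    )
    black_box.fit(X_train, y_train)

    # Post-hoc surrogate: the white-box tree is trained on the black box's
    # *predictions*, so it approximates the ensemble rather than the truth.
    surrogate = DecisionTreeRegressor(max_depth=5, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    pred = surrogate.predict(X_test)
    print("MAE :", mean_absolute_error(y_test, pred))
    print("MAPE:", 100 * mean_absolute_percentage_error(y_test, pred), "%")

The abstract's feature-importance check could be mirrored by comparing mean absolute Shapley values per feature between surrogate and black box, for instance with the shap package (shap.TreeExplainer for the tree, a model-agnostic explainer for the ensemble); in the ante-hoc variant, the decision tree would instead be trained directly on y_train as a full substitute for the black box.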
