Generating an Interpretable Ranking Model: Exploring the Power of Local Model-Agnostic Interpretability for Ranking Analysis

This is a one-year master's (magister) thesis from Stockholms universitet, Institutionen för data- och systemvetenskap (Department of Computer and Systems Sciences).

Abstract: Machine learning has revolutionized recommendation systems by employing ranking models for personalized item suggestions. However, the complexity of learning-to-rank (LTR) models makes it difficult to understand the reasons behind their ranking outcomes. This lack of transparency raises concerns about potential errors, biases, and ethical implications. To address these issues, interpretable LTR models have emerged as a solution. Currently, the state of the art in interpretable LTR is led by generalized additive models (GAMs). However, ranking GAMs are computationally intensive and struggle with high-dimensional data. To overcome these drawbacks, post-hoc methods, including local interpretable model-agnostic explanations (LIME), have been proposed as alternatives. Nevertheless, a quantitative evaluation comparing the efficacy of post-hoc methods to state-of-the-art ranking GAMs remains largely unexplored. This study investigates the capabilities and limitations of LIME in approximating a complex ranking model with a surrogate model. The study follows an experimental methodology: a neural ranking GAM, trained on two benchmark information-retrieval datasets, serves as the ground truth for evaluating LIME's performance. The study adapts LIME to the ranking setting by translating the problem into a classification task and assesses three sampling strategies, examining their robustness to imbalanced data and their influence on the correctness of LIME's explanations. The findings contribute to understanding the limitations of LIME in the context of ranking: the low similarity between LIME's explanations and those produced by the ranking model highlights the need for more robust sampling strategies specific to ranking.
Additionally, the study emphasizes the importance of developing appropriate evaluation metrics for assessing the quality of explanations in ranking tasks.
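The ranking-to-classification adaptation described in the abstract can be sketched in the spirit of LIME: perturb an instance, label each perturbation by whether the black-box scorer would rank it above a threshold, weight samples by their proximity to the instance, and fit a weighted linear surrogate whose coefficients serve as local feature importances. Everything below is an illustrative assumption rather than the thesis's actual implementation: the `rank_score` stand-in for the neural ranking GAM, the Gaussian sampling strategy, and the median-score threshold are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box ranking scorer, standing in for a trained
# neural ranking GAM (NOT the model used in the thesis).
def rank_score(X):
    return 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2]

def lime_explain(x, predict, n_samples=5000, kernel_width=0.75):
    # 1. Perturb the instance with Gaussian noise (one possible
    #    sampling strategy; the thesis compares three).
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    # 2. Recast ranking as binary classification: label 1 if the
    #    perturbed point would outscore the median perturbation.
    scores = predict(Z)
    y = (scores > np.median(scores)).astype(float)
    # 3. Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate via ridge-regularized
    #    normal equations; its coefficients are the explanation.
    Zb = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    A = Zb.T @ (w[:, None] * Zb) + 1e-6 * np.eye(Zb.shape[1])
    b = Zb.T @ (w * y)
    coef = np.linalg.solve(A, b)
    return coef[:-1]  # per-feature local importances (drop intercept)

x0 = np.array([0.5, 0.2, 0.1])
importances = lime_explain(x0, rank_score)
```

Because the stand-in scorer weights the first feature positively and the second negatively, the surrogate's importances should recover those signs locally; evaluating how faithfully such surrogates track the true model is exactly the comparison the thesis quantifies.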
