Algorithmic decisions in critical domains such as hiring, college admissions, and lending are often based on rankings. Because of the impact these decisions have on individuals, organizations, and population groups, there is a need to understand them: to help individuals improve their position in a ranking, to design better ranking procedures, and to check whether a procedure is legally compliant. In this paper, we present ShaRP -- Shapley for Rankings and Preferences -- a framework, based on Shapley values, that explains the contributions of features to different aspects of a ranked outcome. Using ShaRP, we show that even when the scoring function used by an algorithmic ranker is known and linear, the feature weights do not correspond to the features' Shapley value contributions. Instead, the contributions depend on the feature distributions and on subtle local interactions between the scoring features. ShaRP builds on the Quantitative Input Influence framework to compute feature contributions for several ranking-specific Quantities of Interest, including score, rank, pairwise preference, and top-k. We report an extensive experimental validation of ShaRP on real and synthetic datasets, demonstrating that feature importance can be computed efficiently and that ShaRP compares favorably to several prior local feature importance methods in terms of both generality and quality of explanations. Among our results, we highlight a case study on the CS Rankings dataset: contrary to expectation, we find that a strong track record in Systems research is much more important than one in AI research for placing a CS department in the top 10%. ShaRP is available at https://github.com/DataResponsibly/ShaRP.
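For reference, the attributions discussed above rest on the standard Shapley value of a feature $i$ with respect to a set function $v$ over the feature set $N$; in ShaRP, $v$ is one of the ranking-specific Quantities of Interest named in the abstract (score, rank, pairwise preference, or top-k). The definition below is the textbook formula, not notation taken from the paper itself:

\[
  \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
    \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]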
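To make the abstract's central claim concrete, here is a minimal self-contained Python sketch (deliberately not the ShaRP package or its API) that computes exact Shapley contributions with *rank* as the quantity of interest for a known linear scorer. The coalition semantics, resampling "absent" features from the population in the spirit of Quantitative Input Influence, are an illustrative assumption; the paper's exact construction may differ. Running it shows that the Shapley contributions to an item's rank generally do not match the scoring weights.

```python
# Hypothetical sketch, NOT the ShaRP package API: exact Shapley contributions
# of each feature to one item's rank under a known linear scoring function.
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # population of 200 items, 3 features
w = np.array([0.5, 0.3, 0.2])   # known linear scoring weights

def score(X):
    return X @ w

def rank_of(x, X):
    """Rank of item x in population X (0 = best, i.e., highest score)."""
    return int(np.sum(score(X) > float(x @ w)))

def rank_qoi(x, coalition, X, n_samples=100):
    """Expected rank of x when only features in `coalition` are fixed to x's
    values and the remaining features are resampled from the population
    (a QII-style imputation; an assumption of this sketch)."""
    Z = X[rng.integers(0, len(X), size=n_samples)].copy()
    Z[:, list(coalition)] = x[list(coalition)]
    return np.mean([rank_of(z, X) for z in Z])

def shapley_rank(x, X):
    """Exact Shapley contributions of each feature to x's expected rank."""
    n = X.shape[1]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (rank_qoi(x, S + (i,), X) - rank_qoi(x, S, X))
    return phi

x = X[0]
print("scoring weights:           ", w)
print("Shapley rank contributions:", shapley_rank(x, X))
```

Negative contributions improve (lower) the expected rank. Even in this toy setting, the relative magnitudes of the contributions are driven by the feature distributions and their interactions, not by the weights alone, which is the phenomenon the abstract highlights.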