We propose and study a framework for quantifying the importance of the choice of parameter values to the result of a query over a database. These parameters occur as constants in logical queries, such as conjunctive queries. In our framework, the importance of a parameter is its SHAP score, a popular instantiation of the game-theoretic Shapley value for measuring the importance of feature values in machine learning models. We motivate the use of this score by explaining the intuition behind SHAP and by showing that two different, seemingly opposing, approaches to quantifying the contribution of a parameter both arrive at it. Applying SHAP requires two components in addition to the query and the database: (a) a probability distribution over the combinations of parameter values, and (b) a utility function that measures the similarity between the result for the original parameters and the result for hypothetical parameters. The main question addressed in the paper is the complexity of calculating the SHAP score for different distributions and similarity measures. In particular, we devise polynomial-time algorithms for full acyclic conjunctive queries under certain natural similarity functions. We extend our results to conjunctive queries with parameterized filters (e.g., inequalities between variables and parameters). We also illustrate the application of our results to "why-not" explanations (which aim to explain the absence of a query answer), where we consider the task of quantifying the contribution of query components to the elimination of the non-answer under consideration. Finally, we discuss a simple approximation technique for the case of correlated parameters.
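For concreteness, the following is a minimal sketch of how the Shapley value specializes to this setting, stated only from the abstract's description; the symbols $P$ (the set of parameters), $\Pi$ (the distribution over parameter combinations), $\mathrm{sim}$ (the similarity/utility function), $Q[\cdot](D)$ (the query result on database $D$ under given parameter values), and the coalition utility $v$ are notation assumed here for illustration and may differ from the paper's exact definitions. Each parameter plays the role of a player, and the utility of a coalition $S$ is the expected similarity between the original result and the result obtained when the parameters in $S$ keep their actual values $\mathbf{p}_S$ while the remaining parameters are drawn from $\Pi$:
\[
v(S) \;=\; \mathbb{E}_{\mathbf{q}\sim\Pi}\Big[\mathrm{sim}\big(Q[\mathbf{p}](D),\; Q[\mathbf{p}_S \cup \mathbf{q}_{\bar S}](D)\big)\Big],
\qquad
\mathrm{SHAP}(p_i) \;=\; \sum_{S \subseteq P\setminus\{p_i\}} \frac{|S|!\,(|P|-|S|-1)!}{|P|!}\,\big(v(S\cup\{p_i\}) - v(S)\big).
\]
The second formula is the standard Shapley value of player $p_i$ with respect to $v$; the paper's complexity results concern evaluating this quantity for specific choices of $\Pi$ and $\mathrm{sim}$.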