Machine learning models are increasingly used in consequential domains such as loan approval and hiring, yet they often function as black boxes that obscure their decision-making processes. Transparency is crucial: individuals need explanations to understand decisions, especially when a decision is unfavorable to them. Ethical and legal considerations further require informing affected individuals of the changes to input attribute values (features) that would lead to their desired outcome. Our work generates such counterfactual explanations while accounting for causal dependencies between features. We present CoGS (Counterfactual Generation with s(CASP)), a framework that uses the goal-directed Answer Set Programming system s(CASP) to generate counterfactuals from rule-based machine learning models, specifically rules learned by the FOLD-SE algorithm. CoGS computes realistic, causally consistent changes to attribute values and finds a path of such changes from the undesired outcome to the desired one. We present the details of the CoGS framework along with its evaluation.
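To make the idea concrete, below is a minimal, self-contained Python sketch of the kind of search the abstract describes; it is an illustration only, not the paper's implementation, which encodes this reasoning declaratively in s(CASP). All feature names, decision rules, causal effects, and the `find_counterfactual_path` helper are hypothetical: a hand-written loan-approval rule stands in for rules a system like FOLD-SE might learn, and a breadth-first search looks for a causally consistent sequence of interventions that flips the decision.

```python
# Hypothetical sketch of counterfactual path finding (illustration only;
# CoGS itself performs this reasoning in s(CASP), not Python).
from collections import deque

# Stand-in for a learned rule-based model: approve the loan when income
# is at least 40000 and the applicant has no outstanding default.
def loan_approved(state):
    return state["income"] >= 40000 and not state["has_default"]

# Hypothetical causal dependencies: each intervention returns a new state
# with its downstream effects applied, so states stay causally consistent
# (e.g., obtaining a degree also raises income).
def interventions(state):
    if not state["has_degree"]:
        yield "obtain degree", {**state, "has_degree": True,
                                "income": state["income"] + 15000}
    if state["has_default"]:
        yield "clear default", {**state, "has_default": False}

def find_counterfactual_path(state, max_steps=3):
    """Breadth-first search for a shortest sequence of at most max_steps
    interventions leading from the current state to loan approval."""
    queue = deque([(state, [])])
    while queue:
        s, path = queue.popleft()
        if loan_approved(s):
            return path, s
        if len(path) < max_steps:
            for name, new_state in interventions(s):
                queue.append((new_state, path + [name]))
    return None  # no counterfactual found within max_steps interventions

initial = {"income": 30000, "has_degree": False, "has_default": True}
print(find_counterfactual_path(initial))
# -> (['obtain degree', 'clear default'],
#     {'income': 45000, 'has_degree': True, 'has_default': False})
```

In the framework itself, the learned rules and causal dependencies would presumably be expressed as an answer set program, with s(CASP)'s goal-directed execution taking the place of this imperative search.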