Causal inference and model interpretability research are gaining increasing attention, especially in the domains of healthcare and bioinformatics. Despite recent successes in this field, decorrelating features in nonlinear settings while retaining human-interpretable representations has not been adequately investigated. To address this issue, we introduce a novel method with a variable decorrelation regularizer that handles both linear and nonlinear confounding. Moreover, we employ association rules, obtained by association rule mining over the original features, as new representations that more closely approximate human decision patterns and thereby increase model interpretability. Extensive experiments are conducted on four healthcare datasets (one synthetically generated and three real-world collections covering different diseases). Quantitative comparisons against baseline approaches on parameter estimation and causality computation demonstrate the model's superior performance. Furthermore, expert evaluation by healthcare professionals validates the effectiveness and interpretability of the proposed model.
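The abstract names two core components: a variable decorrelation regularizer and association-rule-based representations. The following is a minimal sketch, not the authors' implementation, of what such a regularizer could look like: it penalizes pairwise dependence between features, using squared off-diagonal correlations for the linear case and an RBF-kernel HSIC-style term for nonlinear dependence. All function names and weighting parameters here are illustrative assumptions.

```python
# Minimal sketch (assumed formulation) of a variable decorrelation regularizer.
import numpy as np

def linear_decorrelation_penalty(X):
    """Sum of squared off-diagonal entries of the feature correlation matrix."""
    C = np.corrcoef(X, rowvar=False)           # p x p correlation matrix
    off_diag = C - np.diag(np.diag(C))
    return np.sum(off_diag ** 2)

def _rbf_gram(x, gamma=1.0):
    """RBF Gram matrix for a single feature column."""
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d)

def hsic_penalty(x, y, gamma=1.0):
    """Biased HSIC estimate between two feature columns (nonlinear dependence)."""
    n = x.shape[0]
    K, L = _rbf_gram(x, gamma), _rbf_gram(y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def decorrelation_regularizer(X, lam_lin=1.0, lam_nonlin=1.0):
    """Combined penalty over all feature pairs, added to the model's loss."""
    p = X.shape[1]
    nonlin = sum(hsic_penalty(X[:, i], X[:, j])
                 for i in range(p) for j in range(i + 1, p))
    return lam_lin * linear_decorrelation_penalty(X) + lam_nonlin * nonlin

# Example: the penalty on independent random features should be small.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
print(decorrelation_regularizer(X))
```

For the second component, a hedged sketch of mining association rules from binarized clinical features is shown below, using the mlxtend library rather than the paper's own pipeline; the patient indicator columns are hypothetical.

```python
# Assumed setup: association rules over one-hot clinical indicators (mlxtend).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical binary patient records.
records = pd.DataFrame(
    [[1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 1, 1]],
    columns=["hypertension", "diabetes", "smoker", "high_bmi"],
).astype(bool)

frequent = apriori(records, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

The mined rules can then serve as higher-level, human-readable features in place of (or alongside) the raw inputs, which is the interpretability benefit the abstract refers to.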