We present a generalized balancing method -- stable weights via Neural Gibbs Density -- that applies to estimating causal effects under an arbitrary mixture of discrete and continuous interventions. The weights are trainable through back-propagation and can be obtained with standard neural-network algorithms. In addition, we provide a way to measure the quality of the weights by estimating the mutual information of the balanced distribution. Our method is easy to implement with any existing deep learning library, and the resulting weights can be used with most state-of-the-art supervised algorithms.
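To make the core idea concrete, here is a loose, self-contained illustration, not the paper's algorithm: balancing weights are given a Gibbs-style form exp(-θ·x·t) with a single scalar parameter, dependence between treatment and covariate under the weighted distribution is measured by a squared weighted correlation (a crude stand-in for the neural mutual-information estimator), and finite-difference gradient descent stands in for back-propagation through a neural network. The data-generating process and all names are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)            # covariate (confounder)
t = x + rng.normal(size=n)        # continuous treatment, confounded by x

def weights(theta):
    # Gibbs-style balancing weights: w_i proportional to exp(-theta * x_i * t_i),
    # normalized so the weights have mean 1.
    w = np.exp(-theta * x * t)
    return w / w.mean()

def dependence(theta):
    # Squared weighted correlation between x and t -- a simple proxy for the
    # mutual information of the weighted (balanced) distribution.
    w = weights(theta)
    xm, tm = np.average(x, weights=w), np.average(t, weights=w)
    cov = np.average((x - xm) * (t - tm), weights=w)
    sx = np.sqrt(np.average((x - xm) ** 2, weights=w))
    st = np.sqrt(np.average((t - tm) ** 2, weights=w))
    return (cov / (sx * st)) ** 2

# Gradient descent with finite-difference gradients, standing in for
# back-propagation through a weight network.
theta, lr, eps = 0.0, 0.5, 1e-4
for _ in range(200):
    g = (dependence(theta + eps) - dependence(theta - eps)) / (2 * eps)
    theta -= lr * g

print(dependence(0.0), dependence(theta))  # dependence drops after reweighting
```

In this toy setup the unweighted data have a strong x–t correlation; minimizing the dependence measure drives the weighted distribution toward independence of treatment and covariate, which is the balancing goal the abstract describes. The paper's method replaces the scalar θ with a neural network and the correlation proxy with a mutual-information estimate.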