Bayesian Networks may be appealing for clinical decision-making because they incorporate causal knowledge, but their practical adoption remains limited by their inability to handle unstructured data. While neural networks do not share this limitation, they are not interpretable and cannot natively exploit causal structure in the input space. Our goal is to build neural networks that combine the advantages of both approaches. Motivated by the prospect of injecting causal knowledge into the training of such neural networks, this work presents initial steps in that direction. We demonstrate how a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian Network. Additionally, we propose two training strategies that allow encoding the independence relations inferred from a given causal structure into the neural network. We present initial results in a proof-of-concept setting, showing that the neural model acts as an understudy to its Bayesian Network counterpart, approximating its probabilistic and causal properties.
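To make the idea of a neural network outputting conditional probabilities concrete, the following is a minimal sketch, not the authors' method: a small network is trained to approximate P(D=1 | S1, S2) for a hypothetical binary "diagnosis" node with two binary "symptom" parents, mimicking one conditional probability table of a Bayesian Network. All variable names, the probability table, and the architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical setup (not from the paper): binary diagnosis D with two
# binary symptom parents S1, S2, and a hand-specified table P(D=1 | S1, S2).
torch.manual_seed(0)
p_table = {(0, 0): 0.05, (0, 1): 0.40, (1, 0): 0.55, (1, 1): 0.90}

# Synthetic training data sampled from the assumed conditional table.
s = torch.randint(0, 2, (4096, 2)).float()
p_d = torch.tensor([p_table[(int(a), int(b))] for a, b in s])
d = torch.bernoulli(p_d)

# A small MLP trained to output P(D=1 | S1, S2), i.e. to act as the
# conditional-probability "understudy" of a single Bayesian Network node.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(s).squeeze(-1), d)
    loss.backward()
    opt.step()

# After training, the sigmoid of the output approximates the conditional table.
with torch.no_grad():
    for s1, s2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        q = torch.sigmoid(model(torch.tensor([[float(s1), float(s2)]]))).item()
        print(f"P(D=1 | S1={s1}, S2={s2}) ~ {q:.2f} (target {p_table[(s1, s2)]:.2f})")
```

This sketch only covers the probabilistic side; the paper's two training strategies for encoding independence relations from a causal structure are not represented here.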