Several structure-learning algorithms have been proposed for staged trees, which are asymmetric extensions of Bayesian networks. However, existing algorithms either scale poorly as the number of variables grows, restrict the set of models a priori, or fail to find models comparable to those of existing methods. Here, we define an alternative algorithm based on a totally ordered hyperstage. We demonstrate how it yields a quadratically scaling structure-learning algorithm for staged trees that restricts the model space a posteriori. Through comparative analysis, we show that the ordering provided by the mean posterior distributions allows us to outperform existing methods in both computational time and model score. This method also enables us to learn more complex relationships than existing model selection techniques by expanding the model space, and we illustrate how this can enrich inferences in a real study.
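The core idea can be illustrated with a minimal, hypothetical sketch: stages of a binary variable are totally ordered by the posterior mean of their success probability, so only adjacent stages in that order need be compared when deciding whether to merge them, which bounds the number of comparisons quadratically. The function names, the Beta/Dirichlet prior, the toy counts, and the greedy adjacent-merge rule below are illustrative assumptions, not the paper's exact procedure.

```python
from math import lgamma

def log_marginal(counts, alpha=1.0):
    """Log marginal likelihood of one stage's counts under a
    symmetric Dirichlet(alpha, ..., alpha) prior."""
    a0 = alpha * len(counts)
    n = sum(counts)
    out = lgamma(a0) - lgamma(a0 + n)
    for c in counts:
        out += lgamma(alpha + c) - lgamma(alpha)
    return out

def posterior_mean(counts, alpha=1.0):
    """Posterior mean of the first outcome's probability."""
    return (alpha + counts[0]) / (alpha * len(counts) + sum(counts))

def merge_ordered_stages(stages, alpha=1.0):
    """Sort stages by posterior mean, then greedily merge adjacent
    stages whenever merging raises the Bayesian (marginal-likelihood)
    score.  Each pass only inspects neighbouring pairs, so the total
    number of score evaluations grows quadratically in the number of
    initial stages."""
    stages = sorted(stages, key=lambda c: posterior_mean(c, alpha))
    merged = True
    while merged and len(stages) > 1:
        merged = False
        for i in range(len(stages) - 1):
            a, b = stages[i], stages[i + 1]
            joint = [x + y for x, y in zip(a, b)]
            if (log_marginal(joint, alpha)
                    > log_marginal(a, alpha) + log_marginal(b, alpha)):
                stages[i:i + 2] = [joint]  # collapse the pair into one stage
                merged = True
                break
    return stages

# Toy counts (successes, failures) for four stages: two pairs with
# similar underlying probabilities, which the merge step should collapse.
stages = [[40, 10], [38, 12], [5, 45], [6, 44]]
result = merge_ordered_stages(stages)
print(result)  # the four stages collapse into two merged stages
```

Because the stages are totally ordered, a stage can only be merged with its neighbours, which is what removes the combinatorial explosion of comparing every pair of stages.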