Many scientific problems require identifying a small set of covariates that are associated with a target response and estimating their effects. Often, these effects are nonlinear and include interactions, so linear and additive methods can lead to poor estimation and variable selection. The Bayesian framework makes it straightforward to simultaneously express sparsity, nonlinearity, and interactions in a hierarchical model. But, as with the few other methods that handle this trifecta, inference is computationally intractable, with runtime at least quadratic in the number of covariates, and often worse. In the present work, we solve this computational bottleneck. We first show that suitable Bayesian models can be represented as Gaussian processes (GPs). We then demonstrate how a kernel trick can reduce computation with these GPs to O(# covariates) time for both variable selection and estimation. Our resulting fit corresponds to a sparse orthogonal decomposition of the regression function in a Hilbert space (i.e., a functional ANOVA decomposition), where interaction effects represent all variation that cannot be explained by lower-order effects. On a variety of synthetic and real datasets, our approach outperforms existing methods used for large, high-dimensional datasets while remaining competitive in runtime (or running orders of magnitude faster).
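To give a concrete sense of how a kernel trick can avoid quadratic cost, the sketch below shows a standard identity (not necessarily the exact kernel used in this work): the sum of a second-order interaction kernel over all pairs of covariates can be computed from per-covariate base kernels in O(p) time, since the sum over pairs i < j of k_i * k_j equals ((sum_i k_i)^2 - sum_i k_i^2) / 2. The variance weights eta0, eta1, eta2 and the linear base kernel are illustrative assumptions.

```python
import numpy as np

def interaction_kernel(x, z, eta0=1.0, eta1=1.0, eta2=1.0):
    """Evaluate a GP kernel with main effects and all pairwise
    interactions in O(p) time (p = number of covariates).

    Illustrative sketch: uses a linear base kernel k_i = x_i * z_i
    per covariate; eta terms weight intercept, main, and pairwise parts.
    """
    k = x * z                      # per-covariate base kernels, shape (p,)
    s1 = k.sum()                   # sum of main-effect kernels: O(p)
    s2 = (k ** 2).sum()            # sum of squared base kernels: O(p)
    # Identity for elementary symmetric polynomials: the sum over all
    # pairs i < j of k_i * k_j is (s1^2 - s2) / 2 -- no O(p^2) loop.
    pairwise = (s1 ** 2 - s2) / 2.0
    return eta0 + eta1 * s1 + eta2 * pairwise
```

A brute-force double loop over all p(p-1)/2 pairs gives the same value, which is an easy correctness check for the O(p) evaluation.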