Repulsive mixture models have recently gained popularity for Bayesian cluster detection. Compared to more traditional mixture models, repulsive mixture models produce a smaller number of well-separated clusters. The most commonly used methods for posterior inference either require the number of components to be fixed a priori or rely on reversible jump MCMC computation. We present a general framework for mixture models in which the prior on the `cluster centres' is a finite repulsive point process depending on a hyperparameter and specified by a density that may involve an intractable normalizing constant. By investigating the posterior characterization of this class of mixture models, we derive an MCMC algorithm that avoids the well-known difficulties associated with reversible jump MCMC computation. In particular, we use an ancillary variable method that eliminates the intractable normalizing constants from the Hastings ratio. The ancillary variable method relies on a perfect simulation algorithm, and we demonstrate that this is fast because the number of components is typically small. In several simulation studies and an application to sociological data, we illustrate the advantages of our new methodology over existing methods, and we compare the use of a determinantal point process and a repulsive Gibbs point process as prior models.
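As a rough, generic sketch of the ancillary variable idea (the notation below is illustrative and not necessarily the construction used in the paper): suppose the point process prior on the cluster centres $\gamma$ has density $p(\gamma \mid \psi) = h(\gamma \mid \psi)/Z(\psi)$, where $\psi$ is the hyperparameter and $Z(\psi)$ is intractable. In an exchange-type update of $\psi$, one proposes $\psi' \sim q(\cdot \mid \psi)$, draws an auxiliary configuration $\gamma'$ from $p(\cdot \mid \psi')$ by perfect simulation, and accepts the move with probability $\min\{1, a\}$, where
\[
a = \frac{h(\gamma \mid \psi')\, p(\psi')\, q(\psi \mid \psi')\, h(\gamma' \mid \psi)}
         {h(\gamma \mid \psi)\, p(\psi)\, q(\psi' \mid \psi)\, h(\gamma' \mid \psi')}.
\]
The intractable constants $Z(\psi)$ and $Z(\psi')$ cancel, so the Hastings ratio can be evaluated exactly.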