Traditional hidden Markov models have been a useful tool for understanding and modeling stochastic dynamic linear data. For data that are non-Gaussian or nonlinear in the mean, models such as mixtures of Gaussian hidden Markov models suffer from the computation of precision matrices and contain many unnecessary parameters. As a consequence, such models often perform better when all variables are assumed to be independent, a hypothesis that may be unrealistic. Hidden Markov models based on kernel density estimation are also capable of modeling non-Gaussian data, but they likewise assume independence between variables. In this article, we introduce a new hidden Markov model based on kernel density estimation that can capture kernel dependencies using context-specific Bayesian networks. The proposed model is described, together with a learning algorithm based on the expectation-maximization algorithm. Additionally, the model is compared with related HMMs on synthetic and real data. From the results, the benefits of the proposed model in likelihood and classification accuracy are quantified and analyzed.