Hidden Markov models (HMMs) are commonly used for disease progression modeling when the true patient health state is not fully known. Since the HMM likelihood function may have multiple local optima, estimation can be improved by incorporating additional patient covariates. To allow for this, we develop hidden Markov recurrent neural networks (HMRNNs), a special case of recurrent neural networks with the same likelihood function as a corresponding discrete-observation HMM. The HMRNN can be combined with any other predictive neural network that takes patient information as input, with all parameters estimated simultaneously via gradient descent. Using a dataset of Alzheimer's disease patients, we demonstrate how combining the HMRNN with other predictive neural networks improves disease forecasting performance and yields a novel clinical interpretation compared with a standard HMM trained via expectation-maximization.
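As a rough illustration of the core idea (not the authors' implementation), the sketch below writes the scaled HMM forward algorithm as an RNN-style recurrence in PyTorch, so that the initial-distribution, transition, and emission parameters of a discrete-observation HMM can be fit by gradient descent. The function name, tensor names (pi0, A, B, theta_*), and the softmax parameterization are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: HMM forward algorithm as an RNN-style recurrence,
# trainable by gradient descent (assumed PyTorch implementation).
import torch

def hmm_forward_log_likelihood(obs, pi0, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    obs : LongTensor of shape (T,)  -- observed symbol indices
    pi0 : Tensor of shape (S,)      -- initial state distribution
    A   : Tensor of shape (S, S)    -- transition matrix, rows sum to 1
    B   : Tensor of shape (S, V)    -- emission matrix, rows sum to 1
    """
    # The RNN "hidden state": filtered state probabilities (scaled alphas).
    alpha = pi0 * B[:, obs[0]]
    log_lik = torch.log(alpha.sum())
    alpha = alpha / alpha.sum()                # rescale for numerical stability
    for t in range(1, len(obs)):
        # One recurrence step: linear transition, then elementwise emission gate.
        alpha = (alpha @ A) * B[:, obs[t]]
        log_lik = log_lik + torch.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

# Usage sketch: unconstrained parameters mapped through softmax so gradient
# descent preserves the row-stochasticity constraints. Dimensions are made up.
S, V = 3, 4
theta_pi = torch.zeros(S, requires_grad=True)
theta_A  = torch.zeros(S, S, requires_grad=True)
theta_B  = torch.zeros(S, V, requires_grad=True)
opt = torch.optim.Adam([theta_pi, theta_A, theta_B], lr=0.05)
obs = torch.tensor([0, 2, 1, 3, 1])
for _ in range(200):
    opt.zero_grad()
    nll = -hmm_forward_log_likelihood(
        obs,
        torch.softmax(theta_pi, dim=0),
        torch.softmax(theta_A, dim=1),
        torch.softmax(theta_B, dim=1))
    nll.backward()
    opt.step()
```

Because the forward recursion reduces to matrix-vector products and elementwise gates, such a module could share an optimizer with any other network that maps patient covariates to, for example, the initial state distribution, which is the kind of joint gradient-based estimation the abstract describes.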