Vector autoregressions (VARs) have an associated order $p$: conditional on observations at the preceding $p$ time points, the variable at time $t$ is independent of all earlier history. Learning the order of the model is therefore vital for its characterisation and subsequent use in forecasting. It is common to assume that a VAR is stationary; this prevents the predictive variance of the process from increasing without bound as the forecast horizon increases and facilitates interpretation of the relationships between variables. A VAR is stable, and hence stationary, if and only if the roots of its characteristic equation lie outside the unit circle, which constrains the autoregressive coefficient matrices to lie in the stationary region. Unfortunately, the geometry of this region is very complicated, which impedes specification of a prior. In this work, the autoregressive coefficients are mapped to a set of transformed partial autocorrelation matrices which are unconstrained, allowing for straightforward prior specification, routine computational inference, and meaningful interpretation of the magnitudes of the matrix elements. The multiplicative gamma process is used to build a prior for the unconstrained matrices which encourages increasing shrinkage of the partial autocorrelation parameters as the lag increases. Identifying the lag beyond which the partial autocorrelations become zero then determines the order of the process. Posterior inference is performed using Hamiltonian Monte Carlo via Stan. A truncation criterion is used to determine whether a partial autocorrelation matrix has been effectively shrunk to zero, with the value of the truncation threshold motivated by classical theory on the sampling distribution of the partial autocorrelation function. The methodology is applied to neural activity data in order to investigate ultradian rhythms in the brain.
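For concreteness, the displays below sketch the model class and the shrinkage prior described above in standard notation, which may differ from the paper's own; the multiplicative gamma process construction follows Bhattacharya and Dunson (2011), and the exact form of its application to the transformed partial autocorrelation matrices is an assumption here. An $m$-dimensional VAR($p$) and its stability condition can be written as
\begin{align}
y_t &= A_1 y_{t-1} + \cdots + A_p y_{t-p} + \epsilon_t, \qquad \epsilon_t \sim \mathrm{N}_m(0, \Sigma),\\
\det&\bigl(I_m - A_1 z - \cdots - A_p z^p\bigr) \neq 0 \quad \text{for all } |z| \le 1,
\end{align}
so that all roots of the characteristic equation lie outside the unit circle. Under the multiplicative gamma process, lag-specific precisions are built as
\begin{equation}
\tau_h = \prod_{\ell=1}^{h} \delta_\ell, \qquad \delta_1 \sim \mathrm{Ga}(a_1, 1), \quad \delta_\ell \sim \mathrm{Ga}(a_2, 1) \ \text{for } \ell \ge 2,
\end{equation}
which tend to grow with the lag $h$; for example, $\mathrm{E}(\tau_h) = a_1 a_2^{h-1}$ increases when $a_2 > 1$. A $\mathrm{N}(0, \tau_h^{-1})$ prior on the elements of the $h$th transformed partial autocorrelation matrix therefore shrinks them increasingly strongly towards zero as the lag grows.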