Block-based resampling estimators have been intensively investigated for weakly dependent time processes, which has helped to inform implementation (e.g., best block sizes). However, little is known about resampling performance and block sizes under strong or long-range dependence. To establish guideposts in block selection, we consider a broad class of strongly dependent time processes, formed by a transformation of a stationary long-memory Gaussian series, and examine block-based resampling estimators for the variance of the prototypical sample mean; extensions to general statistical functionals are also considered. Unlike weak dependence, the properties of resampling estimators under strong dependence are shown to depend intricately on the nature of non-linearity in the time series (beyond Hermite ranks), in addition to the long-memory coefficient and block size. Additionally, the intuition has often been that optimal block sizes should be larger under strong dependence (say $O(n^{1/2})$ for a sample size $n$) than the optimal order $O(n^{1/3})$ known under weak dependence. This intuition turns out to be largely incorrect, though a block order $O(n^{1/2})$ may be reasonable (and even optimal) in many cases, owing to non-linearity in a long-memory time series. While optimal block sizes are more complex under long-range dependence than under short-range dependence, we provide a consistent data-driven rule for block selection, and numerical studies illustrate that the guides for block selection perform well in other block-based problems with long-memory time series, such as distribution estimation and strategies for testing Hermite rank.
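For readers who want to experiment with the setting described above, the following is a minimal numerical sketch (not the paper's implementation or its data-driven rule). It simulates a transformed long-memory Gaussian series, here fractional Gaussian noise with Hurst parameter $H=0.8$, squared and centered, i.e., a Hermite-rank-2 transform, and applies a standard moving-block (subsampling) estimator of the variance of the sample mean at block lengths of order $n^{1/3}$ and $n^{1/2}$. All function names, the choice of transform, and the parameter values are illustrative assumptions.

```python
import numpy as np

def fgn_autocov(lags, hurst):
    """Autocovariance of fractional Gaussian noise at the given lags."""
    k = np.abs(lags).astype(float)
    return 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                  + np.abs(k - 1) ** (2 * hurst))

def simulate_fgn(n, hurst, rng):
    """Simulate one length-n path of fractional Gaussian noise by circulant
    embedding (Davies-Harte)."""
    gamma = fgn_autocov(np.arange(n + 1), hurst)
    row = np.concatenate([gamma, gamma[-2:0:-1]])   # circulant first row, length 2n
    lam = np.fft.fft(row).real                      # eigenvalues of the circulant matrix
    lam = np.clip(lam, 0.0, None)                   # guard tiny negative round-off
    m = row.size
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m)
    return y.real[:n]

def block_variance_estimate(x, block_len):
    """Moving-block (subsampling) estimate of Var(sample mean):
    (block_len / n) times the average squared deviation of block means
    from the overall mean."""
    n = x.size
    csum = np.concatenate([[0.0], np.cumsum(x)])
    block_means = (csum[block_len:] - csum[:-block_len]) / block_len
    return (block_len / n) * np.mean((block_means - x.mean()) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, hurst, reps = 4096, 0.8, 200                 # strong dependence: H > 1/2
    estimates = {"n^(1/3)": [], "n^(1/2)": []}
    sample_means = []
    for _ in range(reps):
        g = simulate_fgn(n, hurst, rng)
        x = g ** 2 - 1.0                            # non-linear transform (Hermite rank 2)
        sample_means.append(x.mean())
        estimates["n^(1/3)"].append(block_variance_estimate(x, int(n ** (1 / 3))))
        estimates["n^(1/2)"].append(block_variance_estimate(x, int(n ** (1 / 2))))
    print("Monte Carlo Var(sample mean):", np.var(sample_means))
    for label, vals in estimates.items():
        print(f"block length ~ {label}: mean block estimate {np.mean(vals):.3e}")
```

Comparing the two block lengths against the Monte Carlo variance of the sample mean gives a rough sense of how strongly the estimator's bias depends on block size and on the non-linear transform under long memory, which is the behavior the paper characterizes.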