A major family of sufficient dimension reduction (SDR) methods, called inverse regression, commonly requires the distribution of the predictor $X$ to have a linear $E(X|\beta^\mathsf{T}X)$ and a degenerate $\mathrm{var}(X|\beta^\mathsf{T}X)$ for the desired reduced predictor $\beta^\mathsf{T}X$. In this paper, we adjust the first- and second-order inverse regression methods by modeling $E(X|\beta^\mathsf{T}X)$ and $\mathrm{var}(X|\beta^\mathsf{T}X)$ under a mixture model assumption on $X$, which allows these terms to convey more complex patterns and is most suitable when $X$ has a clustered sample distribution. The proposed SDR methods build a natural path between inverse regression and localized SDR methods, and in particular inherit the advantages of both: they are $\sqrt{n}$-consistent, efficiently implementable, directly adjustable to high-dimensional settings, and able to fully recover the desired reduced predictor. These findings are illustrated by simulation studies and a real data example, which also suggest that the proposed methods are effective for nonclustered data.
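For concreteness, the two moment conditions above take the following standard explicit forms in the inverse regression literature (a minimal sketch, assuming for illustration that $X$ has mean $\mu$ and covariance $\Sigma$ and satisfies, e.g., multivariate normality, under which both displays hold exactly):
$$E(X \mid \beta^\mathsf{T}X) = \mu + \Sigma\beta(\beta^\mathsf{T}\Sigma\beta)^{-1}\beta^\mathsf{T}(X-\mu), \qquad \mathrm{var}(X \mid \beta^\mathsf{T}X) = \Sigma - \Sigma\beta(\beta^\mathsf{T}\Sigma\beta)^{-1}\beta^\mathsf{T}\Sigma.$$
The first display is the linearity condition exploited by first-order methods such as sliced inverse regression, and the second is the constant (degenerate) conditional variance exploited by second-order methods such as sliced average variance estimation.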