In this work, we characterize two data piling phenomena for a high-dimensional binary classification problem with general heterogeneous covariance models, which were previously characterized only under restricted homogeneous covariance models. Data piling refers to the phenomenon where the projections of the training data onto a direction vector take exactly two distinct values, one for each class. The first data piling phenomenon occurs for any data when the dimension $p$ is larger than the sample size $n$. We show that the second data piling phenomenon, which refers to data piling of independent test data, can occur in an asymptotic regime where $p$ grows while $n$ is fixed. We further show that a second maximal data piling direction, which gives an asymptotically maximal distance between the two piles of independent test data, can be obtained by projecting the first maximal data piling direction onto the nullspace of the common leading eigenspace. Based on the second data piling phenomenon, we propose various linear classification rules that ensure perfect classification of high-dimension, low-sample-size data under generalized heterogeneous spiked covariance models.
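As a rough illustration of the projection step described above (the notation $v_{\mathrm{MDP}}$ for the first maximal data piling direction and $U$ for an orthonormal basis of the common leading eigenspace is introduced here for exposition and is not taken from the abstract), the second maximal data piling direction can be sketched as

$\tilde{v}_{\mathrm{MDP}} \;\propto\; (I_p - U U^{\top})\, v_{\mathrm{MDP}},$

that is, the component of $v_{\mathrm{MDP}}$ lying in the nullspace of the common leading eigenspace, normalized to unit length if desired.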