Studies of large-scale, high-dimensional data in fields such as genomics and neuroscience have yielded new scientific insights. Yet, despite these advances, such studies often face several challenges simultaneously: non-linearity, slow computation, inconsistency and uncertain convergence, and small sample sizes relative to high feature dimensions. Here, we propose a relatively simple, scalable, and consistent nonlinear dimension reduction method that can address these issues in unsupervised settings. We call this method Statistical Quantile Learning (SQL) because, methodologically, it leverages a quantile approximation of the latent variables together with standard nonparametric techniques (sieve or penalized methods). Under this formulation, estimating the model reduces to a convex assignment matching problem. Theoretically, we establish the asymptotic properties of SQL and its rates of convergence. Operationally, SQL overcomes both the parametric restrictions of nonlinear factor models in statistics and the difficulties of hyperparameter specification and vanishing gradients in deep learning. Simulation studies support the theory and show that SQL outperforms state-of-the-art statistical and machine learning methods. Compared to its linear competitors, SQL explains more variance, yields better separation and explanation, and delivers more accurate outcome prediction when the latent factors are used as predictors; compared to its nonlinear competitors, SQL shows considerable advantages in interpretability, ease of use, and computation in high-dimensional settings. Finally, we apply SQL to high-dimensional gene expression data (20,263 genes from 801 subjects), where the method identifies latent factors predictive of five cancer types. The SQL package is available at https://github.com/jbodelet/SQL.
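The abstract's central computational claim is that, after approximating the latent variables by quantiles of a reference distribution, estimation reduces to a convex assignment matching problem. The sketch below illustrates that idea for a single latent factor on simulated data, alternating a sieve-type least-squares fit of the feature curves with a linear assignment step over a fixed quantile grid. It is a minimal, hypothetical illustration: the polynomial basis, the alternating scheme, and all variable names are assumptions for exposition and do not reproduce the API or algorithmic details of the actual SQL package.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, p = 200, 50

# Simulated data: one latent factor z with nonlinear feature curves
# (illustrative stand-in for a nonlinear factor model).
z = rng.uniform(size=n)
X = np.column_stack([np.cos(np.pi * (j % 3 + 1) * z) for j in range(p)])
X += 0.1 * rng.normal(size=(n, p))

# Fixed quantile grid of the assumed latent distribution, Uniform(0, 1).
grid = (np.arange(1, n + 1) - 0.5) / n


def basis(t, degree=5):
    """Polynomial sieve basis (a simple stand-in for the splines or
    penalized smoothers the paper refers to)."""
    return np.vander(t, degree + 1)


z_hat = grid[rng.permutation(n)]  # random initial assignment of quantile levels
for _ in range(10):
    # Step 1: given the current latent estimates, fit each feature
    # curve by least squares on the sieve basis.
    B = basis(z_hat)
    coef, *_ = np.linalg.lstsq(B, X, rcond=None)

    # Step 2: evaluate the fitted curves on the quantile grid and
    # reassign subjects to grid levels by solving a linear assignment
    # problem (the "convex assignment matching" step).
    G = basis(grid) @ coef                              # n_grid x p fitted values
    cost = ((X[:, None, :] - G[None, :, :]) ** 2).sum(axis=2)
    _, cols = linear_sum_assignment(cost)
    z_hat = grid[cols]
```

In this toy version, each subject is matched to exactly one quantile level, so the assignment step is a standard bipartite matching solved exactly; a production implementation such as the SQL package presumably handles multiple latent factors and penalized bases, which this sketch does not attempt.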