Principal component analysis (PCA) is often used together with cluster analysis to analyze multivariate data, and the clustering results depend on the number of principal components (PCs) used. It is therefore important to determine the number of significant PCs to extract from a data set. Here we use a variational Bayesian version of classical PCA to develop a new method for estimating the number of significant PCs in contexts where the number of samples is similar to or greater than the number of features. This eliminates guesswork and potential bias in manually choosing the number of PCs and avoids overestimating variance by filtering out noise. The framework can be applied to datasets of different shapes (numbers of rows and columns), different data types (binary, ordinal, categorical, continuous), and with noisy or missing data. It is therefore especially useful for data with arbitrary encodings and similar numbers of rows and columns, such as cultural, ecological, morphological, and behavioral datasets. We tested the method on both synthetic and empirical datasets and found that it may underestimate, but does not overestimate, the number of PCs for the synthetic data. A small number of components was found for each empirical dataset. These results suggest that the method is broadly applicable across the life sciences.
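As a minimal illustration of the underlying task, automatically selecting the number of significant PCs, the sketch below uses Horn's parallel analysis (a permutation-based stand-in, not the paper's variational Bayesian method): components are retained only when their eigenvalues exceed those obtained from column-permuted, noise-only copies of the data. The function name and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: counting significant PCs via Horn's parallel analysis,
# a simpler stand-in for the variational Bayesian approach described above.
import numpy as np

rng = np.random.default_rng(0)

def n_significant_pcs(X, n_perm=50, rng=rng):
    """Count PCs whose eigenvalues exceed the 95th percentile of eigenvalues
    from column-permuted (correlation-destroyed) copies of X."""
    Xc = X - X.mean(axis=0)                       # center each feature
    eig = np.linalg.svd(Xc, compute_uv=False) ** 2
    null = np.empty((n_perm, len(eig)))
    for i in range(n_perm):
        # Permute each column independently to break cross-feature structure
        Xp = np.column_stack([rng.permutation(col) for col in Xc.T])
        null[i] = np.linalg.svd(Xp, compute_uv=False) ** 2
    threshold = np.percentile(null, 95, axis=0)
    return int(np.sum(eig > threshold))

# Synthetic data: 3 true components in 200 samples x 20 features, plus noise.
signal = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 20)) * 3.0
X = signal + rng.normal(size=(200, 20))
print(n_significant_pcs(X))
```

Because permutation preserves each feature's marginal variance while destroying correlations, this criterion tends to be conservative, echoing the abstract's observation that the count may be underestimated but not overestimated.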