Latent feature representation methods play an important role in the dimension reduction and statistical modeling of high-dimensional complex data objects. However, existing approaches to assessing the quality of these methods often rely on aggregate statistics that reflect the central tendency of the distribution of information losses, such as the average or total loss, which can mask variation across individual observations. We argue that controlling average performance is insufficient to guarantee that statistical analysis in the latent space reflects the data-generating process, and we instead advocate controlling the worst-case generalization error, or a tail quantile of the generalization error distribution. Our framework, CLaRe (Compact near-lossless Latent Representations), introduces a systematic way to balance compactness of the representation with preservation of information when assessing and selecting among latent feature representation methods. To facilitate the application of the CLaRe framework, we have developed GLaRe (Graphical Analysis of Latent Representations), an open-source R package that implements the framework and provides graphical summaries of the full generalization error distribution. We demonstrate the utility of CLaRe through three case studies on high-dimensional datasets from diverse fields of application, applying the framework to select among principal component, wavelet, and autoencoder representations for each dataset. The case studies reveal that the optimal latent feature representation varies with dataset characteristics, emphasizing the importance of a flexible evaluation framework.
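To make the distinction between aggregate and per-observation error concrete, the following minimal R sketch computes the full distribution of per-observation reconstruction errors for a k-component PCA representation and summarizes its tail alongside its mean. This is an illustration only, not the GLaRe package API: the simulated data, the choice k = 5, and the relative-error definition are assumptions made for the example.

```r
# Illustrative sketch (not the GLaRe API): per-observation reconstruction
# error for a k-component PCA representation of simulated data.
set.seed(1)
X <- matrix(rnorm(200 * 50), nrow = 200, ncol = 50)  # 200 observations, 50 dims

k  <- 5                                    # number of latent features (assumed)
pc <- prcomp(X, center = TRUE, scale. = FALSE)

scores <- pc$x[, 1:k]                      # compact latent representation
Xhat   <- scores %*% t(pc$rotation[, 1:k]) # reconstruction in centered space
Xhat   <- sweep(Xhat, 2, pc$center, "+")   # add the column means back

# Per-observation relative reconstruction error (one value per observation),
# rather than a single aggregate loss:
err <- rowSums((X - Xhat)^2) / rowSums(sweep(X, 2, pc$center)^2)

mean(err)           # aggregate summary, which can mask poorly represented cases
quantile(err, 0.95) # a tail quantile of the generalization error distribution
max(err)            # worst-case generalization error
```

Comparing `mean(err)` with `quantile(err, 0.95)` and `max(err)` shows how an acceptable average loss can coexist with individual observations that are represented poorly, which is precisely the gap between central-tendency and tail-based criteria that the CLaRe framework is designed to expose.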