Technological advances in the past decade, hardware and software alike, have made access to high-performance computing (HPC) easier than ever. We review these advances from a statistical computing perspective. Cloud computing makes access to supercomputers affordable. Deep learning software libraries make programming statistical algorithms easy and enable users to write code once and run it anywhere -- from a laptop to a workstation with multiple graphics processing units (GPUs) or a supercomputer in a cloud. Highlighting how these developments benefit statisticians, we review recent optimization algorithms that are useful for high-dimensional models and can harness the power of HPC. Code snippets are provided to demonstrate the ease of programming. We also provide an easy-to-use distributed matrix data structure suitable for HPC. Employing this data structure, we illustrate various statistical applications including large-scale positron emission tomography and $\ell_1$-regularized Cox regression. Our examples easily scale up to an 8-GPU workstation and a 720-CPU-core cluster in a cloud. As a case in point, we analyze the onset of type-2 diabetes from the UK Biobank with 200,000 subjects and about 500,000 single nucleotide polymorphisms using the HPC $\ell_1$-regularized Cox regression. Fitting this half-million-variate model takes less than 45 minutes and reconfirms known associations. To our knowledge, this is the first demonstration of the feasibility of penalized regression of survival outcomes at this scale.