Efficiently handling the large volumes of data generated by scientific applications is crucial. The storage technologies available to scientific applications have grown increasingly heterogeneous, spanning burst buffers, local temporary block storage, managed cloud parallel file systems (PFS), and non-POSIX object stores. However, scientific applications designed for traditional HPC systems cannot easily exploit these storage systems due to cost, throughput, and programming-model challenges. We present iFast, a new library-level approach to transparently accelerating MPI-IO-based scientific applications. It decouples application I/O, data caching, and data storage to support heterogeneous storage models. The design of iFast places a strong emphasis on deployability. It is highly general, with MPI as its only core dependency, allowing users to run unmodified MPI-based applications on unmodified MPI implementations, even proprietary ones such as Intel MPI and Cray MPICH. Our approach supports a wide range of networked storage, including traditional PFS, ordinary NFS, and S3-based cloud storage. Unlike previous approaches, iFast ensures crash consistency even across compute nodes. We demonstrate iFast on a cloud HPC platform, a small local cluster, and a hybrid of the two to show its generality. Our results show that iFast reduces end-to-end execution time by 13-26% for three popular scientific applications on the cloud. It also outperforms SymphonyFS, a state-of-the-art filesystem-based approach with similar goals but without crash consistency, by 12-23%.
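For context, the sketch below shows the kind of unmodified MPI-IO write phase that a library-level approach such as iFast could intercept transparently. It is a minimal illustration only: the file name, data sizes, and the assumption that interposition happens at link or preload time are not details taken from the paper.

```c
/* Hypothetical checkpoint-style write phase of an MPI-IO application.
 * The application code stays unmodified; a library-level layer is assumed
 * to sit between these MPI-IO calls and the underlying storage backend. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 20;                      /* 1 Mi doubles per rank (illustrative) */
    double *buf = malloc(count * sizeof(double));
    for (int i = 0; i < count; i++)
        buf[i] = (double)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes its contiguous slice at a rank-based offset
     * using a collective MPI-IO call. */
    MPI_Offset offset = (MPI_Offset)rank * count * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, count, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```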