Critical goals of scientific computing are to increase scientific rigor, reproducibility, and transparency while keeping up with ever-increasing computational demands. This work presents an integrated framework well-suited for data processing and analysis spanning individual, on-premises, and cloud environments. This framework leverages three well-established DevOps tools: 1) Git repositories linked to 2) CI/CD engines operating on 3) containers. It supports the full life-cycle of scientific data workflows with minimal friction between stages, including solutions for researchers who generate data. This is achieved by leveraging a single container that supports local, interactive user sessions as well as deployment in HPC or Kubernetes clusters. Combined with Git repositories integrated with CI/CD, this approach enables decentralized data pipelines across multiple, arbitrary computational environments. This framework has been successfully deployed and validated within our research group, spanning experimental acquisition systems and computational clusters, with open-source, purpose-built GitLab CI/CD executors for Slurm and Google Kubernetes Engine Autopilot. Taken together, this framework can increase the rigor, reproducibility, and transparency of compute-dependent scientific research.
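To make the Git + CI/CD + container pattern concrete, a minimal GitLab CI configuration of the kind this framework relies on might look as follows. This is an illustrative sketch only: the image name, job names, tag names (`slurm`, `gke-autopilot`), and script commands are hypothetical placeholders, not the project's actual configuration.

```yaml
# .gitlab-ci.yml — hypothetical sketch of a container-based pipeline
# dispatched to different executors via runner tags.
image: registry.example.org/lab/analysis-container:latest  # single shared container (placeholder name)

stages:
  - process
  - analyze

process_data:
  stage: process
  tags: [slurm]            # routed to a Slurm-backed CI/CD executor
  script:
    - ./run_processing.sh  # placeholder command inside the container

analyze_results:
  stage: analyze
  tags: [gke-autopilot]    # routed to a GKE Autopilot-backed executor
  script:
    - ./run_analysis.sh    # placeholder command inside the container
```

Because every job runs in the same container image, the local interactive environment, the HPC cluster, and the cloud cluster execute identical software stacks; only the runner tag changes where the work lands.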