Pseudorandom number generators (PRNGs) are ubiquitous in stochastic simulations and machine learning (ML), where they drive sampling, parameter initialization, regularization, and data shuffling. Despite this ubiquity, the potential impact of PRNG statistical quality on computational results remains underexplored. In this study, we investigate whether differences in PRNG quality, as measured by standard statistical test suites, can influence outcomes in representative stochastic applications. Seven PRNGs were evaluated, ranging from low-quality linear congruential generators (LCGs) with known statistical deficiencies to high-quality generators such as Mersenne Twister, PCG, and Philox. We applied these PRNGs to four distinct tasks: an epidemiological agent-based model (ABM), two independent from-scratch MNIST classification implementations (Python/NumPy and C++), and a reinforcement learning (RL) CartPole environment. Each experiment was repeated 30 times per generator using fixed seeds to ensure reproducibility, and outputs were compared using appropriate statistical analyses. Results show that very poor statistical quality, as in the "bad" LCG that fails 125 TestU01 Crush tests, produces significant deviations in ABM epidemic dynamics, reduces MNIST classification accuracy, and severely degrades RL performance. In contrast, the mid- and good-quality LCGs, despite failing a limited number of Crush or BigCrush tests, performed comparably to top-tier PRNGs in most tasks; the RL experiment was the primary exception, where performance scaled with statistical quality. Our findings indicate that, once a generator meets a sufficient statistical robustness threshold, its family or design has negligible impact on outcomes for most workloads, allowing selection to be guided by performance and implementation considerations. However, the use of low-quality PRNGs in sensitive stochastic computations can introduce substantial and systematic errors.
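To make the experimental setup concrete, the sketch below illustrates the two ingredients the abstract refers to: a textbook linear congruential generator of the form x_{n+1} = (a*x_n + c) mod m, and a fixed-seed repetition loop analogous to the 30-trials-per-generator protocol. The LCG constants, the Monte Carlo stand-in task, and the function names are illustrative assumptions, not the paper's actual code or workloads.

```python
import statistics

class LCG:
    """Minimal linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    Constants are illustrative; the paper's generators differ."""
    def __init__(self, seed, a=1103515245, c=12345, m=2**31):
        self.a, self.c, self.m = a, c, m
        self.state = seed % m

    def random(self):
        # Advance the state and return a uniform float in [0, 1).
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m


def monte_carlo_pi(rng, n=100_000):
    """Stand-in stochastic task (not one of the paper's workloads):
    estimate pi by sampling points in the unit square."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n


def run_trials(make_rng, n_trials=30, base_seed=42):
    """Repeat the task with a fixed seed per trial, mirroring the
    30-repetitions-per-generator protocol described in the abstract."""
    return [monte_carlo_pi(make_rng(base_seed + t)) for t in range(n_trials)]


if __name__ == "__main__":
    estimates = run_trials(LCG)
    print(f"mean={statistics.mean(estimates):.4f}  "
          f"stdev={statistics.stdev(estimates):.4f}")
```

In the study itself, the per-trial outputs are task-specific quantities (epidemic curves, test accuracy, episode returns) rather than a scalar estimate, and the same seeding scheme is applied to each of the seven generators so that distributions can be compared across PRNGs with standard statistical tests.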