As quantum computing advances toward early fault-tolerant machines, testing and verification of quantum programs become urgent but costly, since each execution consumes scarce hardware resources. Unlike in classical software testing, every measurement must be carefully budgeted. This paper develops a unified framework for reasoning about how many measurements are required to verify quantum programs. The goal is to connect theoretical error bounds with concrete test strategies and to extend the analysis from individual tests to full program-level verification. We analyze the relationship between error probability, fidelity, trace distance, and the quantum Chernoff bound to establish fundamental shot-count limits. These foundations are applied to three representative testing methods: the inverse test, the swap test, and the chi-square test. Both idealized and noisy devices are considered. We also introduce a program-level budgeting approach that allocates verification effort across multiple subroutines. The inverse test is the most measurement-efficient, the swap test requires about twice as many shots, and the chi-square test is easiest to implement but often needs orders of magnitude more measurements. In the presence of noise, calibrated baselines may increase measurement requirements beyond theoretical estimates. At the program level, distributing a global fidelity target across many fine-grained functions can cause verification costs to grow rapidly, whereas coarser decompositions or weighted allocations remain more practical. The framework clarifies trade-offs among different testing strategies, noise handling, and program decomposition. It provides practical guidance for budgeting measurement shots in quantum program testing, helping practitioners balance rigour against cost when designing verification strategies.
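As a minimal sketch of the kind of shot-count bound this analysis builds on (using the standard quantum Chernoff exponent; the symbols $\rho$, $\sigma$, $n$, and $\delta$ are illustrative and not necessarily the paper's notation): to distinguish two candidate states $\rho$ and $\sigma$ with error probability at most $\delta$, the asymptotic error decays as
\[
P_{\mathrm{err}} \sim e^{-n\,\xi_{\mathrm{QCB}}},
\qquad
\xi_{\mathrm{QCB}} = -\ln \min_{0 \le s \le 1} \operatorname{Tr}\!\left(\rho^{s}\sigma^{1-s}\right),
\qquad
n \gtrsim \frac{\ln(1/\delta)}{\xi_{\mathrm{QCB}}},
\]
so the required number of measurements grows logarithmically in the inverse error probability and inversely in the distinguishability of the two states.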