Quantum Computing (QC) promises computational speedup over classical computing for certain complex problems. However, current and near-term quantum computers are noisy. Quantum software testing (for gaining confidence in quantum software's correctness) is inevitably affected by noise, to the extent that it is impossible to know whether a test case failed due to noise or a real fault. Existing testing techniques test quantum programs without considering noise, i.e., by executing tests on ideal quantum computer simulators. Consequently, they are not directly applicable to testing quantum software on real QC hardware or noisy simulators. To this end, we propose a noise-aware approach (named QOIN) to alleviate the effect of noise on the test results of quantum programs. QOIN employs machine learning techniques (e.g., transfer learning) to learn the noise effect of a quantum computer and filter it from a quantum program's outputs. These filtered outputs are then used as input for test case assessments (determining whether a test case execution passes or fails against a test oracle). We evaluated QOIN on 23 noise models from IBM with nine real-world quantum programs and 1000 artificial quantum programs. We also generated faulty versions of these programs to check whether failing test case executions can be identified under noise. Results show that QOIN can reduce the noise effect by more than $80\%$. To assess QOIN's effectiveness for quantum software testing, we used an existing test oracle. The results showed that the F1-score of the test oracle improved on average by $82\%$ for six real-world programs and by $75\%$ for 800 artificial programs, demonstrating that QOIN can effectively learn noise patterns and enable noise-aware quantum software testing.
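To illustrate the general idea of filtering noise from program outputs before applying a test oracle, the following is a minimal sketch under strong simplifying assumptions: a toy linear noise channel and a least-squares filter stand in for the real backend noise and the transfer-learning model described above, and the oracle, tolerance, and example program are hypothetical, not taken from QOIN itself.

```python
import numpy as np

# Hypothetical sketch (not QOIN's implementation): learn a map from noisy
# output distributions to ideal ones on programs with known ideal outputs,
# then filter new outputs before running the test oracle.

rng = np.random.default_rng(0)
n_outcomes = 4  # e.g., measurement outcomes of a 2-qubit program

# Training data: pairs of (ideal, noisy) probability vectors. In practice
# these would come from an ideal simulator and a noisy backend or simulator.
ideal = rng.dirichlet(np.ones(n_outcomes), size=200)
noise_channel = 0.85 * np.eye(n_outcomes) + 0.15 / n_outcomes  # toy noise model
noisy = ideal @ noise_channel.T

# Learn a linear "noise filter" W such that noisy @ W ~ ideal (least squares).
W, *_ = np.linalg.lstsq(noisy, ideal, rcond=None)

def filter_output(noisy_dist: np.ndarray) -> np.ndarray:
    """Apply the learned filter and renormalize to a valid distribution."""
    filtered = np.clip(noisy_dist @ W, 0.0, None)
    return filtered / filtered.sum()

def oracle_passes(observed: np.ndarray, expected: np.ndarray, tol: float = 0.05) -> bool:
    """Toy test oracle: pass if the total-variation distance is below a threshold."""
    return 0.5 * np.abs(observed - expected).sum() < tol

# Test time: a program whose expected (ideal) output distribution is known.
expected = np.array([0.5, 0.0, 0.0, 0.5])      # e.g., a Bell-state program
noisy_obs = expected @ noise_channel.T         # what the noisy backend returns

print(oracle_passes(noisy_obs, expected))                  # fails due to noise
print(oracle_passes(filter_output(noisy_obs), expected))   # passes after filtering
```

In this sketch, the correct program fails the oracle on the raw noisy output but passes once the learned filter is applied, which is the behavior the noise-aware test assessment aims for; the paper's approach replaces the linear filter with a learned model of the specific backend's noise.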