We present an efficient tensor-network-based approach for simulating large-scale quantum circuits, demonstrated using Quantum Support Vector Machines (QSVMs). Our method reduces the runtime's exponential growth in the number of qubits to near-quadratic scaling in practical scenarios. Traditional state-vector simulations become computationally infeasible beyond approximately 50 qubits; in contrast, our simulator handles QSVMs with up to 784 qubits, completing simulations within seconds on a single high-performance GPU. Furthermore, by employing the Message Passing Interface (MPI) in multi-GPU environments, the approach exhibits strong linear scalability, reducing computation time even as the dataset size increases. We validate the framework on the MNIST and Fashion-MNIST datasets, achieving successful multiclass classification and highlighting the potential of QSVMs for high-dimensional data analysis. By integrating tensor-network techniques with high-performance computing resources, this work demonstrates both the feasibility and scalability of large-qubit quantum machine learning models, providing a valuable validation tool in the emerging Quantum-HPC ecosystem.
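To make the QSVM workflow concrete, the sketch below estimates a single kernel entry K(x_i, x_j) = |⟨0| U†(x_j) U(x_i) |0⟩|² with a tensor-network (matrix-product-state) backend. This is a minimal illustration only: it uses Qiskit Aer's MPS simulator as a stand-in for the paper's GPU/MPI tensor-network stack, and the ZZFeatureMap encoding, qubit count, and shot count are assumptions, not the authors' configuration.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# estimate one fidelity-kernel entry for a QSVM using a tensor-network
# (matrix-product-state) simulator backend.
import numpy as np
from qiskit import transpile
from qiskit.circuit.library import ZZFeatureMap
from qiskit_aer import AerSimulator

n_qubits = 8                                        # illustrative size; the paper scales to 784 qubits
rng = np.random.default_rng(0)
x_i = rng.uniform(0, np.pi, n_qubits)               # two example feature vectors
x_j = rng.uniform(0, np.pi, n_qubits)

fmap = ZZFeatureMap(feature_dimension=n_qubits, reps=1)   # assumed feature map
u_i = fmap.assign_parameters(x_i)                   # U(x_i)
u_j_dag = fmap.assign_parameters(x_j).inverse()     # U†(x_j)

# U†(x_j) U(x_i) |0...0>; the probability of measuring all zeros is K(x_i, x_j).
circuit = u_i.compose(u_j_dag)
circuit.measure_all()

backend = AerSimulator(method="matrix_product_state")     # tensor-network (MPS) backend
shots = 4096
counts = backend.run(transpile(circuit, backend), shots=shots).result().get_counts()
kernel_entry = counts.get("0" * n_qubits, 0) / shots
print(f"K(x_i, x_j) ≈ {kernel_entry:.4f}")
```

In a full QSVM, this estimate would be repeated over all training-sample pairs to build the kernel matrix passed to a classical SVM solver; distributing those independent kernel evaluations across GPUs via MPI is what the abstract's multi-GPU scaling refers to.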