As the quantum computing community gravitates towards understanding the practical benefits of quantum computers, having a clear definition and evaluation scheme for assessing practical quantum advantage in the context of specific applications is paramount. Generative modeling, for example, is a widely accepted natural use case for quantum computers, yet it has lacked a concrete approach for quantifying the success of quantum models over classical ones. In this work, we construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance. Using the sample-based approach proposed here, any generative model, from state-of-the-art classical generative models such as GANs to quantum models such as Quantum Circuit Born Machines, can be evaluated on equal footing within a concrete, well-defined framework. In contrast to other sample-based metrics for probing practical generalization, we leverage constrained optimization problems (e.g., cardinality-constrained problems) and use these discrete datasets to define specific metrics capable of unambiguously measuring both the quality of the samples and the model's ability to generalize, i.e., to generate data beyond the training set but still within the valid solution space. Additionally, our metrics can diagnose trainability issues such as mode collapse and overfitting, as we illustrate when comparing GANs to quantum-inspired models built out of tensor networks. Our simulation results show that our quantum-inspired models achieve up to a $68\times$ enhancement in generating unseen unique and valid samples compared to GANs, and a ratio of 61:2 for generating samples of better quality than those observed in the training set. We foresee these metrics as valuable tools for rigorously defining practical quantum advantage in the domain of generative modeling.
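For concreteness, below is a minimal sketch of the kind of sample-based evaluation described above, assuming solutions are $n$-bit strings constrained to have exactly $k$ ones (a cardinality constraint). The function names `is_valid` and `generalization_metrics`, and the metric labels `validity` and `unseen_unique_rate`, are illustrative placeholders, not the paper's exact metric definitions:

```python
# Illustrative sketch of sample-based generalization metrics for a
# generative model trained on a cardinality-constrained dataset.
# Assumption: valid solutions are bitstrings with exactly k ones;
# metric names here are illustrative, not the paper's definitions.

def is_valid(bitstring: str, k: int) -> bool:
    """A sample is valid if it satisfies the cardinality constraint."""
    return bitstring.count("1") == k

def generalization_metrics(samples, train_set, k):
    """Fraction of queries that are valid, and fraction that are
    unseen and unique (valid, deduplicated, absent from training)."""
    n_queries = len(samples)
    valid = [s for s in samples if is_valid(s, k)]
    unseen_unique = set(valid) - set(train_set)
    return {
        "validity": len(valid) / n_queries,
        "unseen_unique_rate": len(unseen_unique) / n_queries,
    }

# Usage: query two models for the same number of samples and compare.
train_set = {"1100", "1010", "0110"}          # toy training data, k = 2
samples = ["1001", "1100", "1001", "1110"]    # hypothetical model queries
print(generalization_metrics(samples, train_set, k=2))
# {'validity': 0.75, 'unseen_unique_rate': 0.25}
```

Comparing models by such ratios (rather than by likelihoods, which are intractable for many implicit models) is what allows classical, quantum, and quantum-inspired generators to be scored on the same footing.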