The paper introduces VAE-QWGAN, a novel hybrid classical-quantum generative model that integrates a classical variational autoencoder (VAE) with a quantum Wasserstein GAN (WGAN). The key highlights are:
VAE-QWGAN combines the VAE decoder and QGAN generator into a single quantum model with shared parameters, utilizing the VAE's encoder for latent vector sampling during training.
To generate new data from the trained model at inference, input latent vectors are sampled from a Gaussian Mixture Model (GMM) learned on the training latent vectors. This enhances the diversity and quality of the generated images.
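The GMM-based inference step can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the mixture parameters here are hypothetical fixed values, whereas in the paper the GMM is fitted on the latent vectors produced by the trained encoder.

```python
import numpy as np

def sample_gmm_latents(weights, means, covs, n_samples, seed=None):
    """Draw latent vectors from a Gaussian Mixture Model.

    weights: (K,) mixture weights summing to 1
    means:   (K, d) component means
    covs:    (K, d, d) component covariance matrices
    """
    rng = np.random.default_rng(seed)
    K, d = means.shape
    # Pick a mixture component per sample, then draw from that
    # component's multivariate Gaussian.
    comps = rng.choice(K, size=n_samples, p=weights)
    z = np.empty((n_samples, d))
    for i, k in enumerate(comps):
        z[i] = rng.multivariate_normal(means[k], covs[k])
    return z

# Illustrative 2-component GMM over a 4-dimensional latent space.
weights = np.array([0.6, 0.4])
means = np.stack([np.zeros(4), np.ones(4)])
covs = np.stack([np.eye(4) * 0.1, np.eye(4) * 0.2])
latents = sample_gmm_latents(weights, means, covs, n_samples=8, seed=0)
print(latents.shape)  # (8, 4)
```

The sampled `latents` would then be fed to the trained generator; in practice one could also fit and sample the GMM with `sklearn.mixture.GaussianMixture`.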
The training process optimizes a combined loss function that balances the VAE reconstruction loss and the QGAN adversarial loss, with a weighting parameter to control the contribution of each.
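A rough sketch of such a combined objective is shown below. The exact loss terms and weighting in the paper may differ (e.g., the full VAE objective also includes a KL term); here `gamma`, the MSE reconstruction term, and the WGAN-style generator term are illustrative assumptions.

```python
import numpy as np

def combined_loss(x, x_recon, critic_fake, gamma=0.5):
    """Hypothetical combined objective: VAE reconstruction loss plus a
    weighted adversarial (WGAN generator) loss. `gamma` balances the
    two contributions, as described in the paper's training setup.
    """
    recon = np.mean((x - x_recon) ** 2)  # pixel-wise MSE reconstruction
    adv = -np.mean(critic_fake)          # WGAN generator term: maximize critic score
    return recon + gamma * adv

# Toy batch of 4 flattened "images" with 16 pixels each.
x = np.ones((4, 16))
x_recon = np.full((4, 16), 0.9)
critic_fake = np.array([0.2, -0.1, 0.3, 0.0])
loss = combined_loss(x, x_recon, critic_fake, gamma=0.5)
print(round(loss, 4))  # -0.04
```

Setting `gamma` to 0 recovers pure reconstruction training, while large `gamma` emphasizes the adversarial signal.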
Experimental evaluation on MNIST and Fashion-MNIST datasets shows that VAE-QWGAN outperforms the state-of-the-art PQWGAN in terms of Wasserstein distance, Jensen-Shannon Divergence, and Number of Distinct Bins, indicating improved quality and diversity of generated images.
The GMM-based inference further boosts the diversity of generated samples compared to using a simple Gaussian or uniform prior.
Overall, the VAE-QWGAN framework effectively leverages the strengths of classical and quantum generative models to address the challenges of high-dimensional image generation within the constraints of noisy intermediate-scale quantum (NISQ) devices.
Key insights extracted from arxiv.org, by Aaron Mark T..., 09-17-2024
https://arxiv.org/pdf/2409.10339.pdf