Core Concepts
Parallel quantum annealing solves multiple independent problems in a single annealing cycle, offering speed-ups but potentially sacrificing individual solution quality.
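The core idea can be sketched in plain Python: independent QUBOs are merged into one larger QUBO over disjoint variable blocks, so a single anneal optimizes all of them at once. This is an illustrative sketch, not the paper's code; `combine_qubos` and the toy problems are hypothetical.

```python
def combine_qubos(qubos):
    """Merge QUBO dicts {(i, j): weight} into one block-diagonal QUBO.
    Variable indices are offset so sub-problems share no couplers."""
    combined, offset = {}, 0
    for qubo in qubos:
        size = max(max(i, j) for i, j in qubo) + 1
        for (i, j), w in qubo.items():
            combined[(i + offset, j + offset)] = w
        offset += size
    return combined

# Two toy 2-variable QUBOs (each rewards setting one bit, penalizes both)
q1 = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
q2 = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
merged = combine_qubos([q1, q2])
# merged uses variables 0-1 for the first problem and 2-3 for the second
```

Because the blocks share no quadratic terms, the ground state of the merged QUBO is exactly the concatenation of the two individual ground states.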
Abstract
The study explores the potential of parallel quantum annealing using the DWaveSampler and LeapHybridSampler. It investigates how problem size, normalization techniques, and custom embeddings affect solution quality, constraint violations, time-to-solution, and solution variation.
I. Introduction
Traditional vs. parallel quantum annealing methods.
Objective: Optimize qubit utilization for multiple problems simultaneously.
II. Background
Quantum Annealing and QUBO formulation.
D-Wave Systems' role in solving optimization problems.
Benefits of parallel annealing over sequential approaches.
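The QUBO formulation mentioned above assigns each binary vector x an energy x^T Q x, which the annealer minimizes. A minimal stdlib sketch of evaluating that objective (the toy QUBO is illustrative, not from the paper):

```python
def qubo_energy(qubo, x):
    """Energy of binary assignment x under a QUBO given as
    {(i, j): weight}: sum of weight * x[i] * x[j]."""
    return sum(w * x[i] * x[j] for (i, j), w in qubo.items())

# Toy QUBO: each bit alone lowers energy, setting both is penalized
qubo = {(0, 0): -1, (1, 1): -1, (0, 1): 3}
print(qubo_energy(qubo, [1, 0]))  # -1
print(qubo_energy(qubo, [1, 1]))  # 1
```

A sampler (quantum or classical) searches over assignments for the one with minimal energy; here either single-bit assignment is optimal.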
III. Proposed Methodology
Experiments with DWaveSampler: Default Embedding and Custom Embedding.
Use cases: ALM and TFO problems.
Comparison of solver performance using metrics such as SQV, time-to-solution (TTS), and constraint violations.
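When several problems share one anneal, the returned sample must be split back into per-problem assignments before any per-problem metric is computed. A hedged sketch, assuming each problem occupies a contiguous block of variables (`split_sample` is a hypothetical helper, not from the paper):

```python
def split_sample(sample, sizes):
    """Split a flat bit list from a combined anneal into per-problem
    chunks, given each sub-problem's variable count."""
    chunks, pos = [], 0
    for n in sizes:
        chunks.append(sample[pos:pos + n])
        pos += n
    return chunks

# A 5-bit combined sample covering a 2-variable and a 3-variable problem
print(split_sample([1, 0, 0, 1, 1], [2, 3]))  # [[1, 0], [0, 1, 1]]
```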
IV. Normalization
Importance of normalization to balance problem magnitudes.
Techniques explored: square-root, logarithmic, and scalar scaling operations.
Impact on solution quality and error rates.
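The normalization step can be sketched as rescaling each problem's QUBO weights so no single problem's magnitudes dominate the combined energy landscape. The function below is an illustrative stand-in for the named techniques, not the paper's implementation; exact formulas are assumptions.

```python
import math

def normalize(qubo, method="scalar"):
    """Rescale QUBO weights {(i, j): w} to balance problem magnitudes.
    Signs are preserved; only magnitudes are compressed or scaled."""
    if method == "scalar":  # divide by the largest absolute weight
        m = max(abs(w) for w in qubo.values())
        return {k: w / m for k, w in qubo.items()}
    if method == "sqrt":    # square root compresses large magnitudes
        return {k: math.copysign(math.sqrt(abs(w)), w) for k, w in qubo.items()}
    if method == "log":     # log1p compresses even more aggressively
        return {k: math.copysign(math.log1p(abs(w)), w) for k, w in qubo.items()}
    raise ValueError(f"unknown method: {method}")

q = {(0, 0): -100, (0, 1): 50}
print(normalize(q, "scalar"))  # {(0, 0): -1.0, (0, 1): 0.5}
```

Over-aggressive compression can blur the distinction between strong constraints and soft objectives, which is one way normalization affects the violation rate.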
V. Results and Discussions
DWaveSampler (Default Embedding)
Effectiveness of parallel processing with varying problem sizes.
Custom embedding's impact on solution quality and computational efficiency.
LeapHybridSampler (Default Embedding)
Superior performance in optimizing solutions across different problem sizes compared to non-parallel runs.
Stats
Parallel quantum annealing aims to optimize the utilization of available qubits on a quantum topology by addressing multiple independent problems in a single annealing cycle.
The Time-to-Solution (TTS) metric indicates a substantial speed-up compared to traditional quantum annealing methods when solving multiple NP-hard problems simultaneously.
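TTS is conventionally defined as the expected runtime to reach the target solution at least once with a given confidence, from the per-anneal success probability. A sketch of that standard formula (the paper's exact parameter choices are not shown here, so the defaults are assumptions):

```python
import math

def time_to_solution(t_anneal, p_success, p_target=0.99):
    """TTS = t_anneal * ln(1 - p_target) / ln(1 - p_success):
    anneal time scaled by the repetitions needed to hit the target
    solution with probability p_target."""
    return t_anneal * math.log(1 - p_target) / math.log(1 - p_success)

# e.g. a 20 us anneal succeeding 10% of the time needs ~44 repetitions
tts = time_to_solution(20e-6, 0.10)
```

Under parallel annealing, one anneal attempts several problems at once, so the effective per-problem TTS shrinks roughly in proportion to the number of co-embedded problems, provided the per-problem success probability does not degrade too much.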