The paper presents a framework for distributed computation over a quantum network in which data are encoded into compact quantum states. For models within this framework, inference and gradient-descent training can be performed with exponentially less communication than their classical analogs, while incurring only relatively modest overhead.
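To make the communication scaling concrete, here is a standard amplitude-encoding count (an assumed, textbook construction; the paper's encodings may differ in detail):

```latex
% Amplitude encoding of a d-dimensional feature vector x (assumed standard construction):
\[
  |x\rangle \;=\; \frac{1}{\lVert x \rVert_2} \sum_{i=1}^{d} x_i \, |i\rangle
  \quad \text{on } \lceil \log_2 d \rceil \text{ qubits},
\]
% so exchanging the encoded state costs O(log d) qubits per round, whereas sending x
% itself costs Omega(d) classical bits -- the source of the exponential gap.
```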
The key insights are:
Even for simple distributed quantum circuits, there is an exponential quantum advantage in communication for the problems of estimating the loss and the gradients of the loss with respect to the parameters. This advantage also implies improved privacy of the user data and model parameters.
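A minimal sketch of what such a round could look like, assuming amplitude encoding, a single Pauli-generated gate, and the standard parameter-shift rule (this is an illustration, not the paper's protocol):

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's circuits): party A amplitude-encodes a
# 4-dimensional feature vector into 2 qubits and sends it; party B applies a parameterized
# gate U(theta) = exp(-i * theta/2 * P) with P a Pauli string, then measures an observable M.
# The loss <x| U(theta)^dag M U(theta) |x> and its gradient are estimated from measurement
# statistics, so only the 2-qubit state (not the raw features) crosses the network.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

P = np.kron(Z, X)          # Pauli-string generator; P @ P equals the identity
M = np.kron(Z, I2)         # observable measured by party B

def encode(x):
    """Amplitude-encode a real vector into a normalized statevector."""
    x = np.asarray(x, dtype=complex)
    return x / np.linalg.norm(x)

def U(theta):
    """exp(-i * theta/2 * P) in closed form, valid because P squares to the identity."""
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * P

def loss(theta, psi):
    out = U(theta) @ psi
    return float(np.real(out.conj() @ M @ out))

psi = encode([0.3, -1.2, 0.8, 0.5])
theta = 0.7
# Parameter-shift rule: exact gradient for gates generated by a Pauli string.
grad = 0.5 * (loss(theta + np.pi / 2, psi) - loss(theta - np.pi / 2, psi))
print(loss(theta, psi), grad)
```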
A class of models that can efficiently approximate certain graph neural networks is studied. These models maintain the exponential communication advantage and achieve performance comparable to standard classical models on common node and graph classification benchmarks.
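For concreteness, one common form of graph neural network layer that such models would need to match is the Kipf–Welling graph convolution (an assumed point of reference; the exact target class in the paper may differ):

```latex
% Standard graph-convolution layer (assumed reference architecture): node features H^{(l)}
% are mixed along the graph by a normalized adjacency matrix and a trainable weight matrix.
\[
  H^{(l+1)} \;=\; \sigma\!\left( \hat{A}\, H^{(l)} W^{(l)} \right),
  \qquad
  \hat{A} \;=\; \tilde{D}^{-1/2} (A + I)\, \tilde{D}^{-1/2},
\]
% where A is the adjacency matrix, \tilde{D} the degree matrix of A + I,
% and \sigma a pointwise nonlinearity.
```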
For certain distributed circuits, there is an exponential advantage in communication for the entire training process, not just for a single round of gradient estimation. This includes circuits for fine-tuning using pre-trained features.
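A rough accounting of why the per-round gap can survive the full run (hedged: constants and precision- or shot-dependent factors are suppressed, and the precise bounds are those in the paper):

```latex
% Rough communication accounting over T gradient steps on d-dimensional data (assumed scaling):
\[
  C_{\text{quantum}} \;=\; O\!\big(T \cdot \operatorname{polylog}(d)\big)\ \text{qubits}
  \qquad \text{vs.} \qquad
  C_{\text{classical}} \;=\; \Omega\!\big(\operatorname{poly}(d)\big)\ \text{bits},
\]
% so the exponential per-round gap compounds over training rather than being amortized away.
```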
Interleaving multiple unitaries that encode nonlinear features of the data allows expressivity to grow exponentially with depth and, in some settings, yields universal function approximation. This contrasts with the common view that quantum neural network outputs are restricted to being linear in a single encoded data state.
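A schematic of the interleaved construction, in the spirit of data re-uploading (illustrative notation, not the paper's):

```latex
% Trainable unitaries W_l(theta_l) interleaved with data-dependent encoding unitaries S_l(x),
% whose generators can be nonlinear functions of x (illustrative notation):
\[
  f_\theta(x) \;=\;
  \Big\langle \psi_0 \Big|\,
    \Big( \textstyle\prod_{l=1}^{L} W_l(\theta_l)\, S_l(x) \Big)^{\!\dagger}
    M\,
    \Big( \textstyle\prod_{l=1}^{L} W_l(\theta_l)\, S_l(x) \Big)
  \Big|\, \psi_0 \Big\rangle .
\]
% Because each S_l(x) re-injects (possibly nonlinear) features of x, f_theta is no longer
% linear in a single encoded state, and the accessible function class grows with depth L.
```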
The results form a promising foundation for distributed machine learning over quantum networks, with potential applications in settings where communication constraints are a bottleneck, and where privacy of data and model parameters is desirable.
by Dar Gilboa, ... at arxiv.org, 09-30-2024
https://arxiv.org/pdf/2310.07136.pdf