This expository note shows that the learning parities with noise (LPN) assumption is robust to weak dependencies among the noise bits within small batches of samples. This provides a partial converse to the linearization technique of [AG11].
The main result, Theorem 1.4, shows that for any constant batch size k and any δ-Santha-Vazirani source p over the batch noise (a source in which each noise bit, conditioned on the preceding bits in its batch, equals 1 with probability in [1/2 - δ, 1/2 + δ]), the standard LPN problem with noise rate 1/2 - O(kδ) is polynomial-time reducible to learning parities under the batch noise distribution p. This gives a robustness guarantee for the LPN assumption in the face of small dependencies in the noise.
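To make the setup concrete, the following is a minimal Python sketch of LPN samples whose batch noise comes from a δ-Santha-Vazirani source. It is purely illustrative: the function names and the particular adversarial bias rule (biasing each bit according to the parity of the previous bits) are our own choices for demonstration, not taken from the paper; any rule keeping each conditional bias within δ of 1/2 would be an admissible SV source.

```python
import random

def sv_batch_noise(k, delta, rng):
    """Sample a k-bit noise vector from an illustrative delta-Santha-Vazirani
    source: each bit, conditioned on the previous bits, is 1 with probability
    in [1/2 - delta, 1/2 + delta]. Here the bias depends (arbitrarily, for
    illustration) on the parity of the bits sampled so far."""
    bits = []
    for _ in range(k):
        bias = 0.5 + delta if sum(bits) % 2 == 0 else 0.5 - delta
        bits.append(1 if rng.random() < bias else 0)
    return bits

def lpn_batch(secret, k, delta, rng):
    """Produce one batch of k LPN samples (a_i, <a_i, s> + e_i mod 2) over
    F_2, where the noise bits (e_1, ..., e_k) are drawn jointly from the
    SV source above rather than independently."""
    n = len(secret)
    noise = sv_batch_noise(k, delta, rng)
    batch = []
    for e in noise:
        a = [rng.randrange(2) for _ in range(n)]
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % 2
        batch.append((a, b))
    return batch
```

The theorem says that an algorithm learning the secret from such batches can be converted, in polynomial time, into one solving standard i.i.d.-noise LPN at noise rate 1/2 - O(kδ).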
Key insights extracted from arxiv.org, by Noah Golowic..., 04-18-2024. https://arxiv.org/pdf/2404.11325.pdf