The paper addresses distributed learning (DL) in the presence of stragglers and proposes a novel method based on 1-bit gradient coding (1-bit GC-DL) to reduce the communication burden. Training data are distributed redundantly across devices; each device computes gradients on its local partitions, quantizes them into 1-bit vectors, and transmits the quantized vectors to its peers. Theoretical convergence guarantees are provided for both convex and non-convex loss functions, and empirical results show improved performance over baseline methods.
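To make the quantization step concrete, here is a minimal sketch of a generic sign-based 1-bit quantizer with a single scale factor per vector. The function names and the choice of scale (mean absolute value) are illustrative assumptions; the paper's actual quantization and gradient-coding rules may differ.

```python
import numpy as np

def one_bit_quantize(grad):
    """Compress a gradient to one sign bit per coordinate plus one scalar.

    Generic sign-based scheme (an assumption, not necessarily the paper's
    exact rule): the scale is the mean absolute value, so a peer can
    rebuild an approximation with roughly the right overall magnitude.
    """
    scale = float(np.mean(np.abs(grad)))                 # one float per vector
    signs = np.where(grad >= 0, 1, -1).astype(np.int8)   # strictly +/-1
    return signs, scale

def one_bit_dequantize(signs, scale):
    """Reconstruct the approximate gradient a receiving peer would use."""
    return scale * signs.astype(np.float64)

# Toy round trip: a worker quantizes its local gradient and a peer
# reconstructs it from the 1-bit message.
rng = np.random.default_rng(0)
g = rng.normal(size=8)                                   # stand-in local gradient
signs, scale = one_bit_quantize(g)
g_hat = one_bit_dequantize(signs, scale)
print(g)
print(g_hat)
```

Each coordinate costs one bit on the wire plus a single shared float, which is the source of the communication savings the summary describes; the redundant data placement is what lets peers tolerate missing (straggler) messages.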
Source: Chengxi Li, M... at arxiv.org, 03-25-2024
https://arxiv.org/pdf/2403.14716.pdf