The paper studies distributed learning (DL) in the presence of stragglers and proposes a novel DL method based on 1-bit gradient coding (1-bit GC-DL) to reduce the communication burden. The method distributes the training data redundantly across workers, computes gradients locally, quantizes them into 1-bit vectors, and transmits the quantized vectors to peers. Theoretical convergence guarantees are provided for both convex and non-convex loss functions, and empirical results show improved performance over baseline methods.
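As a rough illustration of the 1-bit quantization step, the sketch below assumes sign-based quantization with a single per-vector scaling factor (a common construction in 1-bit gradient compression, e.g. signSGD-style schemes); the function names and the choice of scale are illustrative assumptions, not the paper's exact encoding, and the coding/redundancy layer of 1-bit GC-DL is omitted.

```python
import numpy as np

def one_bit_quantize(grad: np.ndarray):
    """Quantize a gradient to 1 bit per entry (its sign), plus one
    scalar scale so the reconstruction preserves average magnitude.
    NOTE: illustrative sketch, not the paper's exact scheme."""
    scale = np.mean(np.abs(grad))   # single float sent alongside the bits
    bits = np.signbit(grad)         # True where grad < 0; 1 bit per entry
    return bits, scale

def one_bit_dequantize(bits: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate gradient from the sign bits and scale."""
    signs = np.where(bits, -1.0, 1.0)
    return scale * signs

# Each worker would compute a local gradient on its (redundantly
# assigned) data partition, quantize it, and send bits + scale to peers.
rng = np.random.default_rng(0)
g = rng.normal(size=8)
bits, s = one_bit_quantize(g)
g_hat = one_bit_dequantize(bits, s)
print(g)
print(g_hat)
```

Transmitting one bit per coordinate plus one scalar, instead of a full-precision vector, is what cuts the per-round communication cost by roughly a factor of 32 for float32 gradients.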