The paper addresses the problem of distributed learning (DL) in the presence of stragglers and proposes a novel DL method based on 1-bit gradient coding (1-bit GC-DL) to reduce the communication burden. The method distributes training data redundantly across workers; each worker computes its local gradients, quantizes them into 1-bit vectors, and transmits these to its peers. Theoretical convergence guarantees are provided for both convex and non-convex loss functions, and empirical results show improved performance over baseline methods.
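To make the compression step concrete, below is a minimal Python sketch of a 1-bit quantize/dequantize/aggregate cycle. The function names (`quantize_1bit`, `dequantize`, `aggregate`) and the scaling choice (one mean-absolute-value scale per gradient, as in 1-bit SGD-style schemes) are illustrative assumptions, not the paper's exact 1-bit GC-DL construction.

```python
# Sketch of 1-bit gradient compression, assuming a sign-plus-scale scheme.
# Not the paper's exact encoding; redundancy/straggler coding is omitted.
import numpy as np

def quantize_1bit(grad):
    """Compress a gradient to its sign vector plus a single scale factor."""
    scale = np.mean(np.abs(grad))          # one float sent alongside the bits
    signs = np.sign(grad).astype(np.int8)  # 1 bit of direction per coordinate
    return signs, scale

def dequantize(signs, scale):
    """Reconstruct an approximate gradient from the 1-bit message."""
    return scale * signs.astype(np.float64)

def aggregate(messages):
    """Average the dequantized gradients received from responding peers."""
    return np.mean([dequantize(s, a) for s, a in messages], axis=0)

# Toy usage: three workers each send a compressed gradient. In the paper's
# setting, redundant data placement would let the aggregation step tolerate
# stragglers; here all three workers are assumed to respond.
rng = np.random.default_rng(0)
local_grads = [rng.normal(size=8) for _ in range(3)]
msgs = [quantize_1bit(g) for g in local_grads]
print(aggregate(msgs))
```

The sign-plus-scale encoding cuts each transmitted gradient from 32 bits per coordinate to roughly 1 bit, which is the source of the communication savings the paper targets.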