Core Concepts
Efficiently accelerating federated learning on resource-constrained edge devices through adaptive model partitioning and bandwidth allocation.
Abstract
Introduction to Edge AI:
Traditional AI training in centralized cloud environments faces challenges like high communication costs and privacy risks.
Edge AI emphasizes training models on edge devices closer to data sources.
Federated and Split Learning:
Federated Learning (FL) enables collaborative model training while preserving user privacy, but training a full model can exceed the capacity of resource-constrained devices.
Split learning addresses this by partitioning the model so that low-resource devices train only a portion of it.
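The core idea of split learning can be sketched in a few lines: cut the model at a chosen layer, run the early layers on the device, and hand the intermediate activation to the server for the rest. This is an illustrative toy (scalar "layers", a hypothetical cut point), not the paper's implementation:

```python
# Minimal sketch of split learning's partitioning idea (illustrative only).

def make_layer(w):
    # Toy "layer": scales its input by w; stands in for a real NN layer.
    return lambda x: w * x

model = [make_layer(w) for w in (2.0, 3.0, 0.5, 4.0)]  # 4-layer toy model

def run(layers, x):
    for layer in layers:
        x = layer(x)
    return x

split = 2  # hypothetical cut point: device keeps layers 0 and 1
device_part, server_part = model[:split], model[split:]

activation = run(device_part, 1.0)      # computed on the edge device
output = run(server_part, activation)   # activation "uploaded", finished server-side

# Partitioning changes where computation happens, not the result.
assert output == run(model, 1.0)
print(activation, output)  # 6.0 12.0
```

The device never ships its raw data, only the cut-layer activation, which is what lets constrained hardware participate in training a model it could not hold in full.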
Parallel Split Learning:
Combines the benefits of FL and split learning, but existing approaches neglect the resource heterogeneity of edge devices.
EdgeSplit Framework:
Proposes adaptive model partitioning for heterogeneous edge devices, optimizing resource utilization and task orchestration.
Model Splitting and Bandwidth Allocation:
Formulates a task scheduling problem that minimizes overall training time by jointly optimizing each device's model split point and its share of the available bandwidth.
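The flavor of this joint problem can be shown with a small brute-force sketch: pick a cut point per device, divide a shared uplink among the devices, and minimize the slowest device's total time (local compute + activation upload + server-side compute). All numbers, the proportional-bandwidth rule, and the search itself are illustrative assumptions, not the paper's formulation or solver:

```python
from itertools import product

# Hypothetical workload: per-layer compute cost (arbitrary units) and
# activation size (MB) if we cut AFTER that layer. Numbers are made up.
layer_cost = [4.0, 4.0, 2.0, 2.0]
act_size = [8.0, 4.0, 2.0, 1.0]
device_speed = [1.0, 2.0]   # heterogeneous device compute (units/s)
server_speed = 10.0         # server compute (units/s)
total_bw = 10.0             # shared uplink (MB/s)

def makespan(splits, bw):
    # Training round ends when the slowest device's pipeline finishes.
    times = []
    for d, s in enumerate(splits):
        local = sum(layer_cost[:s + 1]) / device_speed[d]
        upload = act_size[s] / bw[d]
        remote = sum(layer_cost[s + 1:]) / server_speed
        times.append(local + upload + remote)
    return max(times)

best = None
for splits in product(range(len(layer_cost)), repeat=len(device_speed)):
    # Heuristic bandwidth rule: share proportional to upload size,
    # which equalizes transfer times across devices.
    sizes = [act_size[s] for s in splits]
    bw = [total_bw * sz / sum(sizes) for sz in sizes]
    t = makespan(splits, bw)
    if best is None or t < best[0]:
        best = (t, splits, bw)

print(best)  # slower device gets an earlier cut point
```

Even this toy shows the coupling the paper exploits: the slow device is assigned an early cut (less local compute, bigger upload), the fast device a later one, and the bandwidth split must be chosen together with the cuts or the makespan degrades.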
Experimental Evaluation:
Experiments show EdgeSplit outperforms baselines, achieving up to a 5.5x training-speed improvement on large DNN models.
Stats
"Comprehensive tests conducted with a range of DNN models and datasets demonstrate that EdgeSplit not only facilitates the training of large models on resource-restricted edge devices but also surpasses existing baselines in performance."
"Our proposed EdgeSplit can achieve up to 5.5x training speed improvement."