Core Concepts
The authors propose a Cloud-Edge Elastic Model Adaptation (CEMA) paradigm that improves model adaptation efficiency by sharing the adaptation workload between cloud servers and edge devices.
Abstract
The conventional deep learning paradigm trains models on powerful servers and then deploys them to edge devices. In real-world deployment, however, distribution shifts between training and test data can degrade performance. The proposed Cloud-Edge Elastic Model Adaptation (CEMA) paradigm addresses this by adapting edge models online while keeping the communication burden low: edge devices upload only the samples worth adapting on, and the cloud guides adaptation of the edge model via knowledge distillation from a foundation model. With this design, CEMA outperforms traditional adaptation methods.
Key points:
- Traditional deep learning deployment pipeline.
- Challenges of distribution shifts in real-world scenarios.
- Introduction of Cloud-Edge Elastic Model Adaptation (CEMA).
- Two sample-exclusion criteria for reducing the communication burden: dynamically excluding unreliable samples and excluding low-informative samples.
- Use of a foundation model to guide edge-model adaptation through knowledge distillation.
- Experimental results showing the effectiveness of CEMA on ImageNet-C and ImageNet-R datasets.
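The two exclusion criteria above can be illustrated with a minimal sketch. This is not the paper's implementation (CEMA's thresholds are dynamic); here the thresholds `e_min` and `e_max` are fixed and the function names are hypothetical. The idea: predictions with very high entropy are treated as unreliable, and predictions with very low entropy as low-informative, so neither kind of sample is uploaded to the cloud.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy (natural log) of each softmax prediction in the batch
    probs = np.asarray(probs, dtype=float)
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_for_upload(probs, e_min=0.2, e_max=1.0):
    """Boolean mask over the batch: keep samples whose predictive entropy
    lies in (e_min, e_max).

    - entropy >= e_max: unreliable (near-uniform prediction), excluded
    - entropy <= e_min: low-informative (already confident), excluded
    Thresholds are illustrative constants, not the paper's dynamic ones.
    """
    h = entropy(probs)
    return (h > e_min) & (h < e_max)

probs = [
    [0.98, 0.01, 0.01],  # confident -> low-informative, excluded
    [0.70, 0.20, 0.10],  # moderately uncertain -> uploaded
    [0.34, 0.33, 0.33],  # near-uniform -> unreliable, excluded
]
print(select_for_upload(probs))  # → [False  True False]
```

Only the middle sample would be sent to the cloud for adaptation, which is how the sample filtering reduces communication.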
Stats
Extensive experimental results on ImageNet-C and ImageNet-R verify the effectiveness of CEMA.
Quotes
"Our CEMA greatly reduces the communication burden."
"To leverage rich knowledge in the foundation model, we use it to guide the edge model via knowledge distillation for adaptation."