Core Concepts
EdgeOL optimizes inference accuracy, fine-tuning execution time, and energy efficiency through inter-tuning and intra-tuning optimizations.
Abstract:
Emerging applications increasingly deploy DNN models on edge devices.
Immediate online learning, i.e., fine-tuning as soon as new data arrives, keeps accuracy high but is energy inefficient.
EdgeOL reduces average fine-tuning execution time by 64% and energy consumption by 52%.
Introduction:
DNN models on edge devices need adaptiveness and energy efficiency.
Immediate online learning guarantees high accuracy but is less energy efficient.
Challenges and Opportunities:
Fine-tuning frequency affects execution time, energy consumption, and accuracy.
Layer freezing can reduce computation costs without compromising accuracy.
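As a general illustration of layer freezing (not EdgeOL's own implementation), the PyTorch sketch below disables gradients for the early layers of an assumed ResNet-18 so they incur no backward computation or weight updates; the choice of model and of which layers to freeze is a placeholder.

```python
import torch
import torchvision

# Minimal layer-freezing sketch (assumed model and freeze depth, not EdgeOL's code).
model = torchvision.models.resnet18(num_classes=10)

# Freeze the early feature-extraction layers; only later layers keep training.
frozen_prefixes = ("conv1", "bn1", "layer1", "layer2")  # assumed choice of layers
for name, param in model.named_parameters():
    if name.startswith(frozen_prefixes):
        param.requires_grad = False

# The optimizer only receives parameters that still require gradients,
# so frozen layers are skipped during backward passes and weight updates.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Only the unfrozen later layers then receive gradient updates, which is where the reduction in fine-tuning computation comes from.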
EdgeOL Design:
DAF dynamically adjusts the fine-tuning frequency based on training data availability, inference intensity, and scenario changes (see the trigger sketch after this list).
SimFreeze tracks each layer's self-representational similarity across fine-tuning rounds and freezes a layer once its representations stabilize (also sketched below).
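The DAF policy itself is not spelled out in these notes; the sketch below is a hypothetical fine-tuning trigger that combines the three signals named above (new-data availability, inference intensity, and a scenario-change proxy). The class name, thresholds, and confidence-based drift proxy are all assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FineTuneTrigger:
    """Hypothetical dynamic fine-tuning trigger (illustrative, not EdgeOL's DAF)."""
    min_new_samples: int = 64        # assumed: wait for enough new training data
    max_idle_inferences: int = 500   # assumed: cap on inferences between tuning rounds
    drift_threshold: float = 0.15    # assumed: confidence drop treated as a scenario change

    new_samples: int = 0
    inferences_since_tune: int = 0
    recent_confidences: List[float] = field(default_factory=list)

    def observe_inference(self, confidence: float) -> None:
        """Record one served inference and its prediction confidence."""
        self.inferences_since_tune += 1
        self.recent_confidences.append(confidence)

    def observe_new_sample(self) -> None:
        """Record arrival of one new training sample."""
        self.new_samples += 1

    def should_fine_tune(self) -> bool:
        """Fire when enough data has arrived and the model looks stale or drifted."""
        drifted = (
            len(self.recent_confidences) >= 100
            and sum(self.recent_confidences[-100:]) / 100 < 1.0 - self.drift_threshold
        )
        enough_data = self.new_samples >= self.min_new_samples
        too_stale = self.inferences_since_tune >= self.max_idle_inferences
        return enough_data and (drifted or too_stale)

    def reset(self) -> None:
        """Clear counters after a fine-tuning round completes."""
        self.new_samples = 0
        self.inferences_since_tune = 0
        self.recent_confidences.clear()
```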
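Similarly, a rough sketch of similarity-guided freezing: compare a layer's outputs on a fixed probe batch against the outputs saved from the previous fine-tuning round, and freeze the layer once they stop changing. The cosine-similarity metric and the 0.99 threshold here are assumptions, not necessarily what SimFreeze uses.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def representation_similarity(layer: torch.nn.Module,
                              probe_input: torch.Tensor,
                              previous_output: torch.Tensor):
    """Mean cosine similarity between the layer's current and previous outputs.

    `probe_input` is the input this layer sees for a fixed probe batch
    (e.g., captured with a forward hook); `previous_output` is the output
    recorded for the same batch at the previous fine-tuning round.
    """
    current_output = layer(probe_input)
    similarity = F.cosine_similarity(
        current_output.flatten(1), previous_output.flatten(1), dim=1
    ).mean().item()
    return similarity, current_output

def freeze_if_stable(layer: torch.nn.Module, similarity: float,
                     threshold: float = 0.99) -> bool:
    """Freeze the layer once its representation stops changing (assumed threshold)."""
    if similarity >= threshold:
        for param in layer.parameters():
            param.requires_grad = False
        return True
    return False
```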
Utilizing Unlabeled Data:
A semi-supervised learning technique is applied to make use of both labeled and unlabeled data.
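The notes do not name the specific semi-supervised technique. One common option is confidence-thresholded pseudo-labeling, sketched below; the 0.95 threshold, the loss weighting, and the function interface are assumptions rather than EdgeOL's actual scheme.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model: torch.nn.Module,
                      labeled_x: torch.Tensor, labeled_y: torch.Tensor,
                      unlabeled_x: torch.Tensor,
                      confidence_threshold: float = 0.95,   # assumed threshold
                      unlabeled_weight: float = 1.0) -> torch.Tensor:
    """Combine supervised loss with a pseudo-label loss on confident unlabeled samples.

    Generic pseudo-labeling sketch; not necessarily the scheme used by EdgeOL.
    """
    # Standard supervised term on the labeled batch.
    supervised = F.cross_entropy(model(labeled_x), labeled_y)

    # Pseudo-labels: the model's own confident predictions on unlabeled data.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        confidence, pseudo_y = probs.max(dim=1)
        mask = confidence >= confidence_threshold

    # Only confident samples contribute to the unsupervised term.
    if mask.any():
        unsupervised = F.cross_entropy(model(unlabeled_x[mask]), pseudo_y[mask])
    else:
        unsupervised = torch.zeros((), device=labeled_x.device)

    return supervised + unlabeled_weight * unsupervised
```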
Stats
Reduces average fine-tuning execution time by 64%.
Reduces energy consumption by 52%.