LNPT: Label-free Network Pruning and Training Study
Key Concepts
Pruning before training enables efficient deployment of neural networks on smart devices.
Abstract
The study introduces LNPT, a learning framework for network pruning and training without labeled data. It addresses the inconsistency between weight norms and generalization during training, and leverages the concept of the learning gap to enhance generalization performance. Experiments demonstrate its superiority over supervised training methods.
I. Abstract:
Pruning before training allows neural networks to be deployed on smart devices.
Learning gap correlates with generalization performance.
II. Introduction:
Deep learning algorithms in smart devices face computational constraints.
Pruning enables network deployment on resource-constrained devices.
III. Method:
Notations defined for network parameters and feature maps.
Learning gap introduced to improve generalization performance.
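The summary does not spell out the exact formulation of the learning gap or the guidance mechanism. The sketch below is a minimal illustration under stated assumptions: the "learning gap" is approximated as the feature-map discrepancy between a pre-trained cloud (teacher) network and the pruned on-device (student) network, and pruning before training is realized as global magnitude pruning. The helper names (`magnitude_prune`, `learning_gap`, `train_step`) are hypothetical, not from the paper.

```python
# Hypothetical sketch of label-free pruning and training guided by a teacher's
# feature maps. The "learning gap" below is assumed to be the mean-squared
# discrepancy between teacher and student feature maps; this is an
# illustrative approximation, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def magnitude_prune(model: nn.Module, amount: float = 0.9) -> None:
    """Globally prune the smallest-magnitude weights before training."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params,
                              pruning_method=prune.L1Unstructured,
                              amount=amount)


def learning_gap(student_feats: torch.Tensor,
                 teacher_feats: torch.Tensor) -> torch.Tensor:
    """Label-free training signal: discrepancy between feature maps."""
    return nn.functional.mse_loss(student_feats, teacher_feats)


def train_step(student: nn.Module, teacher: nn.Module,
               images: torch.Tensor,
               optimizer: torch.optim.Optimizer) -> float:
    """One label-free update: the cloud teacher guides the pruned student."""
    teacher.eval()
    with torch.no_grad():
        t_feats = teacher(images)          # online guidance from the cloud network
    s_feats = student(images)              # pruned network running on the device
    loss = learning_gap(s_feats, t_feats)  # no ground-truth labels required
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, `magnitude_prune` would be applied to the student once at initialization, after which `train_step` is iterated over unlabeled data; the design choice of training against teacher feature maps rather than labels is what makes the procedure label-free.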
IV. Experiment:
LNPT evaluated against state-of-the-art methods on various datasets.
Superior performance demonstrated at high compression rates.
V. Conclusion:
Proposed learning gap provides insights into sparse learning theory.
LNPT enables adaptive pruning and training without labels.
Statistics
Pruning before training enables the deployment of neural networks on smart devices.
Experiments show that the learning gap aligns with variations in generalization performance.
Quotes
"Pruning before training enables mature networks on the cloud to provide online guidance for network pruning and learning on smart devices."
"Our results demonstrate the superiority of this approach over supervised training."