Core Concepts
Adaptive network pruning in Denoising Diffusion Step-aware Models (DDSM) improves the efficiency of diffusion-based data generation by matching network size to each generative step.
Abstract
The paper introduces Denoising Diffusion Step-aware Models (DDSM), a framework that reduces the computational overhead of diffusion-based data generation. By adaptively varying the neural network size according to the importance of each generative step, DDSM achieves significant computational savings without compromising sample quality. The method is compatible with existing acceleration techniques and demonstrates superior efficiency across a range of datasets.
1. Introduction
Denoising diffusion probabilistic models (DDPMs) generate high-quality samples.
Efficiency bottleneck: the full network is evaluated at every denoising step.
DDSM applies adaptive, step-aware network pruning to remove this bottleneck.
2. Core Concept: Adaptive Network Pruning
Network size varies per step according to that step's importance to generation quality (see the sampling sketch after this list).
An evolutionary search determines the network size to use at each step (see the search sketch below).
Integrates seamlessly with existing acceleration techniques.
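A minimal sketch of the step-aware idea, assuming a small pool of denoisers of different widths and a precomputed per-step schedule. All names here (`TinyDenoiser`, `denoisers`, `size_schedule`) and the update rule are illustrative stand-ins, not the paper's code:

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for a U-Net; `width` mimics differently pruned sizes."""
    def __init__(self, width: int, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.SiLU(), nn.Linear(width, dim)
        )

    def forward(self, x, t):
        # Condition on the (normalized) timestep by concatenating it.
        t_feat = t.float().view(-1, 1) / 1000.0
        return self.net(torch.cat([x, t_feat], dim=-1))

# Pool of differently sized denoisers: index 0 is largest, 2 is smallest.
denoisers = [TinyDenoiser(width=w) for w in (256, 128, 64)]

# Hypothetical per-step assignment. In DDSM this schedule is found by
# evolutionary search; the hard-coded values here are purely illustrative.
T = 10
size_schedule = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]

@torch.no_grad()
def step_aware_sample(x):
    # Walk the reverse process, dispatching each step to the denoiser the
    # schedule picks instead of always running the full-size network.
    for step in reversed(range(T)):
        t = torch.full((x.shape[0],), step)
        eps = denoisers[size_schedule[step]](x, t)
        x = x - 0.1 * eps  # placeholder update; a real sampler uses the DDPM posterior
    return x

print(step_aware_sample(torch.randn(4, 32)).shape)  # torch.Size([4, 32])
```

The point is the dispatch: per-step compute scales with the scheduled network rather than always paying for the largest one.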
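The schedule itself is what the evolutionary search optimizes. Below is a mutation-only sketch of such a search; the `fitness` function is a toy stand-in (a real search would score generated samples, e.g., by FID, under a compute budget), and `COSTS` and the selection scheme are assumptions, not the paper's setup:

```python
import random

T = 10                    # number of diffusion steps
SIZES = [0, 1, 2]         # indices into the denoiser pool (large..small)
COSTS = [1.0, 0.5, 0.25]  # relative per-step compute of each size

def fitness(schedule):
    # Toy stand-in: pretend the first half of the steps benefit from the
    # largest network and the rest do not, then penalize total compute.
    quality = sum(1.0 for step, s in enumerate(schedule)
                  if (step < T // 2) == (s == 0))
    cost = sum(COSTS[s] for s in schedule)
    return quality - 0.5 * cost

def mutate(schedule, rate=0.2):
    # Resample each gene (per-step size index) with probability `rate`.
    return [random.choice(SIZES) if random.random() < rate else s
            for s in schedule]

def evolve(pop_size=32, generations=50):
    population = [[random.choice(SIZES) for _ in range(T)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]        # keep the fittest quarter
        children = [mutate(random.choice(parents))   # refill by mutation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # tends toward large nets on the first half, small on the rest
```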
3. Experimental Validation
Empirical evaluations on CIFAR-10, CelebA-HQ, LSUN-bedroom, AFHQ, and ImageNet.
Computational savings of 49% to 76% are achieved without compromising quality.
Dataset attributes influence how network sizes are distributed across the diffusion steps.
Stats
Empirical evaluations demonstrate computational savings of 49% for CIFAR-10, 61% for CelebA-HQ, 59% for LSUN-bedroom, 71% for AFHQ, and 76% for ImageNet.