Key Concepts
Revolutionizing facial makeup application with Data Amplify Learning (DAL) and the TinyBeauty model.
Abstract
This work introduces a novel approach to facial makeup based on Data Amplify Learning (DAL) and the TinyBeauty model. It addresses three challenges: the lack of accurate pixel-level supervision, the reliance on sophisticated facial prompts, and low-cost deployment on mobile devices. The core idea of DAL is to amplify a limited set of images into abundant training pairs, enabling accurate pixel-to-pixel learning. Within the Diffusion-based Data Amplifier, the Residual Diffusion Model preserves identity and high-fidelity facial texture, while the Fine-Grained Makeup Module provides precise control over makeup styles. TinyBeauty achieves state-of-the-art performance with minimal parameters and remarkable inference speed on mobile devices.
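The two-stage idea behind DAL can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical stand-in code, not the paper's implementation: the "amplifier" is a trivial brightness-jitter augmenter standing in for the Diffusion-based Data Amplifier, and the "tiny student" is a single per-pixel affine map trained by gradient descent standing in for TinyBeauty. The point is only the pipeline shape: amplify a handful of seed pairs into many aligned pairs, then fit a small model pixel-to-pixel.

```python
# Hedged sketch of Data Amplify Learning (DAL). All names and components
# are illustrative stand-ins, not the paper's actual API.
import random

def amplify(seed_pairs, n_variants=20, jitter=0.05, rng=None):
    """Stand-in for the Diffusion-based Data Amplifier: perturb each seed
    (source, makeup) pair with small brightness jitter to mimic variety."""
    rng = rng or random.Random(0)
    amplified = []
    for src, tgt in seed_pairs:
        for _ in range(n_variants):
            d = rng.uniform(-jitter, jitter)
            amplified.append((
                [min(1.0, max(0.0, p + d)) for p in src],
                [min(1.0, max(0.0, p + d)) for p in tgt],
            ))
    return amplified

def train_tiny_student(pairs, epochs=500, lr=0.5):
    """Toy 'tiny student': a per-pixel affine map y = a*x + b, fit by
    plain gradient descent on a squared pixel-to-pixel loss."""
    a, b = 1.0, 0.0
    for _ in range(epochs):
        ga = gb = 0.0
        n = 0
        for src, tgt in pairs:
            for x, y in zip(src, tgt):
                err = (a * x + b) - y
                ga += err * x
                gb += err
                n += 1
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

# 5 seed pairs (the paper reports competitive models from only 5 pairs);
# the "makeup" target here is a fixed brightening: y = 0.8*x + 0.1.
seeds = [([i / 10 for i in range(1, 6)],
          [0.8 * i / 10 + 0.1 for i in range(1, 6)]) for _ in range(5)]
data = amplify(seeds)
a, b = train_tiny_student(data)
```

Even with this toy amplifier, the student recovers the underlying makeup mapping (a close to 0.8, b close to 0.1) from only five seed pairs, which is the mechanism DAL relies on at scale with a real diffusion amplifier.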
Directory:
Introduction
Challenges in facial makeup applications on mobile devices.
Data Amplify Learning (DAL)
Proposal of a new learning paradigm.
Core idea of DAL using a Diffusion-based Data Amplifier.
Diffusion-based Data Amplifier (DDA)
Residual Diffusion Model for high-fidelity texture preservation.
Fine-Grained Makeup Module for precise makeup style management.
TinyBeauty Model
Network architecture designed for resource-constrained devices.
Eyeliner loss implementation for clear eyeliner details.
Experiments
Evaluation metrics, datasets, and implementation details.
Comparison & Results
Comparative analysis with competing methods.
Conclusion & Future Work
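The outline above mentions an eyeliner loss for preserving clear eyeliner details. The paper's exact formulation is not given here; the following is a hedged sketch of one common way such a loss can be structured, namely a region-weighted pixel loss where a (hypothetical) binary eyeliner mask up-weights errors on the thin eyeliner region.

```python
# Hedged sketch of a region-weighted pixel loss in the spirit of the
# eyeliner loss named in the outline. The mask and weight are assumptions,
# not the paper's formulation.
def weighted_pixel_loss(pred, target, mask, region_weight=5.0):
    """Weighted mean absolute error: pixels inside the mask count
    `region_weight` times more, so thin high-frequency details
    (e.g. eyeliner) dominate the gradient."""
    total = 0.0
    weight_sum = 0.0
    for p, t, m in zip(pred, target, mask):
        w = region_weight if m else 1.0
        total += w * abs(p - t)
        weight_sum += w
    return total / weight_sum

# Example: the masked (eyeliner) pixels carry the larger errors, so the
# weighted loss exceeds the plain mean absolute error of 0.3.
pred   = [0.5, 0.9, 0.9, 0.5]
target = [0.4, 0.4, 0.4, 0.4]
mask   = [0, 1, 1, 0]
loss = weighted_pixel_loss(pred, target, mask)
```

The design choice is simply that averaging over weights (rather than pixel count) keeps the loss on the same scale as an unweighted L1 loss, so it can be swapped in without retuning the learning rate.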
Statistics
Two pivotal innovations in the Diffusion-based Data Amplifier (DDA) facilitate training: the Residual Diffusion Model (RDM) and the Fine-Grained Makeup Module (FGMM).
Extensive experiments show that DAL can produce highly competitive makeup models using only 5 image pairs.
Quotes
"TinyBeauty necessitates merely 80K parameters to achieve a state-of-the-art performance without intricate face prompts."
"DAL greatly relaxes the optimization methods, allowing us to abandon face prompts and over-parameterization methods."