Core Concepts
A new GAN-based model, E2F-Net, inpaints occluded faces from the periocular region alone, achieving high-quality results with minimal training.
Abstract
The paper introduces E2F-Net, a model for face inpainting from the periocular region. It discusses the challenges of face inpainting, the proposed approach, and its benefits, and outlines the methodology, datasets, training details, and comparisons with other state-of-the-art methods.
Introduction
- Face inpainting is crucial for applications like face recognition in occluded scenarios.
- Challenges include preserving identity characteristics and producing realistic visuals.
Background and Related Work
- Overview of research on face inpainting, latent space embedding, and GAN inversion.
Limitations of Related Works and Our Contributions
- Discusses limitations of existing methods and introduces the novel E2F-Net approach.
Proposed Method
- Details the architecture of E2F-Net including encoders, mapping network, StyleGAN generator, discriminator, and optimization process.
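The pipeline above (encoders mapping the periocular region into a latent code that a pre-trained StyleGAN generator decodes into a full face) can be sketched structurally. This is a minimal illustration with made-up layer shapes, not the paper's implementation; the dimensions and the factor of 14 W+ styles (the usual count for a 256×256 StyleGAN) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
PERIOCULAR_DIM = 512   # flattened periocular-crop features
W_DIM = 512            # width of one StyleGAN w vector
NUM_STYLES = 14        # W+ has one w per generator layer (14 at 256x256)

def encode(periocular_feat, enc_w):
    """Encoder: map periocular features to an intermediate code."""
    return np.tanh(enc_w @ periocular_feat)

def map_to_wplus(code, map_w):
    """Mapping network: expand the code into a W+ latent, one w per layer."""
    return np.stack([m @ code for m in map_w])  # shape (NUM_STYLES, W_DIM)

# Randomly initialized toy weights stand in for trained parameters.
enc_w = rng.standard_normal((W_DIM, PERIOCULAR_DIM)) * 0.01
map_w = rng.standard_normal((NUM_STYLES, W_DIM, W_DIM)) * 0.01

feat = rng.standard_normal(PERIOCULAR_DIM)
wplus = map_to_wplus(encode(feat, enc_w), map_w)
print(wplus.shape)  # (14, 512)
```

In the actual model, this W+ code is fed to the frozen StyleGAN generator, and only the encoders and mapping network are optimized, which is why training cost stays low.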
Experiments
- Evaluation metrics include statistical measures (ℓ1 loss, PSNR, SSIM, FID, TV); the identity metric FNMR is used to assess identity preservation.
Datasets
- Description of the seven generated datasets used for training and evaluation.
Comparison Methods
- Comparison with four state-of-the-art methods (PIC, EC, LaFIn, and E2F-GAN), all trained on the E2F-CelebA-HQ dataset.
Evaluation Metrics
- Detailed explanation of the statistical metrics (ℓ1 loss, PSNR, SSIM) and the identity metric (FNMR).
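The two simplest of these metrics are easy to state concretely: ℓ1 loss is the mean absolute pixel difference, and PSNR is the log-scaled ratio of the peak signal to the mean squared error. A minimal sketch, assuming images are normalized to [0, 1]:

```python
import numpy as np

def l1_loss(x, y):
    """Mean absolute error between two images in [0, 1]; lower is better."""
    return np.mean(np.abs(x - y))

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((x - y) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(max_val ** 2 / mse)

# Toy 4x4 grayscale images: a uniform offset of 0.1 everywhere.
ref = np.zeros((4, 4))
out = np.full((4, 4), 0.1)

print(round(l1_loss(ref, out), 3))  # 0.1
print(round(psnr(ref, out), 1))     # 20.0
```

SSIM, FID, and FNMR need more machinery (local statistics, an Inception embedding, and a face-recognition matcher, respectively) and are usually taken from established libraries rather than reimplemented.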
Implementation Details
- Training setup: a StyleGAN generator pre-trained at 256×256 resolution, optimized with Adam on an NVIDIA GeForce RTX 3090 GPU.
Stats
The proposed method achieves high-quality results with a minimal training process, reducing computational complexity.