The paper proposes a framework for controllably generating high-fidelity and diverse diabetic retinopathy (DR) fundus images, thereby improving classifier performance in DR grading and detection. The key highlights are:
The authors modify the vanilla StyleGAN model into a conditional structure to generate retinal fundus images of desired DR grades.
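This summary does not spell out the conditioning mechanism. A common way to make StyleGAN conditional is to embed the class label and concatenate it with the latent code before the mapping network; the sketch below illustrates that approach (the layer sizes and the embedding scheme are assumptions, not the authors' reported architecture).

```python
import torch
import torch.nn as nn

class ConditionalMapping(nn.Module):
    """Minimal sketch of a label-conditioned StyleGAN mapping network.

    Assumption: the DR grade (0-4) is embedded and concatenated with the
    latent z before being mapped to the intermediate latent w, which is a
    common way to turn StyleGAN into a conditional generator.
    """
    def __init__(self, z_dim=512, w_dim=512, num_grades=5, embed_dim=512, depth=8):
        super().__init__()
        self.embed = nn.Embedding(num_grades, embed_dim)
        layers, in_dim = [], z_dim + embed_dim
        for _ in range(depth):
            layers += [nn.Linear(in_dim, w_dim), nn.LeakyReLU(0.2)]
            in_dim = w_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z, grade):
        c = self.embed(grade)                       # (B, embed_dim) grade embedding
        return self.net(torch.cat([z, c], dim=1))   # (B, w_dim) style code w

# Usage: one style code per requested DR grade
mapping = ConditionalMapping()
z = torch.randn(4, 512)
grades = torch.tensor([0, 1, 2, 4])   # desired DR grades
w = mapping(z, grades)
```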
To introduce greater diversity into the generated images, the authors use the SeFa algorithm to identify, in an unsupervised manner, semantically meaningful concepts encoded in the generator's latent space. These concepts are then leveraged to manipulate specific image features such as lesions, vessel structure, and other attributes.
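SeFa (closed-form semantic factorization) finds editing directions as the top eigenvectors of A^T A, where A is the weight of the first transformation applied to the latent code. The sketch below shows that computation; the generator-specific details (which weight matrix to use, the edit strength alpha) are assumptions.

```python
import numpy as np

def sefa_directions(weight, k=5):
    """Closed-form factorization (SeFa): the top-k semantic directions are the
    eigenvectors of A^T A with the largest eigenvalues, where A is the weight
    of the first transformation applied to the latent code."""
    A = weight                                 # shape (out_dim, latent_dim)
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1]          # sort by decreasing eigenvalue
    return eigvecs[:, order[:k]].T             # (k, latent_dim) unit directions

# Editing a latent code along a discovered direction
# (hypothetical weight/variable names; alpha controls edit strength):
# directions = sefa_directions(style_weight)   # weight from the trained generator
# w_edited = w + alpha * directions[0]         # manipulate e.g. lesions or vessels
```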
The synthesized images from both the conditional StyleGAN and SeFa-based manipulation are combined with real data to train a ResNet50 model for DR analysis.
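Training details are not given in this summary; the following is a minimal PyTorch sketch of the real-plus-synthetic training setup, assuming `real_ds` and `synthetic_ds` are datasets yielding (image, grade) pairs. The ImageNet-pretrained weights, batch size, and learning rate are placeholders, not values reported in the paper.

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import models

def train_dr_classifier(real_ds, synthetic_ds, num_grades=5, epochs=10, lr=1e-4):
    """Train ResNet50 on the union of real and synthetic fundus images.

    `real_ds` and `synthetic_ds` are any torch Datasets yielding
    (image_tensor, grade_label) pairs; hyperparameters are placeholders.
    """
    loader = DataLoader(ConcatDataset([real_ds, synthetic_ds]),
                        batch_size=32, shuffle=True)
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_grades)  # 5 DR grades
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optim.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optim.step()
    return model
```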
Extensive experiments on the APTOS 2019 dataset demonstrate the exceptional realism of the generated images and the superior performance of the classifier compared to recent studies. Incorporating synthetic images into ResNet50 training for DR grading yields 83.33% accuracy, an 87.64% quadratic weighted kappa score, 95.67% specificity, and 72.24% precision.
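For reference, these grading metrics can be computed as sketched below; the macro averaging of specificity and precision is an assumption, as the paper's exact aggregation scheme is not stated in this summary.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, precision_score)

def dr_grading_metrics(y_true, y_pred, num_grades=5):
    """Accuracy, quadratic weighted kappa, macro specificity and precision."""
    acc = accuracy_score(y_true, y_pred)
    kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
    cm = confusion_matrix(y_true, y_pred, labels=range(num_grades))
    tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)   # per-class true negatives
    fp = cm.sum(axis=0) - np.diag(cm)                               # per-class false positives
    specificity = np.mean(tn / (tn + fp))
    precision = precision_score(y_true, y_pred, average="macro", zero_division=0)
    return acc, kappa, specificity, precision
```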
The authors also propose a novel, effective SeFa-based data augmentation strategy, which significantly enhances the classifier's accuracy, specificity, precision and F1-score in DR detection to 98.09%, 99.44%, 99.45%, and 98.09%, respectively.
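The exact augmentation recipe is not described in this summary; one plausible reading is that style codes of a given grade are perturbed along the discovered SeFa directions before synthesis, producing extra labeled images. The sketch below assumes hypothetical `generator`, `mapping`, and `directions` interfaces rather than the authors' actual code.

```python
import random
import torch

def sefa_augment(generator, mapping, directions, grade, n_images=100,
                 alphas=(-3.0, -1.5, 1.5, 3.0), z_dim=512):
    """Hypothetical SeFa-based augmentation: perturb style codes of a given DR
    grade along discovered semantic directions and synthesize extra images.

    Assumed interfaces: mapping(z, grade) -> w, generator(w) -> image,
    directions -> tensor of shape (k, w_dim)."""
    images = []
    for _ in range(n_images):
        z = torch.randn(1, z_dim)
        w = mapping(z, torch.tensor([grade]))               # style code of the requested grade
        d = directions[random.randrange(len(directions))]   # random semantic direction
        alpha = random.choice(alphas)                        # random edit strength
        images.append(generator(w + alpha * d))              # edited synthetic image
    return images
```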