The paper proposes a method that combines the advantages of self-supervised learning and semi-supervised learning to address the challenge of limited labeled data in medical image classification. The key aspects of the proposed approach are:
Pre-training: The BYOL (Bootstrap Your Own Latent) self-supervised learning method is used to pre-train the model on a large pool of unlabeled medical images. This allows the model to learn useful representations and semantic structure from the unlabeled data without any annotations.
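To make the mechanism concrete, here is a minimal PyTorch sketch of BYOL-style pre-training. The ResNet-18 backbone, MLP head sizes, and EMA momentum are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal BYOL-style pre-training sketch (PyTorch). Backbone,
# head sizes, and momentum tau are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

def mlp(in_dim, hidden_dim=4096, out_dim=256):
    # Projector/predictor head used by BYOL.
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )

class BYOL(nn.Module):
    def __init__(self, feat_dim=512, tau=0.996):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # expose 512-d features
        self.online_encoder = nn.Sequential(backbone, mlp(feat_dim))
        self.predictor = mlp(256, out_dim=256)    # online branch only
        self.target_encoder = copy.deepcopy(self.online_encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False               # target updated by EMA only
        self.tau = tau

    @torch.no_grad()
    def update_target(self):
        # Exponential moving average of the online weights.
        for po, pt in zip(self.online_encoder.parameters(),
                          self.target_encoder.parameters()):
            pt.data = self.tau * pt.data + (1 - self.tau) * po.data

    def loss(self, v1, v2):
        # Symmetrized negative cosine similarity between the online
        # prediction of one view and the target projection of the other.
        def one_side(a, b):
            p = F.normalize(self.predictor(self.online_encoder(a)), dim=-1)
            with torch.no_grad():
                z = F.normalize(self.target_encoder(b), dim=-1)
            return 2 - 2 * (p * z).sum(dim=-1).mean()
        return one_side(v1, v2) + one_side(v2, v1)
```

Each training step feeds two augmented views of the same batch to loss(), backpropagates through the online branch only (the target projections are detached), and then calls update_target() to move the target weights toward the online weights.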
Fine-tuning: The pre-trained BYOL encoder is then fine-tuned on a much smaller labeled dataset to build a neural-network classifier. This classifier generates pseudo-labels for the unlabeled data, which are combined with the labeled examples to further optimize the model.
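As a hedged illustration of this stage, the sketch below attaches a linear head to the pre-trained backbone, fine-tunes it on the labeled set, and assigns pseudo-labels only to unlabeled images predicted above a confidence threshold. The helper names, the Adam optimizer, and the 0.95 threshold are assumptions; the paper may select pseudo-labels differently.

```python
# Hypothetical fine-tuning and pseudo-labeling stage. The 512-d
# feature size matches the ResNet-18 backbone from the sketch above;
# the confidence threshold is an assumed value.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_classifier(pretrained_backbone, num_classes):
    # Reuse the BYOL backbone; only the linear head is new.
    return nn.Sequential(pretrained_backbone, nn.Linear(512, num_classes))

def fine_tune(model, labeled_loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in labeled_loader:
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold=0.95):
    # Keep only confident predictions as pseudo-labels.
    model.eval()
    xs, ys = [], []
    for x in unlabeled_loader:
        probs = F.softmax(model(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        keep = conf >= threshold
        xs.append(x[keep])
        ys.append(pred[keep])
    return torch.cat(xs), torch.cat(ys)
```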
Iterative Training: The fine-tuned model then undergoes iterative training, alternating between fine-tuning on the combined labeled and pseudo-labeled data and regenerating pseudo-labels for the unlabeled data with the improved model. Each round refines the pseudo-labels, enhancing the model's generalization and accuracy on the target medical image recognition task.
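Putting the pieces together, one possible shape of this loop, reusing the hypothetical fine_tune and pseudo_label helpers from the previous sketch, is shown below; the number of rounds, epochs per round, and batch size are placeholder values.

```python
# Sketch of the iterative stage: alternate between regenerating
# pseudo-labels and fine-tuning on labeled + pseudo-labeled data.
# Round counts and batch size are assumptions, not the paper's values.
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def iterative_training(model, labeled_ds, unlabeled_loader, rounds=5):
    for _ in range(rounds):
        # 1) The current model assigns pseudo-labels to unlabeled images.
        px, py = pseudo_label(model, unlabeled_loader)
        pseudo_ds = TensorDataset(px, py)
        # 2) Fine-tune on the union of real and pseudo labels.
        combined = DataLoader(ConcatDataset([labeled_ds, pseudo_ds]),
                              batch_size=64, shuffle=True)
        model.train()
        model = fine_tune(model, combined, epochs=5)
    return model
```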
The experimental results on three different medical image datasets (OCT2017, COVID-19 X-ray, and Kvasir) demonstrate that the proposed approach outperforms various existing semi-supervised learning methods, achieving significantly higher classification accuracy. This highlights the effectiveness of integrating self-supervised BYOL into semi-supervised learning for medical image recognition, especially in scenarios with limited labeled data.
Source: Hao Feng, Yua..., arxiv.org, 04-17-2024. https://arxiv.org/pdf/2404.10405.pdf