Key Concepts
The proposed Mixed Prototype Consistency Learning (MPCL) framework enhances semi-supervised medical image segmentation by leveraging an auxiliary network to generate mixed prototypes, which are then fused with labeled and unlabeled prototypes to produce high-quality global prototypes that optimize the distribution of the hidden embeddings used in consistency learning.
Abstract
The paper introduces a novel Mixed Prototype Consistency Learning (MPCL) framework for semi-supervised medical image segmentation. The key highlights are:
MPCL integrates a Mean Teacher structure and an auxiliary network to address the limitations of previous prototype-based methods, which suffer from a small number and low quality of prototypes.
MPCL introduces mixed prototypes generated by the auxiliary network, which contain additional semantic information. These mixed prototypes are fused with labeled and unlabeled prototypes to enhance their expressiveness and optimize the quality of global prototypes.
The fused global prototypes better represent the distribution of feature embeddings, improving their effectiveness in the consistency learning process.
Extensive experiments on the left atrium and type B aortic dissection datasets demonstrate MPCL's superior performance compared to state-of-the-art semi-supervised medical image segmentation approaches.
Ablation studies are conducted to analyze the impact of various components, including data augmentation techniques, prototype fusion steps, fusion coefficients, feature extraction layers, and consistency loss functions.
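The core mechanism described above, computing class prototypes from feature maps and linearly fusing them into a global prototype, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the masked-average-pooling step is the standard way prototypes are extracted in prototype-based segmentation, while the function names and the fusion coefficient `alpha` are hypothetical placeholders for the paper's actual fusion coefficients.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Extract a class prototype: mean of feature vectors inside the class mask.

    features: (C, H, W) feature map from the encoder.
    mask:     (H, W) binary mask for one class (label or pseudo-label).
    Returns a (C,) prototype vector.
    """
    weights = mask / (mask.sum() + 1e-8)          # normalize so weights sum to 1
    return (features * weights).sum(axis=(1, 2))  # weighted spatial average

def fuse_prototypes(proto_a, proto_b, alpha=0.5):
    """Linear fusion of two prototypes; alpha is a hypothetical coefficient."""
    return alpha * proto_a + (1.0 - alpha) * proto_b

# Toy example: fuse a labeled prototype with a mixed prototype
rng = np.random.default_rng(0)
feat_labeled = rng.normal(size=(4, 8, 8))   # features from a labeled image
feat_mixed = rng.normal(size=(4, 8, 8))     # features from the auxiliary network
mask = (rng.random((8, 8)) > 0.5).astype(float)

p_labeled = masked_average_pooling(feat_labeled, mask)
p_mixed = masked_average_pooling(feat_mixed, mask)
p_global = fuse_prototypes(p_labeled, p_mixed, alpha=0.7)
print(p_global.shape)  # (4,)
```

In the full framework this fused global prototype would then be compared against per-pixel embeddings (e.g. via similarity maps) to drive the consistency loss between teacher and student predictions.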
Statistics
The paper reports the following key metrics:
Dice coefficient (Dice)
Jaccard Index (Jac)
95% Hausdorff Distance (95HD)
Average Symmetric Surface Distance (ASD)
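The two overlap-based metrics above are straightforward to compute from binary masks; a minimal sketch is shown below. (95HD and ASD are surface-distance metrics and additionally require boundary extraction and distance transforms, so they are omitted here.)

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def jaccard_index(pred, gt, eps=1e-8):
    """Jaccard = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)

# Toy example on 2x3 binary masks
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice_coefficient(pred, gt), 3))  # 0.667
print(round(jaccard_index(pred, gt), 3))     # 0.5
```

Note the usual relationship Jaccard = Dice / (2 − Dice), which the toy values satisfy (0.5 = 0.667 / 1.333).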