CFR is a three-stage framework that leverages SAM (the Segment Anything Model) for semi-supervised 3D medical image segmentation, achieving significant improvements in both annotation efficiency and segmentation performance.
This article proposes a novel uncertainty-aware, evidential-fusion-based learning framework for semi-supervised medical image segmentation. It integrates evidential predictions from mixed and original samples to reallocate each voxel's confidence and uncertainty, and it further designs a voxel-level asymptotic learning strategy that guides the model to focus on hard-to-learn features.
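The paper's exact fusion rule is not reproduced here, but evidential frameworks of this kind typically derive per-voxel belief masses and a vacuity-style uncertainty from a Dirichlet over class evidence. The sketch below is a minimal numpy illustration of those quantities; the function names and the certainty-weighted fusion are hypothetical, not the authors' formulation.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Per-voxel belief masses and uncertainty from network evidence
    (subjective-logic formulation common in evidential learning)."""
    evidence = np.maximum(logits, 0.0)     # non-negative evidence, e.g. via ReLU
    num_classes = evidence.shape[-1]
    alpha = evidence + 1.0                 # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)
    belief = evidence / strength           # per-class belief mass
    uncertainty = num_classes / strength[..., 0]  # vacuity: high when evidence is low
    return belief, uncertainty

def fuse(belief_a, u_a, belief_b, u_b):
    """Hypothetical confidence reallocation: weight each prediction source
    (e.g. mixed vs. original sample) by its certainty."""
    w_a = (1.0 - u_a)[..., None]
    w_b = (1.0 - u_b)[..., None]
    return (w_a * belief_a + w_b * belief_b) / np.maximum(w_a + w_b, 1e-8)
```

A voxel with no evidence gets uncertainty 1 (pure vacuity), so the fusion automatically downweights it relative to a confidently predicted voxel.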
This article proposes ETC-Net, an Evidential Tri-Branch Consistency learning framework whose three branches (an evidential conservative branch, an evidential progressive branch, and an evidential fusion branch) effectively leverage both labeled and unlabeled data for semi-supervised medical image segmentation. The framework combines evidential learning, uncertainty guidance, and evidential fusion to address prediction disagreement and label-noise suppression in cross-supervised training.
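ETC-Net's branch architecture is not detailed in this summary, but the cross-supervision idea, fusing two branches' probabilities by certainty and masking out likely-noisy voxels before they supervise the other branch, can be sketched as follows. All names and the 0.5 uncertainty threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cross_supervision_targets(p_cons, p_prog, u_cons, u_prog, u_thresh=0.5):
    """Hypothetical fusion-branch target: blend conservative and progressive
    branch probabilities weighted by certainty (1 - uncertainty), then keep
    only voxels where at least one branch is reasonably certain."""
    w_c = (1.0 - u_cons)[..., None]
    w_p = (1.0 - u_prog)[..., None]
    p_fused = (w_c * p_cons + w_p * p_prog) / np.maximum(w_c + w_p, 1e-8)
    pseudo = p_fused.argmax(axis=-1)              # fused pseudo-label per voxel
    keep = np.minimum(u_cons, u_prog) < u_thresh  # suppress likely-noisy labels
    return pseudo, keep
```

The `keep` mask is one simple way to realize label-noise suppression: voxels where both branches are highly uncertain contribute nothing to the cross-supervised loss.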
DiHC-Net is a novel semi-supervised medical image segmentation framework that applies diagonal hierarchical consistency learning across multiple diversified sub-models to effectively exploit scarce labeled data and abundant unlabeled data.
CrossMatch is a novel framework that integrates knowledge distillation with dual perturbation strategies, at the image level and the feature level, to improve learning from both labeled and unlabeled data; it significantly surpasses other state-of-the-art methods on standard benchmarks.
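CrossMatch's actual losses and perturbation operators are not given in this summary; the numpy sketch below only illustrates the general pattern of a distillation-style consistency objective under the two perturbation types named above. The specific choices (Gaussian noise for the image level, channel dropping for the feature level, an MSE consistency term) are common assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_perturb(x, noise_std=0.1):
    """Image-level perturbation: additive Gaussian noise (one common choice)."""
    return x + rng.normal(0.0, noise_std, size=x.shape)

def feature_perturb(f, drop_rate=0.3):
    """Feature-level perturbation: randomly zero out feature entries."""
    mask = rng.random(f.shape) > drop_rate
    return f * mask

def mse_consistency(p_student, p_teacher):
    """Distillation-style consistency: match student predictions on perturbed
    inputs/features to the teacher's predictions on clean inputs."""
    return float(np.mean((p_student - p_teacher) ** 2))

# Hypothetical usage, with `student`/`teacher` standing in for real networks:
#   loss = mse_consistency(student(image_perturb(x)), teacher(x))
```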
The proposed Adaptive Bidirectional Displacement (ABD) approach alleviates the limitations that mixed perturbations impose on consistency learning, thereby raising the performance ceiling of consistency-based semi-supervised medical image segmentation.
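ABD's adaptive region selection is paper-specific and not described here; the toy numpy sketch below only conveys the bidirectional-displacement idea in its simplest form, swapping a low-confidence patch of each image for a high-confidence patch of the other. The function name, patch grid, and min/max selection rule are all illustrative assumptions.

```python
import numpy as np

def bidirectional_displacement(img_a, img_b, conf_a, conf_b, patch=4):
    """Toy bidirectional displacement: replace the least-confident patch of
    each image with the most-confident patch of the other image."""
    def patch_scores(conf):
        h, w = conf.shape
        return {(i, j): conf[i:i + patch, j:j + patch].mean()
                for i in range(0, h, patch) for j in range(0, w, patch)}

    def swap_into(dst, src, s_dst, s_src):
        out = dst.copy()
        di, dj = min(s_dst, key=s_dst.get)  # least-confident region in dst
        si, sj = max(s_src, key=s_src.get)  # most-confident region in src
        out[di:di + patch, dj:dj + patch] = src[si:si + patch, sj:sj + patch]
        return out

    s_a, s_b = patch_scores(conf_a), patch_scores(conf_b)
    return swap_into(img_a, img_b, s_a, s_b), swap_into(img_b, img_a, s_b, s_a)
```

Each mixed image then carries reliable content from its partner in place of its weakest region, which is the intuition behind using displacement rather than arbitrary mixing as the perturbation.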