A three-stage framework, CFR, that leverages SAM for semi-supervised 3D medical image segmentation, achieving significant gains in annotation efficiency and segmentation performance.
This article proposes a novel uncertainty-aware, evidential-fusion-based learning framework for semi-supervised medical image segmentation. The framework integrates evidential predictions from mixed and original samples to reallocate the confidence degree and uncertainty measure of each voxel, and further designs a voxel-level asymptotic learning strategy that guides the model to focus on hard-to-learn features.
This article proposes an Evidential Tri-Branch Consistency learning framework (ETC-Net) comprising three branches, an evidential conservative branch, an evidential progressive branch, and an evidential fusion branch, which together leverage both labeled and unlabeled data for semi-supervised medical image segmentation. The framework integrates evidential learning, uncertainty guidance, and evidential fusion to address prediction disagreement and suppress label noise in cross-supervised training.
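The evidential frameworks above build on the standard subjective-logic / Dirichlet formulation of evidential deep learning, in which non-negative per-class evidence yields both a belief mass per class and an explicit uncertainty mass. A minimal sketch of those quantities, assuming that standard formulation rather than either paper's exact code:

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Map non-negative per-class evidence to belief masses and an
    uncertainty mass (subjective-logic / Dirichlet formulation).

    evidence: array of shape (..., K), K = number of classes.
    Returns (belief, uncertainty); belief sums with uncertainty to 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]
    alpha = evidence + 1.0                 # Dirichlet parameters
    S = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength
    belief = evidence / S                  # per-class belief mass
    uncertainty = K / S.squeeze(-1)        # leftover mass = uncertainty
    return belief, uncertainty

# A voxel with strong evidence for class 0 vs. one with no evidence:
b1, u1 = evidential_uncertainty([8.0, 0.0])  # belief [0.8, 0.0], u = 0.2
b2, u2 = evidential_uncertainty([0.0, 0.0])  # no evidence -> u = 1.0
```

Voxels with high `uncertainty` are exactly the ones such frameworks down-weight or re-examine during fusion.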
A novel semi-supervised medical image segmentation framework, DiHC-Net, that leverages diagonal hierarchical consistency learning between multiple diversified sub-models to effectively utilize scarce labeled data and abundant unlabeled data.
CrossMatch, a novel framework that integrates knowledge distillation with dual perturbation strategies (image-level and feature-level) to improve learning from both labeled and unlabeled data, significantly surpassing other state-of-the-art methods on standard benchmarks.
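The image-level and feature-level perturbations mentioned above are typically enforced through a consistency loss on unlabeled data: predictions under perturbation should match predictions on the clean input. A toy numpy sketch of that generic recipe (the two-layer model, noise magnitudes, and MSE consistency term are illustrative assumptions, not CrossMatch's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W, feat_noise=None):
    """Toy encoder + head: returns class probabilities.
    feat_noise optionally perturbs the hidden features."""
    h = np.maximum(x @ W["enc"], 0.0)   # encoder features (ReLU)
    if feat_noise is not None:
        h = h + feat_noise              # feature-level perturbation
    return softmax(h @ W["head"])

W = {"enc": rng.normal(size=(4, 8)), "head": rng.normal(size=(8, 3))}
x = rng.normal(size=(5, 4))             # 5 unlabeled samples

p_clean = forward(x, W)
# Image-level perturbation: additive noise on the input.
p_img = forward(x + 0.1 * rng.normal(size=x.shape), W)
# Feature-level perturbation: noise injected on the features.
p_feat = forward(x, W, feat_noise=0.1 * rng.normal(size=(5, 8)))

# Consistency loss: perturbed predictions should match clean ones.
consistency = np.mean((p_img - p_clean) ** 2) + np.mean((p_feat - p_clean) ** 2)
```

In practice this consistency term is added to the supervised loss on the labeled subset and minimized jointly.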
The proposed Adaptive Bidirectional Displacement (ABD) approach mitigates the constraints that mixed perturbations impose on consistency learning, thereby raising the performance ceiling of consistency learning for semi-supervised medical image segmentation.
A novel semi-supervised learning framework, termed Progressive Mean Teachers (PMT), is proposed to generate high-fidelity pseudo labels by learning robust and diverse features in the training process.
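Mean-teacher style frameworks such as PMT maintain a teacher model whose weights are an exponential moving average (EMA) of the student's, and use the teacher's outputs as pseudo labels. A minimal sketch of that core update (dictionary-of-scalars weights and the decay value are illustrative, not PMT's exact configuration):

```python
def ema_update(teacher, student, decay=0.99):
    """Exponential moving average of student weights into the teacher,
    the core update behind mean-teacher style frameworks."""
    return {k: decay * teacher[k] + (1.0 - decay) * student[k]
            for k in teacher}

# After each optimizer step on the student, refresh the teacher:
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, decay=0.9)
# teacher["w"] -> 0.9  (slowly tracks the student)
```

Because the teacher averages over many student checkpoints, its pseudo labels tend to be smoother and more robust than any single student snapshot.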
The paper proposes a novel Compound Multi-Attention Transformer (CMAformer) architecture that synergizes the strengths of ResUNet and Transformer models, and introduces a Lagrange Duality Consistency (LDC) Loss for semi-supervised learning to address the long-tail problem in medical image analysis.
Integrating manifold information into semi-supervised learning methods significantly improves the boundary accuracy of medical image segmentation, especially when limited labeled data is available.
This research paper introduces AIGCMatch, a novel semi-supervised learning framework that leverages attention-guided perturbations at both the image and feature levels to improve the accuracy and efficiency of medical image segmentation models, particularly in scenarios with limited labeled data.