The Segment Anything Model (SAM) can be effectively used as a data annotation tool to generate pseudo labels for training medical image segmentation models, achieving comparable performance to fully supervised models.
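The core of this pseudo-labeling idea is generic: masks produced by SAM stand in for expert annotations when computing the training loss of a downstream segmentation model. A minimal sketch of that loss step, using a soft Dice score in NumPy (the pseudo-label array is hypothetical here; in practice it would be SAM's mask output):

```python
import numpy as np

def dice_score(pred, pseudo_label, eps=1e-6):
    """Soft Dice between a predicted mask and a pseudo label.

    Both inputs are float arrays in [0, 1] with the same shape.
    1 - dice_score(...) would serve as the training loss.
    """
    inter = np.sum(pred * pseudo_label)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(pseudo_label) + eps)

# Hypothetical SAM-derived pseudo label: a square foreground region.
pseudo = np.zeros((8, 8))
pseudo[2:6, 2:6] = 1.0

# A perfect prediction and a fully wrong one, for comparison.
pred_good = pseudo.copy()
pred_bad = np.zeros((8, 8))
pred_bad[0:2, 0:2] = 1.0

print(dice_score(pred_good, pseudo))  # close to 1.0
print(dice_score(pred_bad, pseudo))   # close to 0.0
```

This is only the supervision signal, not SAM itself; any prompt-driven mask generator could supply `pseudo`.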
A novel Multi-axis External Weights (MEW) block that captures comprehensive global and local information in the frequency domain to enhance medical image segmentation performance.
The Segment Anything Model (SAM) exhibits promising performance in brain tumor segmentation, with box prompts and a combination of box and point prompts yielding the best results. However, SAM's performance is affected by the number of prompts, imaging modality, and tumor region.
RotCAtt-TransUNet++ is a novel deep neural network architecture that achieves superior performance in segmenting intricate cardiac structures, particularly coronary arteries and myocardium, by effectively capturing both inter-slice connections and intra-slice details.
The proposed Inter-Intra-slice Interpolation Network (I3Net) efficiently processes medical images by fully exploring information from high in-plane resolution and compensating for low through-plane resolution, outperforming state-of-the-art methods.
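The resolution mismatch I3Net targets is easy to picture with a classical baseline: medical volumes are often sharp in-plane (y, x) but coarsely sampled through-plane (z), so a naive fix interpolates only along z. A sketch of that baseline with SciPy (not I3Net's learned interpolation, just the problem setup):

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_through_plane(volume, factor):
    """Linearly interpolate along the low-resolution through-plane (z) axis,
    leaving the high in-plane (y, x) resolution untouched.

    volume: float array of shape (z, y, x).
    """
    return zoom(volume, (factor, 1.0, 1.0), order=1)

# A toy volume with 4 coarse slices of 2x2 pixels each.
vol = np.arange(4 * 2 * 2, dtype=float).reshape(4, 2, 2)
up = upsample_through_plane(vol, 2)
print(up.shape)  # doubled slice count, in-plane size unchanged
```

A learned method like I3Net aims to beat this by also exploiting in-plane detail when filling in missing slices.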
A novel deep learning-based method that utilizes a diffeomorphic transformer model to accurately estimate a deformation vector field for aligning abdominal MRI and CT images.
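Whatever network estimates it, a deformation vector field (DVF) is applied the same way: each output pixel samples the moving image at its grid position plus the local displacement. A minimal 2D warping sketch with SciPy (generic resampling, not the diffeomorphic transformer itself):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, dvf):
    """Warp a 2D image with a deformation vector field.

    dvf has shape (2, H, W): per-pixel displacements along y and x.
    The warped image samples the input at (identity grid + dvf).
    """
    H, W = image.shape
    grid_y, grid_x = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([grid_y + dvf[0], grid_x + dvf[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

img = np.random.rand(16, 16)
zero_dvf = np.zeros((2, 16, 16))
warped = warp_image(img, zero_dvf)  # identity DVF leaves the image unchanged
```

In MRI-to-CT registration the DVF would come from the network; the diffeomorphic constraint additionally keeps the mapping smooth and invertible, which this sketch does not enforce.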
A cross-scale attention mechanism and a surface supervision strategy are proposed to effectively segment fractured bones from CT scans, achieving superior performance compared to conventional methods.
A novel modified attention UNet architecture with enhanced multi-class panoptic segmentation capabilities enables accurate and efficient delineation of lumbar spine vertebrae from 3D MRI data.
An automated pipeline for extracting and aligning brain image-text pairs from medical literature to enable effective pre-training of vision-language models for medical applications.
PRISM is a promptable and robust interactive segmentation model that accepts various visual prompts, including points, boxes, and scribbles, to achieve precise segmentation of 3D medical images through iterative learning and confidence-based selection.
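The iterative loop behind such interactive models is simple to sketch: after each prediction, place the next point prompt where the model's probability disagrees most with its current binary mask. This is a generic stand-in for confidence-based selection (PRISM's actual criterion may differ):

```python
import numpy as np

def next_point_prompt(prob_map, current_mask):
    """Pick the voxel with the largest |probability - mask| gap,
    i.e. where the model is least confident in its own decision.
    Returns the (z, y, x) index to use as the next point prompt."""
    gap = np.abs(prob_map - current_mask.astype(float))
    return np.unravel_index(np.argmax(gap), gap.shape)

# Toy 3D volume: one voxel with high probability but excluded from the mask.
prob = np.zeros((4, 4, 4))
prob[1, 2, 3] = 0.9
mask = np.zeros((4, 4, 4))
print(next_point_prompt(prob, mask))  # → (1, 2, 3)
```

Each selected point is fed back as a prompt and the model re-predicts, so segmentation quality improves over iterations.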