The paper introduces a novel paradigm called "One-Prompt Segmentation" for universal medical image segmentation. The key idea is to train a foundation model that can adapt to unseen tasks by leveraging a single prompted sample during inference, without the need for retraining or fine-tuning.
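To make the paradigm concrete, here is a minimal sketch of what this inference interface could look like in PyTorch. The function name, model signature, and thresholding are hypothetical, inferred from the summary rather than taken from the paper's code.

```python
# Hypothetical one-prompt inference interface (a sketch, not the paper's API).
# A single prompted template adapts the frozen model to a new task at
# inference time -- no gradient updates, no fine-tuning.
import torch

@torch.no_grad()
def one_prompt_inference(model, query_image, template_image, template_prompt):
    """Segment `query_image` for the task defined by one prompted template.

    model           -- a trained One-Prompt Model (kept frozen at inference)
    query_image     -- tensor of shape (1, C, H, W), the image to segment
    template_image  -- tensor of shape (1, C, H, W), the prompted example
    template_prompt -- encoded prompt (click/box/doodle/mask) on the template
    """
    logits = model(query_image, template_image, template_prompt)
    return (logits.sigmoid() > 0.5).float()  # binary mask prediction
```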
The authors first assemble a large-scale training corpus from 78 open-source medical imaging datasets, covering a wide range of organs, tissues, and anatomies. They then train the One-Prompt Model, which consists of an image encoder and a sequence of One-Prompt Former modules as the decoder. The One-Prompt Former efficiently integrates the prompted template feature with the query feature at multiple scales.
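As a rough illustration of how such a module might fuse the two feature streams, here is a hedged PyTorch sketch of a single decoder block built from standard cross- and self-attention. The layer composition, names, and dimensions are assumptions for illustration, not the paper's actual One-Prompt Former.

```python
# A sketch of one decoder block, assuming template-to-query fusion is done
# with cross-attention at a single feature scale (illustrative, not the
# paper's exact design).
import torch
import torch.nn as nn

class OnePromptFormerBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, query_feat, template_feat):
        # query_feat:    (B, N_q, dim) query-image tokens at this scale
        # template_feat: (B, N_t, dim) prompted-template tokens at this scale
        q = self.norm1(query_feat)
        fused, _ = self.cross_attn(q, template_feat, template_feat)
        x = query_feat + fused                 # inject template context
        y = self.norm2(x)
        x = x + self.self_attn(y, y, y)[0]     # refine fused query tokens
        x = x + self.mlp(self.norm3(x))        # standard transformer MLP
        return x
```

Stacking blocks like this over encoder features at several scales would yield the multi-scale fusion described above.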
The paper also introduces four prompt types (Click, BBox, Doodle, and SegLab) to cover the diverse needs of medical image segmentation tasks; a team of clinicians annotated these prompts across the collected datasets.
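One simple way to think about the four prompt types is as different rasterizations onto the template image grid. The sketch below is an illustrative simplification (sparse prompts such as clicks are often encoded as coordinate tokens instead); it is not the paper's actual prompt encoding.

```python
# Illustrative encoding of the four prompt types into a common (H, W) map,
# assuming each prompt is rasterized onto the template image grid.
import numpy as np

def rasterize_prompt(kind: str, prompt, shape: tuple[int, int]) -> np.ndarray:
    """Return an (H, W) float map for one prompt on the template image.

    kind   -- "click" | "bbox" | "doodle" | "seglab"
    prompt -- (y, x) for click; (y0, x0, y1, x1) for bbox;
              a binary (H, W) array for doodle strokes or a SegLab mask
    """
    h, w = shape
    canvas = np.zeros((h, w), dtype=np.float32)
    if kind == "click":
        y, x = prompt
        canvas[y, x] = 1.0                     # single foreground point
    elif kind == "bbox":
        y0, x0, y1, x1 = prompt
        canvas[y0:y1, x0:x1] = 1.0             # filled box region
    elif kind in ("doodle", "seglab"):
        canvas = np.asarray(prompt, dtype=np.float32)  # already dense
    else:
        raise ValueError(f"unknown prompt type: {kind}")
    return canvas
```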
The authors extensively evaluate the One-Prompt Model on 14 previously unseen medical imaging tasks, demonstrating its superior zero-shot segmentation capabilities compared to a wide range of related methods, including few-shot and interactive segmentation models. The model exhibits robust performance and stability when provided with different prompted templates during inference.
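The stability claim suggests an evaluation loop like the following hedged sketch, which scores the same query under several different prompted templates using the Dice coefficient; `one_prompt_inference` is the hypothetical helper sketched earlier, and this loop is an assumption about the protocol, not the paper's evaluation code.

```python
# Template-robustness check (a sketch): segment one query with several
# different prompted templates and compare the resulting Dice scores.
import torch

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    inter = (pred * target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def template_robustness(model, query_image, gt_mask, templates):
    """Dice of the same query under each (template_image, prompt) pair."""
    return [dice(one_prompt_inference(model, query_image, t_img, t_prompt),
                 gt_mask)
            for t_img, t_prompt in templates]
    # Low variance across the returned scores indicates prompt stability.
```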
The paper highlights the significant practical benefits of the One-Prompt Segmentation approach, including its user-friendly interface, cost-effectiveness, and potential for building automatic pipelines in clinical settings.
Key insights extracted from the paper by Junde Wu, Jia... (arxiv.org, 04-12-2024): https://arxiv.org/pdf/2305.10300.pdf