An automated deep learning method accurately reconstructs a 3D model of the liver and estimates its volume from just three partial ultrasound scans, without requiring a full view of the organ.
Segment Anything Model 2 (SAM 2) demonstrated promising zero-shot performance in segmenting certain abdominal organs in CT scans, particularly larger organs with clear boundaries, but struggled with smaller and less defined structures.
A method is proposed for quantifying water T1 and fat fraction using upper-body magnetic resonance fingerprinting under free breathing, with correction for the effects of respiratory motion.
An ensemble of deep learning models, including UNet, ResNet, EfficientNet, and VGG, achieves superior performance in segmenting the left and right atria and their walls from late gadolinium-enhanced cardiac MRI data of atrial fibrillation patients.
Leveraging manifestations as semantic proxies, the ManiNeg framework enhances hard negative sampling in contrastive learning, leading to more informative representations for improved mammography classification.
This study proposes a workflow for automated segmentation of pathological lesions in whole-body PET-CT images, contributing to the AutoPET 2024 challenge. The approach involves image preprocessing, tracer classification, and lesion segmentation using deep learning models.
An interactive AI agent leveraging large language models and computer vision can effectively detect newborn auricular deformities and educate the public, enabling early intervention for better treatment outcomes.
See-Mode Technologies' AI-powered thyroid ultrasound analysis and reporting software has received FDA clearance, providing automated detection, characterization, and classification of thyroid nodules to enhance radiologist performance and streamline thyroid ultrasound reporting.
Hippocampal shape asymmetry in Alzheimer's disease patients shows localized differences; quantifying these differences can help characterize how hippocampal shape changes over the course of disease progression.
Automated and robust lesion segmentation in PET/CT imaging can be achieved by incorporating tracer-specific characteristics and anatomical knowledge into deep learning models.