Accurate, high-abundance segmentation of pulmonary arteries and veins can be achieved from non-contrast CT scans using the proposed HiPaS framework, enabling contrast-agent-free diagnosis and revealing novel associations between pulmonary vessel abundance and both sex and age.
CopilotCAD introduces a collaborative framework that integrates large language models and medical image analysis tools to empower radiologists in the diagnostic process, enhancing report quality, efficiency, and trust in AI-supported systems.
This work proposes a novel framework that integrates self-supervised learning with neural ordinary differential equations (NODEs) to effectively model and predict disease progression, specifically focusing on diabetic retinopathy.
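The general mechanism behind such NODE-based progression models can be illustrated with a minimal sketch: the latent disease state z(t) evolves as dz/dt = f_theta(z), and a future state is predicted by integrating the learned dynamics forward in time. The MLP weights, dimensions, and fixed-step Euler integrator below are illustrative assumptions (the paper's actual architecture and solver are not specified here), and the network is untrained.

```python
import numpy as np

# Hypothetical sketch of a neural ODE: dz/dt = f_theta(z), where f_theta is a
# tiny one-hidden-layer MLP with random (untrained) weights. A real system
# would fit theta to longitudinal data and likely use an adaptive solver.
rng = np.random.default_rng(0)
D, H = 4, 16                                  # latent dim, hidden width (assumed)
W1, b1 = rng.normal(0, 0.1, (H, D)), np.zeros(H)
W2, b2 = rng.normal(0, 0.1, (D, H)), np.zeros(D)

def f_theta(z):
    """Learned dynamics dz/dt = f_theta(z)."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

def odeint_euler(z0, t0, t1, steps=100):
    """Integrate dz/dt = f_theta(z) from t0 to t1 with forward Euler."""
    z, dt = z0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * f_theta(z)
    return z

z0 = rng.normal(size=D)                       # latent state at the first visit
z_future = odeint_euler(z0, 0.0, 2.0)         # predicted state at a later time
print(z_future.shape)                         # (4,)
```

In practice the self-supervised component would supply z0 from retinal images, and theta would be trained so that integrated trajectories match observed follow-up visits.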
This study evaluates and compares three prominent deep learning models (Lung VAE, TransResUNet, and CE-Net) for lung segmentation on X-ray images, including their robustness to various image augmentations.
Triplet Training, a novel approach combining self-supervised learning, self-distillation, and fine-tuning, significantly outperforms traditional training strategies for differentiating Alzheimer's disease and frontotemporal dementia using limited target data.
SepVAE, a contrastive variational autoencoder, effectively separates the common factors of variation between a background (healthy) dataset and a target (pathological) dataset from the target-specific factors of variation.
This article develops V-BreathNet, an efficient and interpretable deep learning system that accurately classifies lung X-ray images into normal, COVID-19, and pneumonia categories, enabling early and cost-effective detection of respiratory diseases.
A convolutional neural network that integrates an adaptive wavelet transform module can effectively learn features in both the spatial and frequency domains, improving classification accuracy for ultrasound diagnosis of Graves' disease.
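The frequency-domain side of such a module can be sketched with a fixed one-level 2D Haar transform: it splits an image into a low-frequency sub-band (LL) and three high-frequency sub-bands (LH, HL, HH), which a CNN could consume as extra channels. This is only the fixed decomposition; the learnable "adaptive" part of the paper's module is an assumption not shown here.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform; img must have even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

img = np.random.default_rng(1).random((8, 8))
LL, LH, HL, HH = haar2d(img)
stacked = np.stack([LL, LH, HL, HH])          # (4, 4, 4): channels for a CNN
print(stacked.shape)
```

Note that the sub-bands are exactly invertible (e.g. LL + LH + HL + HH recovers the even-indexed pixels), so no information is lost before the convolutional layers.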
This paper enhances the understanding of 3D chest CT images by distilling knowledge from a pre-trained 2D chest X-ray expert model, leveraging language as a high-quality supervision signal to address the limited availability of paired CT-report data.
ProtoAL integrates an interpretable deep learning model based on prototypes into a deep active learning framework to address challenges of interpretability and data scarcity in medical imaging applications.