Core Concepts
Probabilistic models can provide uncertainty estimates that are critical for real-world applications of semi-supervised learning, mitigating the effect of pseudo-label errors and improving deep model performance.
Abstract
This work explores the use of probabilistic models for semi-supervised learning (SSL), which aims to leverage both labeled and unlabeled data to train deep neural networks.
Key highlights:
Most state-of-the-art SSL methods follow a deterministic approach, while the exploration of their probabilistic counterparts remains limited. Probabilistic models can provide uncertainty estimates crucial for real-world applications.
Uncertainty estimates can help identify unreliable pseudo-labels when unlabeled samples are used for training, potentially improving deep model performance.
The author proposes three novel probabilistic frameworks for different SSL tasks:
Generative Bayesian Deep Learning (GBDL) architecture for semi-supervised medical image segmentation
NP-Match, a probabilistic approach for large-scale semi-supervised image classification
NP-SemiSeg, a new probabilistic model for semi-supervised semantic segmentation
These probabilistic models not only achieve competitive performance compared to state-of-the-art methods, but also provide reliable uncertainty estimates, enhancing the safety and robustness of AI systems.
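The pseudo-label filtering idea behind these frameworks can be illustrated with a minimal sketch. Assuming a probabilistic model that yields multiple stochastic predictions per unlabeled input (e.g., Monte Carlo samples), one can keep only pseudo-labels whose predictive mean is confident and whose predictive entropy is low. The function name and thresholds below are illustrative, not from the text:

```python
import numpy as np

def filter_pseudo_labels(prob_samples, conf_thresh=0.9, unc_thresh=0.3):
    """Select reliable pseudo-labels from stochastic predictive samples.

    prob_samples: array of shape (S, N, C) -- S stochastic forward passes
    over N unlabeled inputs with C classes. Thresholds are hypothetical.
    """
    mean_probs = prob_samples.mean(axis=0)     # (N, C) predictive mean
    pseudo_labels = mean_probs.argmax(axis=1)  # hard pseudo-labels
    confidence = mean_probs.max(axis=1)        # max mean class probability
    # Predictive entropy as a simple uncertainty estimate (in nats).
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    keep = (confidence >= conf_thresh) & (entropy <= unc_thresh)
    return pseudo_labels, keep

# Toy example: two confident inputs and one ambiguous one.
samples = np.array([
    [[0.97, 0.03], [0.55, 0.45], [0.05, 0.95]],
    [[0.95, 0.05], [0.45, 0.55], [0.03, 0.97]],
])
labels, keep = filter_pseudo_labels(samples)
# The ambiguous middle input is rejected; the other two are kept.
```

Only the kept pseudo-labels would then contribute to the unsupervised loss, so that uncertain predictions do not propagate errors into training.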