Assessing Generalization Capacity of Deep Learning Models Through Separability of Unseen Classes
Deep learning models can achieve high classification accuracy on seen classes, but their ability to generalize to unseen classes varies significantly across architectures. This work proposes a separability-based approach to quantify a model's generalization capacity by examining the latent embeddings of unseen classes.
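This excerpt does not specify how separability is measured on the unseen-class embeddings. As one plausible instantiation, the sketch below scores separability with a Fisher-style ratio of between-class to within-class scatter; the function name, the synthetic embeddings, and the frozen-encoder framing are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def separability_score(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Ratio of between-class to within-class scatter (Fisher-style).

    Higher values indicate that the unseen-class embeddings form more
    separable clusters, which serves here as a proxy for how well the
    model's latent space generalizes beyond the seen classes.
    """
    overall_mean = embeddings.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = embeddings[labels == c]          # all points of class c
        mean_c = cls.mean(axis=0)
        between += len(cls) * np.sum((mean_c - overall_mean) ** 2)
        within += np.sum((cls - mean_c) ** 2)
    return between / within

# Hypothetical example: embeddings of two unseen classes, e.g. taken from
# a frozen encoder. Well-separated Gaussians yield a large ratio.
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=0.5, size=(50, 16))
b = rng.normal(loc=3.0, scale=0.5, size=(50, 16))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)
print(separability_score(X, y))
```

Shuffling the labels destroys the cluster structure and drives the score toward zero, so the ratio does distinguish separable from non-separable embeddings.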