Core Concepts
This work addresses the projection bias problem in generalized zero-shot learning (GZSL) by introducing a parameterized Mahalanobis distance metric that improves classification performance on both seen and unseen classes.
Abstract
The content discusses the problem of projection bias in generalized zero-shot learning (GZSL) and proposes a novel approach to address it.
Key highlights:
GZSL aims to recognize samples from both seen and unseen classes while training only on seen-class samples. However, GZSL methods are prone to bias towards seen classes because the projection function is learned exclusively from seen-class data.
The authors propose learning a parameterized Mahalanobis distance metric to counteract the performance degradation caused by projection bias.
They extend the VAEGAN architecture with two branches to separately output the projection of samples from seen and unseen classes, enabling more robust distance learning.
A novel loss function is introduced to optimize the Mahalanobis distance representation and reduce projection bias.
Extensive experiments on four datasets show that the proposed approach outperforms state-of-the-art GZSL techniques with improvements of up to 3.5% on the harmonic mean metric.
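The central idea above, replacing a fixed distance with a learned Mahalanobis metric, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameterization M = LᵀL (which keeps the metric positive semi-definite) and the function and variable names are assumptions, since the summary does not give the paper's exact formulation.

```python
import numpy as np

def mahalanobis_distance(x, c, L):
    """Parameterized Mahalanobis distance d(x, c) = sqrt((x-c)^T M (x-c)).

    M = L^T L is positive semi-definite by construction, so the learnable
    parameter is the matrix L (hypothetical parameterization for illustration).
    """
    diff = x - c          # difference between projected sample and class prototype
    z = L @ diff          # apply the learned linear transform
    return float(np.sqrt(z @ z))

# Usage sketch: compare a projected sample against a class prototype.
rng = np.random.default_rng(0)
dim = 4
L = rng.standard_normal((dim, dim))   # in practice, L would be optimized by the loss
x = rng.standard_normal(dim)          # projected sample
c = rng.standard_normal(dim)          # class prototype
d = mahalanobis_distance(x, c, L)
```

Note that with L equal to the identity matrix, the metric reduces to the ordinary Euclidean distance; the learned L is what lets the model reshape the embedding space to compensate for projection bias.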
Stats
Beyond the up-to-3.5% harmonic-mean improvement noted above, the content does not contain further key metrics or figures supporting the author's key arguments.
Quotes
The content does not contain any striking quotes supporting the author's key arguments.