Generalization Error Bounds for Supervised Learning Algorithms via Auxiliary Distributions
This work proposes a novel Auxiliary Distribution Method (ADM) for deriving new upper bounds on the expected generalization error of supervised learning algorithms. The bounds are expressed in terms of information-theoretic measures such as the α-Jensen-Shannon divergence and the α-Rényi divergence, which offer advantages over existing mutual-information-based bounds, for example remaining finite in settings where the mutual information can be unbounded.
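For concreteness, the two divergences named in the abstract can be computed for discrete distributions from their standard definitions: the α-Rényi divergence D_α(P‖Q) = (1/(α−1)) log Σ_x P(x)^α Q(x)^{1−α}, and the α-Jensen-Shannon divergence as the α-weighted sum of KL divergences to the mixture αP + (1−α)Q. The sketch below is illustrative only and is not taken from the paper; the function names and the toy distributions are my own.

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence KL(P || Q) for discrete distributions,
    # using natural logarithms; terms with p(x) = 0 contribute zero.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def alpha_js(p, q, alpha):
    # alpha-Jensen-Shannon divergence: alpha-weighted KL divergences
    # to the mixture m = alpha * p + (1 - alpha) * q.
    m = alpha * p + (1 - alpha) * q
    return alpha * kl(p, m) + (1 - alpha) * kl(q, m)

def alpha_renyi(p, q, alpha):
    # alpha-Renyi divergence: (1 / (alpha - 1)) * log sum p^alpha * q^(1 - alpha),
    # defined here for 0 < alpha < 1.
    return float(np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1))

# Toy example with two distributions on a three-element alphabet.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])
print(alpha_js(p, q, 0.5))
print(alpha_renyi(p, q, 0.5))
```

A useful sanity check on the definitions: both divergences vanish when P = Q, and at α = 1/2 the Rényi divergence is symmetric in its arguments.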