The paper studies the problem of estimating the mean E[f(θ, X)] from a sequence of random elements f(θ, X1), ..., f(θ, Xn), where X1, ..., Xn are drawn i.i.d. from an unknown distribution and θ is a random parameter drawn from a data-dependent posterior distribution Pn. This problem is commonly approached through PAC-Bayes analysis, where a data-independent prior distribution P0 is chosen to capture the inductive bias of the learning problem.
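For context, the classical KL-based PAC-Bayes bound for this setting (stated here in one standard form from the literature, for f taking values in [0, 1]; constants and logarithmic factors vary between versions) reads: with probability at least 1 − δ over the sample, simultaneously for all posteriors P_n,

\[
\mathbb{E}_{\theta \sim P_n}\bigl[\mathbb{E}[f(\theta, X)]\bigr]
\;-\; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\theta \sim P_n}\bigl[f(\theta, X_i)\bigr]
\;\le\; \sqrt{\frac{\mathrm{KL}(P_n \,\|\, P_0) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}.
\]

The KL term is the complexity measure whose role the paper re-examines.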
The key contribution of the paper is to show that the standard choice of the Kullback-Leibler (KL) divergence as the complexity measure in PAC-Bayes bounds is not optimal. The authors derive a new high-probability PAC-Bayes bound based on a novel divergence they call the Zhang-Cutkosky-Paschalidis (ZCP) divergence, and show that the resulting bound is strictly tighter than its KL-based counterpart in certain cases.
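To place the two complexity measures on common ground, recall the standard definition of the KL divergence as the posterior average of the log density ratio (the exact definition of the ZCP divergence, which the paper builds from the same density ratio dPn/dP0, is given in the paper and not reproduced here):

\[
\mathrm{KL}(P_n \,\|\, P_0) \;=\; \mathbb{E}_{\theta \sim P_n}\!\left[\ln \frac{\mathrm{d}P_n}{\mathrm{d}P_0}(\theta)\right].
\]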
The proof of the new bound is inspired by recent advances in the regret analysis of betting (gambling) algorithms, whose guarantees can be converted into concentration inequalities. The authors also show how the new bound can be relaxed to recover several known PAC-Bayes inequalities, such as the PAC-Bayes empirical Bernstein inequality and the Bernoulli (small-kl) PAC-Bayes bound.
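As a rough illustration of the betting-to-concentration idea (a generic sketch of the technique, not the paper's exact construction): for i.i.d. X_1, ..., X_n in [0, 1] with mean μ, any predictable betting fractions λ_i (depending only on X_1, ..., X_{i-1}) make the wealth process

\[
W_t(\mu) \;=\; \prod_{i=1}^{t} \bigl(1 + \lambda_i (X_i - \mu)\bigr),
\qquad \lambda_i \in \Bigl[-\tfrac{1}{1-\mu}, \tfrac{1}{\mu}\Bigr],
\]

a nonnegative martingale with W_0(μ) = 1, so Ville's inequality gives P(∃ t ≤ n : W_t(μ) ≥ 1/δ) ≤ δ. A regret guarantee for the betting strategy lower-bounds ln W_n(μ) in terms of the best constant bet in hindsight; combining the two inequalities and solving for μ turns the regret bound into a high-probability confidence interval for the mean, i.e., a concentration inequality.

For reference, the Bernoulli (small-kl) PAC-Bayes bound mentioned above is usually stated, for f with values in [0, 1] and writing R(θ) = E[f(θ, X)] and R̂_n(θ) = (1/n) Σ_i f(θ, X_i) (notation introduced here), as: with probability at least 1 − δ, simultaneously for all P_n,

\[
\mathrm{kl}\Bigl(\mathbb{E}_{\theta \sim P_n}\bigl[\hat{R}_n(\theta)\bigr] \,\Big\|\, \mathbb{E}_{\theta \sim P_n}\bigl[R(\theta)\bigr]\Bigr)
\;\le\; \frac{\mathrm{KL}(P_n \,\|\, P_0) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\]

where kl(q ‖ p) = q ln(q/p) + (1 − q) ln((1 − q)/(1 − p)) is the KL divergence between Bernoulli distributions.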
The paper concludes by discussing the implications of the results, suggesting that there is much room for studying optimal rates of PAC-Bayes bounds and that the choice of the complexity measure is an important aspect that deserves further investigation.
Source: by Ilja Kuzbors... at arxiv.org, 04-05-2024, https://arxiv.org/pdf/2402.09201.pdf