The paper proposes a promising approach to the challenge of quantifying predictive uncertainty for machine learning (ML) algorithms whose Bayesian counterparts are difficult to construct or implement.
The key idea rests on the hypothesis that commonly used ML algorithms perform well across a wide variety of tasks and may therefore be near Bayes-optimal with respect to an unknown task distribution π. The authors prove that the Bayesian posterior defined by π can then be approximately recovered by building a martingale posterior from the algorithm's predictions.
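As a minimal sketch of this construction (the notation here is illustrative, following the general martingale-posterior recipe rather than the paper's exact formulation): given observed data $z_{1:n}$ and the algorithm's one-step-ahead predictive $p_{\mathcal{A}}(\cdot \mid z_{1:i})$, one recursively imputes future observations and refits,
\[
z_{i+1} \sim p_{\mathcal{A}}(\cdot \mid z_{1:i}), \quad i = n, \dots, N-1, \qquad \theta^{(N)} = \theta(z_{1:N}),
\]
and the distribution of $\theta^{(N)}$ over the randomly imputed futures (as $N \to \infty$) is the martingale posterior induced by the algorithm.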
The authors first introduce the concept of near-Bayes optimal algorithms, whose average-case performance under π comes within a small margin of the best achievable by any algorithm. They then show that if such an algorithm additionally defines an approximate martingale and satisfies certain stability and efficiency conditions, the resulting martingale posterior provides a good approximation of the Bayesian posterior defined by π in Wasserstein distance.
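One plausible formalization of the optimality condition (the symbols below are ours, not necessarily the paper's): an algorithm $\mathcal{A}$ is $\epsilon_n$-near-Bayes optimal with respect to π if
\[
\mathbb{E}_{\theta \sim \pi}\, \mathbb{E}_{D_n \sim P_\theta}\!\left[ R_n(\mathcal{A}; D_n, \theta) \right]
\;\le\;
\inf_{\mathcal{A}'}\, \mathbb{E}_{\theta \sim \pi}\, \mathbb{E}_{D_n \sim P_\theta}\!\left[ R_n(\mathcal{A}'; D_n, \theta) \right] + \epsilon_n,
\]
where $R_n$ is an average-case (e.g. cumulative predictive) risk and $\epsilon_n \to 0$. Since the exact Bayes rule under π attains the infimum, the predictive distributions of such an algorithm must closely track the posterior predictives under π, which is what allows the martingale posterior built from them to approximate the Bayesian posterior.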
The authors further propose a practical uncertainty quantification method, called MP-inspired uncertainty, that can be applied to general ML algorithms. Experiments on a variety of non-NN and NN algorithms demonstrate the efficacy of their method, which outperforms standard ensemble baselines on tasks such as hyperparameter learning for Gaussian processes, classification with boosting trees and stacking, and interventional density estimation with diffusion models.
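The following Python sketch illustrates one way such an MP-inspired ensemble could be implemented for a refittable learner; the function names, the least-squares stand-in "algorithm", and the Gaussian predictive noise are placeholders of ours, not the authors' implementation:

```python
import numpy as np

def fit(X, y):
    """Stand-in ML 'algorithm': least-squares regression (any refittable learner works)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def mp_inspired_ensemble(X, y, X_future, n_members=20, noise_scale=0.1, seed=None):
    """Hypothetical sketch of an MP-inspired ensemble (not the authors' code).

    Each member imputes pseudo-responses at unseen inputs using the model
    fitted on the observed data plus predictive noise, then refits the
    algorithm on the observed + imputed data; the spread of the refitted
    parameters across members serves as the uncertainty estimate.
    """
    rng = np.random.default_rng(seed)
    base_coef = fit(X, y)                         # fit once on the observed data
    members = []
    for _ in range(n_members):
        # Impute future responses from an (assumed Gaussian) predictive of the fitted model.
        y_future = X_future @ base_coef + noise_scale * rng.standard_normal(len(X_future))
        X_aug = np.vstack([X, X_future])
        y_aug = np.concatenate([y, y_future])
        members.append(fit(X_aug, y_aug))         # refit on the completed sample
    return np.stack(members)                      # one parameter draw per member

# Toy usage: uncertainty over regression coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(30)
X_future = rng.standard_normal((200, 2))
draws = mp_inspired_ensemble(X, y, X_future, seed=1)
print("mean:", draws.mean(axis=0), "spread:", draws.std(axis=0))
```

The distinguishing design choice, per the summary above, is that ensemble variability comes from data imputed by the algorithm's own predictive rather than from resampling the observed data, which is what separates this from a standard bootstrap or deep-ensemble baseline.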