This survey provides a comprehensive overview of uncertainty quantification and handling in graph learning models. It first introduces the preliminaries of probabilistic graphical models (PGMs) and graph neural networks (GNNs), and then discusses the sources of uncertainty: aleatoric uncertainty arising from inherent randomness in the data, and epistemic uncertainty arising from model selection and training.
The survey then delves into the state-of-the-art methods for uncertainty representation. It covers Bayesian approaches, including direct inference techniques that utilize Dirichlet priors and posteriors, as well as Bayesian representation learning methods like variational autoencoders and Bayesian neural networks. These approaches model uncertainty in the graph structure, node features, and model parameters.
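To make the Dirichlet-based direct inference idea concrete, here is a minimal sketch in the style of evidential methods, where a classifier outputs per-class "evidence" and the Dirichlet concentration is evidence plus one. The evidence values below are illustrative placeholders, not outputs of any specific model from the survey.

```python
import numpy as np

# Hypothetical per-node evidence from a node classifier
# (rows: nodes, columns: classes); values are made up for illustration.
evidence = np.array([[9.0, 1.0, 0.0],   # node with strong evidence
                     [0.2, 0.3, 0.1]])  # node with weak evidence
alpha = evidence + 1.0                  # Dirichlet concentration parameters
alpha0 = alpha.sum(axis=1, keepdims=True)

# Expected class probabilities are the mean of the Dirichlet.
expected_prob = alpha / alpha0

# "Vacuity" (in the subjective-logic sense) measures lack of evidence:
# it approaches 1 when almost no evidence supports any class.
num_classes = alpha.shape[1]
vacuity = num_classes / alpha0.squeeze()

print(expected_prob)
print(vacuity)
```

The key property is that the second node, despite having a most-likely class, carries far higher vacuity, so its prediction can be flagged as unreliable.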
Next, the survey examines various uncertainty handling techniques. It discusses out-of-distribution detection methods based on distributionally robust optimization, conformal prediction for reliable uncertainty estimation, and calibration techniques to align model confidence with empirical accuracy.
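As an illustration of conformal prediction, the following is a sketch of the standard split-conformal recipe for classification. It uses synthetic softmax scores as stand-ins for a graph model's outputs; in a real graph setting the calibration set would be held-out nodes, and exchangeability assumptions require extra care.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: softmax-like scores for a 3-class problem.
n_cal = 500
cal_probs = rng.dirichlet(np.ones(3), size=n_cal)
cal_labels = np.array([rng.choice(3, p=p) for p in cal_probs])

# Nonconformity score: 1 minus the probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

alpha = 0.1  # target miscoverage rate (aim for 90% coverage)
# Finite-sample-corrected quantile of the calibration scores.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

# Prediction set for a new point: every class whose score is below the quantile.
test_prob = np.array([0.70, 0.25, 0.05])
pred_set = [c for c in range(3) if 1.0 - test_prob[c] <= q]
print(pred_set)
```

The resulting set is guaranteed (under exchangeability) to contain the true label with probability at least 1 − α; less confident inputs simply yield larger sets.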
The survey also covers evaluation metrics for assessing the quality of uncertainty estimates, such as negative log-likelihood, the Brier score, and expected calibration error. Finally, it concludes by highlighting open challenges and future research directions in this important area of graph learning.
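The three metrics above can be computed in a few lines. The sketch below uses a tiny binary-classification example with made-up probabilities and a two-bin calculation of expected calibration error (real evaluations typically use 10–15 bins).

```python
import numpy as np

# Illustrative predicted probabilities P(class 1) and true labels.
probs = np.array([0.9, 0.8, 0.3, 0.6])
labels = np.array([1, 1, 0, 0])

# Negative log-likelihood: mean -log probability of the true label.
p_true = np.where(labels == 1, probs, 1.0 - probs)
nll = -np.mean(np.log(p_true))

# Brier score: mean squared error between probability and outcome.
brier = np.mean((probs - labels) ** 2)

# Expected calibration error: bin predictions by confidence, then take the
# size-weighted gap between accuracy and mean confidence in each bin.
conf = np.maximum(probs, 1.0 - probs)      # confidence of the predicted class
pred = (probs >= 0.5).astype(int)
correct = (pred == labels).astype(float)
bins = np.digitize(conf, [0.75])           # bin 0: conf < 0.75; bin 1: conf >= 0.75
ece = sum((bins == b).mean()
          * abs(correct[bins == b].mean() - conf[bins == b].mean())
          for b in np.unique(bins))

print(nll, brier, ece)
```

Note the complementary roles: NLL and the Brier score are proper scoring rules that reward sharp, correct probabilities, while calibration error measures only how well confidence matches empirical accuracy.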