A simple combination of Low-Rank Adaptation (LoRA) and Gaussian Stochastic Weight Averaging (SWAG) enables approximate Bayesian inference in large language models, improving their generalization and calibration.
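To make the idea concrete, here is a minimal sketch of SWAG restricted to the LoRA parameters: running first and second moments are collected from SGD iterates during fine-tuning, and adapter weights are sampled from the resulting diagonal Gaussian at test time. The class and method names (`SwagLora`, `collect`, `sample`) are illustrative, not from the paper, and the full SWAG low-rank covariance term is omitted for brevity.

```python
import torch


class SwagLora:
    """Diagonal Gaussian SWAG over LoRA parameters only (illustrative sketch)."""

    def __init__(self, lora_params):
        # Initial values are placeholders; the first collect() overwrites them.
        self.n = 0
        self.mean = [p.detach().clone() for p in lora_params]
        self.sq_mean = [p.detach().clone() ** 2 for p in lora_params]

    def collect(self, lora_params):
        """Update running first and second moments after an optimizer step."""
        self.n += 1
        for m, s, p in zip(self.mean, self.sq_mean, lora_params):
            m.mul_(self.n - 1).add_(p.detach()).div_(self.n)
            s.mul_(self.n - 1).add_(p.detach() ** 2).div_(self.n)

    def sample(self):
        """Draw one set of LoRA weights from the diagonal SWAG posterior."""
        return [
            m + (s - m ** 2).clamp_min(1e-30).sqrt() * torch.randn_like(m)
            for m, s in zip(self.mean, self.sq_mean)
        ]
```

In use, one would call `collect()` every few fine-tuning steps near the end of training, then average the predictive softmax over several `sample()` draws to obtain calibrated predictions.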
Bayesian Low-Rank Adaptation by Backpropagation (BLoB) jointly estimates the mean and covariance of the variational distribution of large language model parameters during fine-tuning, improving generalization and uncertainty estimation.
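The sketch below illustrates the core mechanism in a single linear layer, under simplifying assumptions: the low-rank factor A is given a fully factorized Gaussian variational distribution whose mean and log-standard-deviation are trained jointly by backpropagation via the reparameterization trick, while B stays deterministic. This is not the exact BLoB parameterization (which also uses techniques such as flipout), and `BayesianLoRALinear` is a hypothetical name.

```python
import torch
import torch.nn as nn


class BayesianLoRALinear(nn.Module):
    """Frozen base linear layer plus a low-rank update with a variational A factor."""

    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)  # base weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        # Variational parameters of A: mean and log-standard-deviation.
        self.A_mu = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.A_log_sigma = nn.Parameter(torch.full((r, in_f), -5.0))
        self.B = nn.Parameter(torch.zeros(out_f, r))  # deterministic, as in LoRA

    def forward(self, x):
        # Reparameterization trick: gradients flow to both mean and scale.
        A = self.A_mu + self.A_log_sigma.exp() * torch.randn_like(self.A_mu)
        return self.base(x) + x @ A.t() @ self.B.t()

    def kl(self):
        """KL divergence of q(A) to a standard normal prior (ELBO regularizer)."""
        var = (2 * self.A_log_sigma).exp()
        return 0.5 * (var + self.A_mu ** 2 - 1 - 2 * self.A_log_sigma).sum()
```

Training would minimize the usual task loss plus a weighted `kl()` term, and at inference multiple forward passes (each resampling A) can be averaged to estimate predictive uncertainty.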