Enhancing Generalization and Uncertainty Quantification in Pre-trained Language Models through Jacobian and Hessian Regularization
Applying Jacobian and Hessian regularization to the intermediate representations of pre-trained language models can significantly improve both their generalization and their uncertainty quantification.
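The core idea can be sketched concretely. Below is a minimal NumPy toy, not the paper's implementation: a stand-in "intermediate representation" h(x) = tanh(W x) replaces a transformer layer's hidden states, and the two penalties are the squared Frobenius norm of the Jacobian dh/dx and a Hutchinson-style estimate of the squared Frobenius norm of the Hessian of a scalar surrogate loss. All names (`hidden`, `jacobian_penalty`, `hessian_penalty`) are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(x, W):
    """Toy intermediate representation h(x) = tanh(W x)."""
    return np.tanh(W @ x)

def jacobian(x, W):
    """Exact Jacobian dh/dx = diag(1 - h^2) W for the toy map above."""
    h = np.tanh(W @ x)
    return (1.0 - h**2)[:, None] * W

def jacobian_penalty(x, W):
    """Jacobian regularizer: squared Frobenius norm ||dh/dx||_F^2."""
    J = jacobian(x, W)
    return float(np.sum(J**2))

def loss(x, W):
    """Scalar surrogate loss whose Hessian we penalize."""
    h = hidden(x, W)
    return 0.5 * float(np.sum(h**2))

def grad_fd(x, W, eps=1e-5):
    """Central-difference gradient of the scalar loss w.r.t. x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (loss(x + e, W) - loss(x - e, W)) / (2.0 * eps)
    return g

def hessian_penalty(x, W, n_samples=16, eps=1e-4):
    """Hutchinson-style estimate of ||H||_F^2 = E_v[||H v||^2], v ~ N(0, I),
    with each Hessian-vector product H v taken by central differences
    of the gradient along the random direction v."""
    total = 0.0
    for _ in range(n_samples):
        v = rng.normal(size=x.size)
        hv = (grad_fd(x + eps * v, W) - grad_fd(x - eps * v, W)) / (2.0 * eps)
        total += float(np.sum(hv**2))
    return total / n_samples
```

In training, either penalty (weighted by a coefficient) would be added to the task loss; in a real model the finite differences would be replaced by automatic differentiation for efficiency.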