Enhancing Retrieval-Augmented Generation with Conformal Prediction: A Framework for Quantifying Uncertainty in Large Language Model Responses
Retrieval-Augmented Generation (RAG) frameworks can mitigate hallucinations and enable knowledge updates in large language models (LLMs), but they cannot guarantee valid responses when retrieval fails to surface the necessary information. Quantifying uncertainty in the retrieval step is therefore crucial for ensuring the trustworthiness of RAG systems.
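To make the idea of conformal retrieval uncertainty concrete, the sketch below shows a minimal split conformal procedure over retrieval scores. It is an illustrative assumption, not the paper's method: the nonconformity score (1 minus cosine similarity between the query and the gold passage), the function names (`conformal_threshold`, `conformal_retrieval_set`), and the target miscoverage level `alpha` are all hypothetical choices made only for this example. Under exchangeability of the calibration and test queries, the calibrated threshold yields retrieval sets that contain the relevant passage with probability at least 1 - alpha.

```python
import numpy as np

def conformal_threshold(calib_scores, alpha=0.1):
    """Split conformal calibration (illustrative sketch).

    calib_scores: nonconformity scores on a held-out calibration set,
    e.g. 1 - cosine_similarity(query, gold_passage) for each calibration query.
    Returns a threshold q such that a new exchangeable query's gold-passage
    score falls at or below q with probability >= 1 - alpha.
    """
    n = len(calib_scores)
    # Finite-sample-corrected quantile level used in split conformal prediction.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(calib_scores, level, method="higher")

def conformal_retrieval_set(query_emb, passage_embs, q):
    """Return indices of all passages whose nonconformity score is <= q.

    The resulting prediction set covers the relevant passage at the target
    rate, assuming calibration and test queries are exchangeable.
    """
    sims = passage_embs @ query_emb / (
        np.linalg.norm(passage_embs, axis=1) * np.linalg.norm(query_emb)
    )
    scores = 1.0 - sims  # lower similarity -> higher nonconformity
    return np.flatnonzero(scores <= q)
```

In a RAG pipeline built this way, the size of the returned set itself signals retrieval uncertainty: an empty or very large set could trigger abstention, query reformulation, or a fallback to the LLM's parametric knowledge rather than generating from weakly supported context.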