Estimating Factual Knowledge in Large Language Models: Comparing In-Context Learning and Prompting-Based Approaches
Large language models can encode factual knowledge, but reliably estimating the extent of this latent knowledge is challenging. This work proposes a novel in-context learning-based approach to estimating latent knowledge, which is more reliable and performs better than existing prompting-based methods.
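To make the contrast concrete, the sketch below shows how a plain prompting probe and an in-context learning probe for the same fact might be constructed. This is an illustrative assumption, not the paper's actual method: the function names, the cloze template, and the demonstration facts are all hypothetical, and in practice the resulting prompts would be sent to an LLM whose completion is scored against the gold answer.

```python
def build_prompt_probe(subject: str, template: str) -> str:
    """Prompting-based probe: a single zero-shot cloze-style query."""
    return template.format(subject=subject)

def build_icl_probe(subject: str, template: str,
                    demonstrations: list[tuple[str, str]]) -> str:
    """In-context learning probe: prepend completed demonstrations of the
    same relation, so the model infers the task format before the query."""
    lines = [template.format(subject=s) + " " + obj
             for s, obj in demonstrations]
    lines.append(template.format(subject=subject))
    return "\n".join(lines)

# Hypothetical relation template and demonstration facts.
template = "The capital of {subject} is"
demos = [("France", "Paris."), ("Japan", "Tokyo.")]

zero_shot = build_prompt_probe("Kenya", template)
few_shot = build_icl_probe("Kenya", template, demos)
```

Here `zero_shot` is just the bare query, while `few_shot` prefixes it with solved examples of the same relation; the intuition behind ICL-based estimation is that the demonstrations pin down the expected answer format, so a wrong completion is more likely to reflect missing knowledge rather than a misunderstood prompt.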