LLM-based classifiers effectively detect hallucination and coverage errors in retrieval-augmented generation for controversial topics.