Large language models can generate self-contradictory content, a clear symptom of non-factuality; such contradictions can be effectively detected and mitigated through logical reasoning.
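The core idea can be illustrated with a minimal sketch: sample two answers to the same question and apply a reasoning check to decide whether they contradict each other. In practice the reasoner would itself be an LLM or an NLI model; the trivial negation check below, along with the function names and example sentences, is purely a hypothetical illustration of the detection step.

```python
def normalize(s: str) -> str:
    # Lowercase, trim, and drop the trailing period for comparison.
    return " ".join(s.lower().strip().rstrip(".").split())

def contradicts(a: str, b: str) -> bool:
    """Toy contradiction check: flag when one statement is the direct
    negation of the other (e.g., 'X is Y' vs. 'X is not Y').
    A real system would replace this with an LLM- or NLI-based reasoner."""
    a, b = normalize(a), normalize(b)
    return (a.replace(" is not ", " is ") == b.replace(" is not ", " is ")
            and a != b)

# Two answers sampled from the same model for the same question:
s1 = "The Eiffel Tower is in Paris."
s2 = "The Eiffel Tower is not in Paris."
print(contradicts(s1, s2))  # True: the pair is self-contradictory
```

Once a contradiction is flagged, mitigation can proceed by discarding or revising the non-factual statement, e.g., by re-prompting the model with the detected conflict.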