Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic
The authors propose a Logical Thoughts (LoT) prompting framework to improve zero-shot chain-of-thought reasoning in large language models by leveraging principles rooted in symbolic logic, in particular reductio ad absurdum: each reasoning step is checked post hoc against its negation, and the chain is revised whenever a step is found invalid.
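The verify-and-revise idea can be illustrated with a minimal sketch. Everything here is hypothetical: the `llm` stub, the prompt wording, and the `verify_chain` helper are stand-ins for the paper's actual prompts and model calls, not the authors' implementation.

```python
def llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real language-model call.
    # It "rejects" a step (prefers the negation) only when the prompt
    # contains an obviously false statement; a real model would judge
    # the step against the accumulated context.
    return "negation" if "Review step: 2 + 2 = 5" in prompt else "original"

def verify_chain(premise: str, steps: list[str]) -> list[str]:
    """Sketch of reductio-ad-absurdum-style verification: for each step,
    ask the model to weigh the step against its negation given the
    verified chain so far, and flag steps the model does not affirm."""
    verified: list[str] = []
    for step in steps:
        context = premise + " " + " ".join(verified)
        verdict = llm(
            f"Given: {context}\nReview step: {step}\n"
            f"Also consider its negation. Which holds?"
        )
        if verdict == "original":
            verified.append(step)
        else:
            # In the LoT framework a rejected step triggers revision and
            # regrowth of the chain; here we merely mark the rejection.
            verified.append(f"NOT({step})")
    return verified

chain = verify_chain("2 + 2 = 4.", ["2 + 2 = 4", "2 + 2 = 5"])
print(chain)
```

With the stub above, the valid step is kept and the invalid one is flagged; swapping the stub for a real model call turns the same loop into a post hoc chain checker.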