The researchers find that many LLMs' performance degrades significantly when they encounter unreasonable math problems, posing potential safety risks. To address this, they construct the Unreasonable Math Problem (UMP) benchmark to systematically assess models' ability to handle such problems.
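As a rough illustration of the idea (not the paper's actual data format), a UMP-style benchmark entry could pair a deliberately unreasonable problem with a note on why it is flawed, and a simple check of whether a model's answer flags that flaw. The field names and the `model_flags_flaw` helper below are assumptions made for this sketch.

```python
# Hypothetical sketch of a UMP-style benchmark entry and a crude evaluation check.
# The schema and the judge function are illustrative assumptions, not the paper's code.

from typing import TypedDict

class UMPItem(TypedDict):
    question: str  # the (deliberately unreasonable) math problem
    flaw: str      # short description of why the problem is unreasonable

ITEMS: list[UMPItem] = [
    {
        "question": "A rope is 5 meters long. After cutting off 8 meters, "
                    "how many meters remain?",
        "flaw": "You cannot cut off more rope than exists.",
    },
]

def model_flags_flaw(model_answer: str) -> bool:
    """Crude keyword check: did the model point out that the problem is unreasonable?"""
    keywords = ("unreasonable", "impossible", "does not make sense", "cannot")
    return any(k in model_answer.lower() for k in keywords)

# A model that blindly computes 5 - 8 = -3 would fail this check,
# while one that questions the premise would pass.
print(model_flags_flaw("The problem is impossible: you cannot cut 8 m from a 5 m rope."))
```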
The key highlights and insights are:
LLMs exhibit an inherent capability to detect unreasonable statements when directly confronted with them, but they often overlook the irrationality when solving math problems.
The researchers design a prompt template called Critical Calculation and Conclusion (CCC) to stimulate the model's self-evaluation and critical thinking. This method helps the model identify and correct unreasonable problems efficiently; a hedged sketch of such a prompt appears after this list.
Experiments show that the CCC prompt outperforms both direct querying and Chain-of-Thought prompting across various model sizes, demonstrating the effectiveness of the approach in enhancing models' reasoning capabilities.
The researchers categorize errors into two types: "explicit errors," which are identifiable through textual examination alone, and "implicit errors," which require computation to discover. This distinction highlights the complexity of automatically evaluating question reasonableness; an illustrative contrast is sketched after this list.
The study underscores the importance of ensuring the safety and reliability of LLMs, especially in practical scenarios such as intelligent education, where unreasonable responses could mislead children as they form their understanding of the world.
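The exact wording of the paper's CCC template is not reproduced here; the sketch below is an assumption that captures the stated idea (critically examine the problem, calculate, then conclude) and contrasts it with a plain direct query. The `direct_prompt` and `ccc_prompt` helpers are hypothetical names introduced for this example.

```python
# Illustrative approximation of a CCC-style ("Critical Calculation and Conclusion")
# prompt versus a direct query. Not the paper's exact template text.

def direct_prompt(problem: str) -> str:
    return f"Solve the following math problem:\n{problem}"

def ccc_prompt(problem: str) -> str:
    return (
        "You will be given a math problem.\n"
        "1. Critically examine the problem: check whether its conditions are "
        "reasonable and mutually consistent.\n"
        "2. If the problem is reasonable, perform the calculation step by step.\n"
        "3. Conclude: give the answer, or explain why the problem is "
        "unreasonable and how it could be corrected.\n\n"
        f"Problem: {problem}"
    )

problem = (
    "A store had 200 apples in stock, sold 120 in the morning and 150 in the "
    "afternoon. How many apples are left?"
)

print(ccc_prompt(problem))
# The resulting string would then be sent to an LLM through whatever client
# the evaluation harness uses; that call is omitted here.
```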
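The two example problems below are made up for illustration (they are not items from the UMP benchmark): the first contains an explicit error visible in the text itself, while the second looks plausible sentence by sentence and only breaks down once simple arithmetic is applied.

```python
# Illustrative (made-up) examples of the two error types described above.

# Explicit error: the contradiction is visible in the text itself.
explicit = "Lily bought -3 pencils. How much did she pay?"

# Implicit error: each statement looks fine alone, but arithmetic exposes a contradiction.
implicit = ("A class has 20 students. 15 play soccer, 12 play basketball, "
            "and 10 play neither sport. How many play both?")

# Detecting the implicit case requires computation (inclusion-exclusion):
total, soccer, basketball, neither = 20, 15, 12, 10
both = soccer + basketball - (total - neither)   # 15 + 12 - 10 = 17
print(both > basketball)  # True: 17 "both" players exceed the 12 basketball players,
                          # so the problem is internally inconsistent.
```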
Key insights distilled from the paper by Jingyuan Ma et al. (arXiv, March 29, 2024): https://arxiv.org/pdf/2403.19346.pdf