Core Concepts
The ConRAP framework aims to help non-legal stakeholders identify ambiguities in contractual clauses by generating clarification questions. It uses retrieval-augmented prompting to improve both ambiguity detection and the quality of the generated questions.
Abstract
ConRAP introduces a novel legal NLP task: generating clarification questions that disambiguate contractual text. The task poses challenges such as limited data availability, contract complexity, and dense legal language. By combining attribute prompting with retrieval-augmented QA, ConRAP detects ambiguities with an F2 score of 0.87, and human evaluators judged 70% of the generated clarification questions useful. The framework improves precision and recall in ambiguity detection across various LLMs, with ChatGPT outperforming other baselines; Vicuna performs comparably to ChatGPT in clarification question generation (CQGen) quality evaluation.
Key points:
Enterprises rely on commercial contracts for project-specific requirements.
Comprehending contracts is challenging due to legalese and ambiguity.
ConRAP introduces a novel legal NLP task for generating clarification questions.
The framework addresses issues like data availability and contract complexity.
ConRAP combines attribute prompting and retrieval-augmented QA for ambiguity detection.
Evaluation shows improved precision, recall, and F2 score with ChatGPT.
Human evaluators find 70% of generated clarification questions useful.
Vicuna demonstrates comparable results to ChatGPT in CQGen quality evaluation.
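The combination of attribute prompting with retrieval can be pictured as: retrieve clauses similar to the target clause, then prompt the LLM about one ambiguity attribute at a time with those clauses as context. The sketch below is purely illustrative; the function names, prompt wording, and word-overlap retriever are assumptions, not ConRAP's actual implementation.

```python
# Hypothetical sketch of retrieval-augmented attribute prompting for
# ambiguity detection. All names and prompt text are illustrative,
# not taken from the ConRAP paper.

def retrieve_similar(clause, corpus, k=2):
    """Rank corpus clauses by word overlap with the input clause (toy retriever)."""
    words = set(clause.lower().split())
    scored = sorted(
        corpus,
        key=lambda c: len(words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(clause, attribute, examples):
    """Attribute prompting: query one ambiguity attribute, grounded in
    retrieved example clauses."""
    context = "\n".join(f"- {e}" for e in examples)
    return (
        f"Similar clauses:\n{context}\n\n"
        f"Clause: {clause}\n"
        f"Is the attribute '{attribute}' ambiguous here? "
        f"If yes, generate a clarification question."
    )

corpus = [
    "Payment shall be made within a reasonable time.",
    "The supplier delivers goods to the agreed location.",
]
clause = "Fees are due within a reasonable period after invoicing."
prompt = build_prompt(clause, "payment deadline",
                      retrieve_similar(clause, corpus))
print(prompt)
```

In the actual framework the prompt would be sent to an LLM such as ChatGPT or Vicuna; here it is only printed.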
Stats
Experiments show that ConRAP with ChatGPT can detect ambiguities with an F2 score of 0.87.
70% of the generated clarification questions are deemed useful by human evaluators.
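The F2 score is the F-beta measure with beta = 2, which weights recall more heavily than precision; this suits ambiguity detection, where missing an ambiguous clause is costlier than a false alarm. A minimal check of the formula, with illustrative precision/recall values (the summary does not report them separately):

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score; beta=2 weights recall more heavily than precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative inputs only: one of many precision/recall pairs
# consistent with an F2 of about 0.87.
print(round(f_beta(0.80, 0.89), 2))  # → 0.87
```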
Quotes
"Contracts often contain ambiguously worded clauses to ensure comprehensive coverage."
"ConRAP significantly surpasses strong baselines, establishing its superior performance in ambiguity detection."