This study compares the performance of various deep neural methods for aspect-based sentiment analysis (ABSA) on two benchmark datasets, Restaurant-14 and Laptop-14.
The key highlights and insights are:
LLaMA 2, a second-generation open-source large language model, was fine-tuned with 4-bit quantization using Parameter-Efficient Fine-Tuning (PEFT) techniques such as QLoRA. However, it achieved only middling performance.
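For context, a QLoRA-style 4-bit fine-tuning setup of this kind is typically configured as below. This is a minimal sketch assuming the Hugging Face transformers/peft/bitsandbytes stack; the model checkpoint and LoRA hyperparameters (r, alpha, dropout) are illustrative assumptions, not values reported in the study.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization (the "Q" in QLoRA); compute runs in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model (7B variant assumed for illustration)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained,
# while the 4-bit base weights stay frozen
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
```

Freezing the quantized base model and training only the adapters is what makes fine-tuning a 7B-parameter model feasible on a single consumer GPU.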
The SetFit framework, which enables efficient, prompt-free few-shot fine-tuning of Sentence Transformers, was explored. Among the sentence-transformer combinations tested, the fine-tuned LaBSE models demonstrated the best overall performance.
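SetFit's few-shot efficiency comes from its first stage: a handful of labeled examples is expanded into sentence pairs (same-label pairs as positives, different-label pairs as negatives) for contrastive fine-tuning of the sentence transformer, after which a lightweight classification head is trained on the resulting embeddings. A minimal stdlib sketch of that pair-generation step (function name and example data are illustrative, not from the paper):

```python
from itertools import combinations

def build_contrastive_pairs(examples):
    """Expand few-shot (text, label) examples into contrastive pairs.

    Same-label pairs get similarity target 1.0, different-label pairs 0.0;
    the sentence transformer is then fine-tuned against these targets.
    """
    return [
        (t1, t2, 1.0 if l1 == l2 else 0.0)
        for (t1, l1), (t2, l2) in combinations(examples, 2)
    ]

few_shot = [
    ("The pasta was excellent", "positive"),
    ("Service was painfully slow", "negative"),
    ("Great ambience and food", "positive"),
]
pairs = build_contrastive_pairs(few_shot)
# 3 labeled examples yield C(3, 2) = 3 training pairs
```

Because n examples yield on the order of n^2 pairs, even a few labeled sentences per class produce enough contrastive signal to adapt the encoder, which is why no prompts or large-scale labels are needed.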
The FAST-LSA model, implemented on the PyABSA framework, achieved the highest accuracies in this study: 87.6% on Restaurant-14 and 82.6% on Laptop-14. It did not, however, surpass the reported accuracy of the LSA+DeBERTa-V3-Large model.
The study highlights the importance of innovative methodologies such as fine-tuning techniques, prompt-free few-shot learning, and modular frameworks in advancing natural language processing tasks like aspect-based sentiment analysis.
by Dineth Jayak... at arxiv.org, 10-03-2024
Source: https://arxiv.org/pdf/2407.02834.pdf