This study presents the All in One (AiO) model, a two-stage approach that leverages a backbone network and Generative Pre-trained Transformer (GPT) models to effectively handle all Aspect-Based Sentiment Analysis (ABSA) sub-tasks, even with limited training data.
Aspect-based sentiment analysis (ABSA) faces new challenges in the era of generative language models, necessitating a re-evaluation of existing assessment methodologies so that evaluations remain accurate and representative.
This work proposes a simple, novel unsupervised approach that extracts aspect-oriented opinion words and assigns sentiment polarity without relying on labeled datasets.
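To make the unsupervised idea concrete, here is a minimal sketch of lexicon-based polarity assignment for opinion words near an aspect term. The tiny lexicon, the token window, and the function name are illustrative assumptions, not the method described in the study.

```python
# Illustrative sketch (not the study's actual method): assign polarity to
# opinion words found near an aspect term, using a tiny hand-built lexicon
# and a simple proximity-window heuristic.

POLARITY_LEXICON = {
    "great": 1, "tasty": 1, "friendly": 1,
    "slow": -1, "bland": -1, "rude": -1,
}

def aspect_polarity(sentence: str, aspect: str, window: int = 3):
    """Return (opinion_word, polarity) pairs within `window` tokens of the aspect."""
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    if aspect.lower() not in tokens:
        return []
    idx = tokens.index(aspect.lower())
    lo, hi = max(0, idx - window), min(len(tokens), idx + window + 1)
    return [(t, POLARITY_LEXICON[t])
            for t in tokens[lo:hi] if t in POLARITY_LEXICON]

print(aspect_polarity("The food was tasty but the service was slow.", "food"))
```

In practice, unsupervised ABSA systems replace the fixed window with syntactic dependencies and the toy lexicon with a full sentiment lexicon; the sketch only shows the label-free pipeline shape.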
This study evaluates three NLP approaches for aspect-based sentiment analysis on benchmark datasets: LLaMA 2 fine-tuning with Parameter-Efficient Fine-Tuning (PEFT), SETFIT for efficient few-shot fine-tuning of Sentence Transformers, and FAST LSA within the PyABSA framework.
Incorporating auxiliary sentences with predicted aspects significantly improves the performance of cross-domain aspect-based sentiment analysis models, enabling accurate sentiment classification even without domain-specific training data.
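The auxiliary-sentence idea recasts aspect-level sentiment classification as a sentence-pair task: the review is paired with a generated sentence built from a predicted aspect. A minimal sketch follows; the question template and function name are assumptions for illustration, not the exact formulation used in the study.

```python
# Illustrative sketch: build a (review, auxiliary sentence) pair from a
# predicted aspect, so a sentence-pair classifier can judge the sentiment.
# The template below is an assumption, not the study's exact template.

def build_auxiliary_pair(review: str, predicted_aspect: str):
    """Pair the review with an auxiliary question about the predicted aspect."""
    auxiliary = f"What is the sentiment of the {predicted_aspect}?"
    return (review, auxiliary)

pair = build_auxiliary_pair("The pasta was excellent.", "pasta")
print(pair)
```

Because the auxiliary sentence encodes the aspect explicitly, the same sentence-pair classifier can be reused across domains without domain-specific retraining, which is the cross-domain benefit the study reports.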