Core Concepts
The Information Bottleneck-based Gradient (IBG) framework enhances gradient-based explanation methods for Aspect-based Sentiment Analysis (ABSA), improving both model performance and interpretability.
Abstract
Introduction:
Neural models in NLP lack interpretability.
Gradient-based explanation methods are crucial for interpreting their predictions.
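Gradient-based explanation methods score each input feature by the gradient of the model's output with respect to it. A minimal gradient-times-input sketch on a toy linear classifier (all names and values here are illustrative assumptions, not from the paper):

```python
import numpy as np

def gradient_x_input(weights, x):
    """Saliency for a linear score w.x: the gradient w.r.t. x is w,
    so the gradient-times-input attribution is w * x elementwise."""
    return weights * x

# Toy example: three features contributing to one "sentiment" score.
w = np.array([0.5, -2.0, 0.1])
x = np.array([1.0, 1.0, 3.0])
print(gradient_x_input(w, x))  # [ 0.5 -2.   0.3]
```

For a real neural model the gradient is obtained by backpropagation rather than read off in closed form, but the attribution rule is the same.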
Preliminary Analysis:
Not all dimensions are equally significant in ABSA.
IBG framework proposed to learn intrinsic dimension.
Our Approach:
IBG explains the sentiment classifier by extracting aspect-aware opinion words.
The iBiL structure compresses word embeddings into their intrinsic dimension.
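The paper's iBiL structure is not specified in this summary; as a rough illustration of compressing embeddings into a lower intrinsic dimension, an information-bottleneck-style linear projection might look like the following (shapes, sizes, and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def bottleneck(embeddings, proj):
    """Project d-dimensional embeddings down to a k-dimensional
    'intrinsic' space (k << d), a bottleneck-style compression step.
    In practice `proj` would be learned jointly with the classifier."""
    return embeddings @ proj  # (n, d) @ (d, k) -> (n, k)

d, k = 768, 32                        # full vs. compressed (intrinsic) size
emb = rng.normal(size=(5, d))         # 5 token embeddings
proj = rng.normal(size=(d, k)) / np.sqrt(d)
z = bottleneck(emb, proj)
print(z.shape)  # (5, 32)
```

The compressed size k corresponds to the "compressed size" hyperparameter analyzed later in the paper.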
Experiment Setups:
Evaluation on four datasets using Accuracy, F1-score, AOPC, and Ph-Acc as metrics.
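AOPC (Area Over the Perturbation Curve) is commonly computed as the average drop in the predicted class probability after deleting the top-1 through top-K most relevant tokens. A minimal sketch of that standard definition (the paper's exact protocol, and the Ph-Acc metric, are not detailed in this summary):

```python
import numpy as np

def aopc(prob_full, probs_perturbed):
    """Mean drop in predicted-class probability over perturbation steps.
    probs_perturbed[k] is the probability after removing the k+1 most
    relevant tokens; a higher AOPC means more faithful explanations."""
    drops = prob_full - np.asarray(probs_perturbed)
    return drops.mean()

# Toy example: probability falls as more "important" tokens are removed.
print(round(aopc(0.9, [0.7, 0.5, 0.2]), 3))  # 0.433
```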
Results and Analyses:
IBG enhances performance and interpretability.
Ablation studies confirm the importance of IB and iBiL.
Further Analysis:
Influence of compressed size and α value on model performance.
Related Work:
Studies on intrinsic dimension, gradient-based explanations, and information bottleneck.
Conclusions and Further Work:
IBG framework significantly improves ABSA models.
Future research on applying IBG to large-scale language models.
Quotes
"Our comprehensive evaluations and tests provide substantial evidence of the effectiveness of the IBG framework."
"The key contributions of this paper are listed as follows."
"Gradient-based explanation methods are increasingly used to interpret neural models in natural language processing."
"Our model is model-agnostic, we integrate it with several state-of-the-art baselines."