NLP
Attention Mechanisms in Sentiment Analysis
Analysis of attention mechanism variations in recurrent neural networks for sentiment prediction.
Course: Quantitative Methods and Models

Objectives
- Evaluate the effect of amplifying or attenuating the trained attention weights after training.

Conclusions
- Inverting the attention weights post-training worsens network performance considerably.
- Slightly amplifying the attention weights post-training can yield small gains in generalization.
- Strongly amplifying the attention weights worsens network performance.
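The post-training manipulations above can be illustrated with a minimal NumPy sketch. This is not the project's actual code: the function names are hypothetical, and it models "amplifying" attention as sharpening the softmax with a temperature below 1, and "inverting" as negating the attention scores before the softmax.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden, scores, temperature=1.0, invert=False):
    """Pool RNN hidden states with (optionally modified) attention weights.

    hidden:      (timesteps, dim) array of RNN outputs.
    scores:      (timesteps,) raw attention scores from the trained model.
    temperature: < 1 sharpens (amplifies) the weights, > 1 flattens them.
    invert:      negate the scores, flipping which timesteps get attention.
    """
    s = -scores if invert else scores
    weights = softmax(s / temperature)      # modified attention distribution
    context = weights @ hidden              # weighted sum of hidden states
    return context, weights
```

For example, with `temperature=0.5` the peak weight grows (attention is amplified toward the already-favored timestep), while `invert=True` shifts the peak weight to the timestep the trained model attended to least.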

Technologies
- TensorFlow
- Keras
- NLTK
- NumPy
- Matplotlib
- Seaborn
- Scikit-learn
- Pandas