Model Explainability Lab
Use SHAP and LIME to understand how models make predictions.
Input Text
Enter the text to classify; each panel below explains the model's prediction for it.
LIME Explanation
Each word is colored by its contribution to the prediction: green pushes the prediction toward positive, red toward negative.
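Under the hood, LIME perturbs the input by masking words, queries the model on each variant, and fits a local linear surrogate whose weights become the per-word contributions. A minimal sketch, assuming any classifier whose predict_proba accepts raw strings; the toy training data, class names, and example sentence are illustrative, not the lab's actual model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy sentiment classifier standing in for the lab's model.
train_texts = [
    "wonderful acting and a great story",
    "a dull plot and terrible pacing",
    "great fun, I loved it",
    "boring, I hated every minute",
]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

text = "The plot was dull, but the acting was wonderful."

# LIME masks words, re-queries the model, and fits a local linear model;
# its weights are the contributions rendered as green/red in the lab.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(text, model.predict_proba, num_features=8)

for word, weight in exp.as_list():
    print(f"{word:>10}  {weight:+.3f}")
```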
SHAP Feature Importance
Per-token SHAP values show how much each word shifts the predicted probability; averaging their magnitudes over many inputs gives a global importance ranking.
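A sketch of the SHAP side, reusing `model` and `text` from the LIME example above. The regex tokenizer and the class names passed as output_names are assumptions; shap's Text masker hides tokens and the explainer attributes the resulting probability shifts to each one:

```python
import shap

# The Text masker splits the string on the regex and masks tokens out;
# the explainer measures how each token's presence moves the output.
masker = shap.maskers.Text(r"\W+")
explainer = shap.Explainer(model.predict_proba, masker,
                           output_names=["negative", "positive"])
shap_values = explainer([text])

# Per-token attributions toward the positive class for this one input.
sv = shap_values[0, :, "positive"]
for token, value in zip(sv.data, sv.values):
    print(f"{token:>10}  {value:+.3f}")
```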
Aspect-Based Sentiment
Sentiment is scored separately for each aspect mentioned in the text (e.g. "plot" vs. "acting"), so a mixed review is not collapsed into a single label.
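One simple way to approximate aspect-level scores is to classify a small window of words around each aspect term with the same sentence-level model. This is a toy heuristic, not a standard ABSA architecture; it reuses `model` and `text` from the sketches above, and the aspect terms are illustrative:

```python
import re

def aspect_sentiment(text, aspects, window=4):
    """Score each aspect by classifying the words around its mention."""
    tokens = re.findall(r"\w+", text.lower())
    scores = {}
    for aspect in aspects:
        if aspect in tokens:
            i = tokens.index(aspect)
            snippet = " ".join(tokens[max(0, i - window): i + window + 1])
            scores[aspect] = model.predict_proba([snippet])[0][1]  # P(positive)
    return scores

print(aspect_sentiment(text, ["plot", "acting"]))
```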
Counterfactual Explanations
What minimal changes would flip the prediction?
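A common baseline is a greedy edit search: repeatedly apply the single edit that moves the prediction furthest toward the opposite class, stopping once the label flips. The sketch below uses word deletion as its only edit operator and reuses `model` and `text` from above; richer counterfactual methods also substitute words, but the loop structure is the same:

```python
def greedy_counterfactual(text, max_edits=3):
    """Delete one word at a time until the predicted class flips."""
    tokens = text.split()
    target = 1 - int(model.predict_proba([text])[0].argmax())
    for _ in range(max_edits):
        if len(tokens) <= 1:
            break
        # Every one-word deletion of the current text.
        candidates = [" ".join(tokens[:i] + tokens[i + 1:])
                      for i in range(len(tokens))]
        # Keep the deletion that moves probability most toward the target class.
        probs = model.predict_proba(candidates)[:, target]
        best = int(probs.argmax())
        tokens = tokens[:best] + tokens[best + 1:]
        if int(model.predict_proba([" ".join(tokens)])[0].argmax()) == target:
            return " ".join(tokens)
    return None  # no flip found within the edit budget

print(greedy_counterfactual(text))
```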