
Model Explainability Lab

Use SHAP and LIME to understand how models make predictions.
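
The explanations below operate on a trained classifier. As a minimal, runnable stand-in (the lab's own model is not shown here), the sketches in this section use a hypothetical scikit-learn TF-IDF + logistic regression sentiment classifier trained on a few made-up sentences:

```python
# Toy stand-in model for the sketches below (assumption: the lab's real model
# is not shown, so we train a tiny TF-IDF + logistic regression classifier
# on a handful of hypothetical example sentences).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the food was great and the service was friendly",
    "absolutely loved this place",
    "the food was cold and the service was slow",
    "terrible experience, would not come back",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

text = "the food was great but the service was slow"
print(model.predict_proba([text]))  # [[p_negative, p_positive]]
```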

Input Text

LIME Explanation

Word colors show each word's contribution to the prediction: green = positive, red = negative
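
A sketch of how word-level contributions like these can be computed with the lime package's LimeTextExplainer. The toy pipeline, training sentences, and class names are assumptions, not the lab's actual model:

```python
# Word-level LIME explanation for a text classifier (sketch; the pipeline and
# class names are assumptions, not the lab's actual model).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(
    ["loved the food", "great service", "cold food", "slow service"],
    [1, 1, 0, 0],  # 1 = positive, 0 = negative
)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the food was great but the service was slow",
    model.predict_proba,   # must accept a list of strings
    num_features=6,
)
# Positive weights push toward "positive" (green), negative toward "negative" (red).
for word, weight in explanation.as_list():
    print(f"{word:>10s}  {weight:+.3f}")
```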

SHAP Feature Importance
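
A sketch of computing token-level SHAP values for the same kind of toy pipeline using the shap package's Text masker. The pipeline, training data, and regex tokenizer are assumptions, and the exact Explanation shapes can vary between shap versions:

```python
# Token-level SHAP values for a text classifier (sketch; the pipeline,
# training data, and tokenization are assumptions).
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(
    ["loved the food", "great service", "cold food", "slow service"],
    [1, 1, 0, 0],  # 1 = positive, 0 = negative
)

# A Text masker hides subsets of tokens so SHAP can measure each token's effect.
masker = shap.maskers.Text(r"\W+")
explainer = shap.Explainer(model.predict_proba, masker)

shap_values = explainer(["the food was great but the service was slow"])

# Per-token contribution to the "positive" class (output index 1).
for token, value in zip(shap_values.data[0], shap_values.values[0][:, 1]):
    print(f"{token:>10s}  {value:+.3f}")
# shap.plots.bar(shap_values[0, :, 1]) would show the same values as a bar chart.
```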

Aspect-Based Sentiment
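
Aspect-based sentiment scores each aspect of the text (for example, food versus service) separately rather than giving one overall label. A rough sketch of one simple approach, splitting the text into clauses and scoring any clause that mentions an aspect keyword; the aspect terms, clause splitting, and toy classifier are all assumptions, not the lab's method:

```python
# Rough aspect-based sentiment sketch: score each clause that mentions an
# aspect term separately (aspect keywords, clause splitting, and the toy
# classifier are all assumptions).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(
    ["loved the food", "great service", "cold food", "slow service"],
    [1, 1, 0, 0],  # 1 = positive, 0 = negative
)

aspects = {"food": ["food", "meal"], "service": ["service", "staff"]}
text = "the food was great but the service was slow"

# Split into clauses on simple conjunctions/punctuation, then score any clause
# that mentions one of the aspect's keywords.
clauses = re.split(r"\bbut\b|\band\b|[,.;]", text)
for aspect, keywords in aspects.items():
    hits = [c.strip() for c in clauses if any(k in c for k in keywords)]
    for clause in hits:
        p_pos = model.predict_proba([clause])[0][1]
        print(f"{aspect:>8s}: {clause!r} -> positive probability {p_pos:.2f}")
```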

Counterfactual Explanations

What minimal changes would flip the prediction?
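
One simple way to find such edits is a greedy search that tries single-word deletions and swaps and keeps whichever edit pushes the prediction furthest toward the other class, stopping once the label flips. A minimal sketch, assuming a toy classifier and a small hand-written substitution table rather than the lab's actual search:

```python
# Greedy counterfactual search sketch: try single-word deletions/swaps and keep
# the edit that most reduces the original class probability, until the label
# flips. The toy classifier and swap candidates are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(
    ["loved the food", "great service", "cold food", "slow service"],
    [1, 1, 0, 0],  # 1 = positive, 0 = negative
)

SWAPS = {"great": ["terrible"], "slow": ["fast"], "cold": ["warm"], "loved": ["hated"]}

def counterfactual(text, predict_proba, max_edits=3):
    words = text.split()
    original = int(predict_proba([text])[0].argmax())
    for _ in range(max_edits):
        candidates = []
        for i, w in enumerate(words):
            # single-word deletion
            candidates.append(words[:i] + words[i + 1:])
            # single-word swaps from the (assumed) substitution table
            for alt in SWAPS.get(w, []):
                candidates.append(words[:i] + [alt] + words[i + 1:])
        # keep the edit that lowers the original class probability the most
        scored = [(predict_proba([" ".join(c)])[0][original], c) for c in candidates]
        best_prob, words = min(scored, key=lambda t: t[0])
        if best_prob < 0.5:  # prediction flipped
            return " ".join(words)
    return None

print(counterfactual("the food was great but the service was slow", model.predict_proba))
```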
