User Guide

Learn how to use the interactive tools in this kit.

Word Embeddings

This tool lets you explore semantic relationships in word vector spaces built from pre-trained GloVe embeddings. You can visualize how words cluster by meaning and perform vector arithmetic (e.g., King - Man + Woman ≈ Queen).

How to use:

  • Visualize: Enter words to see their 2D projection (using PCA).
  • Analogies: Use the "Analogy" tab to solve word analogies.
  • Bias: Use the "Bias" tab to see gender associations with different professions or adjectives.
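The analogy mechanics behind the "Analogy" tab can be sketched with plain NumPy and a toy vocabulary. The vectors below are fabricated for illustration (the tool itself uses pre-trained GloVe vectors, which have 50-300 dimensions); only the method — nearest neighbor by cosine similarity to a - b + c — is the real one:

```python
import numpy as np

# Fabricated 3-d "embeddings" for illustration, not actual GloVe values.
vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0 means orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve a - b + c ≈ ?, excluding the three input words from the answer."""
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))  # → queen
```

With real embeddings the result vector only lands *near* "queen", which is why the answer is the nearest neighbor rather than an exact match.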

Explainability Lab

Understand how AI models make decisions. This lab demonstrates interpretability techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations).

How to use:

  • Image Classification: Select an image to see which regions the model focuses on.
  • Tabular Data: Input data points (e.g., loan application) to see which features contributed most to the decision.
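The core idea behind LIME — perturb the input, weight the perturbations by proximity, and fit a local linear surrogate whose coefficients explain one prediction — can be sketched in a few lines. The `black_box` loan model and all numbers here are hypothetical stand-ins, not the lab's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "loan model": approval rises with income (x0)
# and falls with debt (x1).
def black_box(X):
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1])))

def lime_explain(x, predict, n_samples=500, width=1.0):
    """Fit a proximity-weighted linear surrogate around x (LIME's core idea)."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict(Z)
    # 2. Weight each perturbation by its closeness to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width**2)
    # 3. Weighted least squares; the slopes are the local feature contributions.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]  # drop the intercept

x = np.array([1.0, 0.5])            # income=1.0, debt=0.5 (standardized units)
contrib = lime_explain(x, black_box)
print(contrib)                       # positive for income, negative for debt
```

The signs of the coefficients tell you which features pushed this particular decision up or down, which is exactly what the Tabular Data view displays.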

Bias Auditor

Detect and measure bias in datasets. Analyze how different groups are represented and identify potential disparities in outcomes.

How to use:

  • Load Data: Select a sample dataset or upload your own CSV.
  • Select Attributes: Choose the protected attribute (e.g., Race, Gender) and the target variable.
  • Metrics: Review fairness metrics like Disparate Impact and Equal Opportunity.
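Both metrics reduce to simple rate comparisons, and can be computed by hand on a tiny fabricated dataset (a minimal sketch only; the auditor runs the same arithmetic over a full CSV, and the group labels and outcomes below are made up):

```python
# Fabricated audit records: (group, predicted outcome, actual outcome),
# where 1 is the favorable outcome.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 1), ("B", 1, 0),
]

def selection_rate(group):
    # Fraction of the group that received the favorable prediction.
    rows = [r for r in records if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

def true_positive_rate(group):
    # Among members who truly deserved the favorable outcome,
    # the fraction the model actually gave it to.
    rows = [r for r in records if r[0] == group and r[2] == 1]
    return sum(r[1] for r in rows) / len(rows)

# Disparate impact: ratio of selection rates (the "80% rule" flags values < 0.8).
di = selection_rate("B") / selection_rate("A")

# Equal opportunity: TPRs should match across groups; report the gap.
eo_gap = true_positive_rate("A") - true_positive_rate("B")

print(f"Disparate impact: {di:.2f}, Equal-opportunity gap: {eo_gap:.2f}")
# → Disparate impact: 0.50, Equal-opportunity gap: 0.67
```

Here group B is selected at half the rate of group A (disparate impact 0.50, well under the 0.8 threshold), so this toy model would be flagged.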

Adversarial Sandbox

Explore how AI models can be fooled. Generate adversarial examples—inputs designed to cause the model to make a mistake.

How to use:

  • Select Image: Choose an image of a digit or object.
  • Attack: Apply noise (FGSM attack) to the image.
  • Observe: See how the model's prediction changes even though the perturbed image looks nearly identical to a human.
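FGSM itself is a one-line update: step each input dimension by epsilon in the direction of the sign of the loss gradient. A minimal sketch on a hypothetical three-feature logistic classifier (the weights are made up, and epsilon is exaggerated so the flip shows up in three dimensions; on real images epsilon is tiny per pixel, which is why the change is invisible to humans):

```python
import numpy as np

# Hypothetical, already-"trained" logistic classifier; a real sandbox
# would attack a trained neural network instead.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    # P(class = 1) for input x.
    return 1 / (1 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: move x along the sign of the loss gradient."""
    # For logistic loss, the input gradient is (p - y_true) * w (analytic form).
    grad = (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([0.9, 0.2, 0.4])        # original input, true class 1
x_adv = fgsm(x, y_true=1, eps=0.6)   # uniform, bounded perturbation

print(predict(x), predict(x_adv))    # confidence collapses after the attack
```

Every feature moves by at most epsilon, yet the prediction crosses the decision boundary — the same effect the sandbox demonstrates on images.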