Local Interpretability

The ability to understand why an AI model made a specific prediction for a particular input instance.

In Plain Language

Understanding why the AI made one specific decision. Not how it works in general, but why it said "no" to this particular customer on this particular day.

Why This Matters

Local interpretability supports individual-level accountability. When a customer or regulator asks why a specific decision was made, your governance framework must be able to produce a clear, case-specific explanation, such as which factors drove a particular applicant's rejection.
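A minimal sketch of what a case-specific explanation can look like. For a linear model such as logistic regression, each feature's contribution to one prediction's log-odds is just coefficient times feature value, giving a per-feature breakdown for that single case. The dataset, feature names, and approval threshold here are illustrative assumptions, not a real credit model.

```python
# Hypothetical sketch: a local explanation for ONE prediction of a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit-style data (illustrative only): columns = [income, debt_ratio]
X = np.array([[50, 0.2], [30, 0.8], [80, 0.1], [25, 0.9],
              [60, 0.3], [20, 0.7], [90, 0.2], [35, 0.6]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# Why did the model say "no" to *this* applicant, on *this* input?
applicant = np.array([28, 0.85])

# For logistic regression, coef * value is each feature's contribution
# to the log-odds of approval for this specific instance.
contributions = model.coef_[0] * applicant

for name, c in zip(["income", "debt_ratio"], contributions):
    print(f"{name}: {c:+.3f}")
decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
print("decision:", decision)
```

For non-linear models the same question is typically answered with perturbation-based methods (e.g. LIME or SHAP), which fit a simple local surrogate around the single instance being explained; the governance point is the same: the explanation is about one decision, not the model overall.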